Sample records for FPGAs (field programmable gate arrays)

  1. Defense Industrial Base Assessment: U.S. Integrated Circuit Design and Fabrication Capability

    DTIC Science & Technology

    2009-05-01

    in the U.S. for the period 2003-2006, with projections to 2011. The resulting draft OTE survey was field tested for accuracy and usability with a...custom application specific integrated circuits (ASICs) to field programmable gate arrays (FPGAs). Companies of all sizes can manufacture these IC...able to design one-time Electronically Programmable Gate Arrays (EPGAs) while nine are able to design Field Programmable Gate Arrays (FPGAs). Eight

  2. Radiation testing campaign results for understanding the suitability of FPGAs in detector electronics

    DOE PAGES

    Citterio, M.; Camplani, A.; Cannon, M.; ...

    2015-11-19

    SRAM based Field Programmable Gate Arrays (FPGAs) have been rarely used in High Energy Physics (HEP) due to their sensitivity to radiation. The last generation of commercial FPGAs based on 28 nm feature size and on Silicon On Insulator (SOI) technologies are more tolerant to radiation to the level that their use in front-end electronics is now feasible. FPGAs provide re-programmability, high-speed computation and fast data transmission through the embedded serial transceivers. They could replace custom application specific integrated circuits in front-end electronics in locations with a moderate radiation field. Finally, the use of an FPGA in HEP experiments is only limited by our ability to mitigate single event effects induced by the high energy hadrons present in the radiation field.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Citterio, M.; Camplani, A.; Cannon, M.

    SRAM based Field Programmable Gate Arrays (FPGAs) have been rarely used in High Energy Physics (HEP) due to their sensitivity to radiation. The last generation of commercial FPGAs based on 28 nm feature size and on Silicon On Insulator (SOI) technologies are more tolerant to radiation to the level that their use in front-end electronics is now feasible. FPGAs provide re-programmability, high-speed computation and fast data transmission through the embedded serial transceivers. They could replace custom application specific integrated circuits in front-end electronics in locations with a moderate radiation field. Finally, the use of an FPGA in HEP experiments is only limited by our ability to mitigate single event effects induced by the high energy hadrons present in the radiation field.

  4. Implementing a Microcontroller Watchdog with a Field-Programmable Gate Array (FPGA)

    NASA Technical Reports Server (NTRS)

    Straka, Bartholomew

    2013-01-01

    Reliability is crucial to safety. Redundancy of important system components greatly enhances reliability and hence safety. Field-Programmable Gate Arrays (FPGAs) are useful for monitoring systems and handling the logic necessary to keep them running with minimal interruption when individual components fail. A complete microcontroller watchdog with logic for failure handling can be implemented in a hardware description language (HDL). HDL-based designs are vendor-independent and can be used on many FPGAs with low overhead.
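    A minimal, illustrative sketch of the kind of HDL watchdog this abstract describes is given below. It is not the paper's design; the module name, counter width, and timeout value are assumptions chosen only to show the vendor-independent Verilog idiom: a free-running counter that is cleared by a "kick" strobe and asserts a reset request when the strobe stops.

    ```verilog
    // Illustrative sketch only (not the paper's design): a microcontroller
    // watchdog in plain synthesizable Verilog. The monitored processor must
    // strobe `kick` before the counter reaches TIMEOUT; otherwise `mcu_reset`
    // is asserted so failure-handling logic can take over.
    module watchdog #(
        parameter [23:0] TIMEOUT = 24'd10_000_000   // timeout in clock cycles (assumed value)
    ) (
        input  wire clk,        // system clock
        input  wire rst_n,      // active-low synchronous reset
        input  wire kick,       // "I'm alive" strobe from the microcontroller
        output reg  mcu_reset   // asserted when the microcontroller stops kicking
    );
        reg [23:0] count;
        reg        kick_d;
        wire       kick_edge = kick & ~kick_d;   // rising-edge detect on the kick line

        always @(posedge clk) begin
            kick_d <= kick;
            if (!rst_n) begin
                count     <= 24'd0;
                mcu_reset <= 1'b0;
            end else if (kick_edge) begin
                count     <= 24'd0;              // healthy: restart the timeout window
                mcu_reset <= 1'b0;
            end else if (count >= TIMEOUT) begin
                mcu_reset <= 1'b1;               // failure handling: request MCU reset
            end else begin
                count <= count + 24'd1;
            end
        end
    endmodule
    ```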

  5. Field Programmable Gate Array for Implementation of Redundant Advanced Digital Feedback Control

    NASA Technical Reports Server (NTRS)

    King, K. D.

    2003-01-01

    The goal of this effort was to develop a digital motor controller using field programmable gate arrays (FPGAs). This is a more rugged approach than a conventional microprocessor-based digital controller. FPGAs typically have higher radiation (rad) tolerance than both the microprocessor and memory required for a conventional digital controller. Furthermore, FPGAs can typically operate at higher speeds. (While speed is usually not an issue for motor controllers, it can be for other system controllers.) Other than motor power, only a 3.3-V digital power supply was used in the controller; no analog bias supplies were used. Since most of the circuit was implemented in the FPGA, no additional parts were needed other than the power transistors to drive the motor. The benefits that FPGAs provide over conventional designs, namely lower power and fewer parts, allow for smaller packaging and reduced weight and cost.
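    As a concrete illustration of how little external circuitry such a design needs, the sketch below shows a generic PWM stage of the sort an FPGA motor controller can use to drive external power transistors directly. It is a hedged example, not the controller from this work; the resolution, polarity, and module name are assumptions.

    ```verilog
    // Generic PWM generator sketch (assumed parameters, not the paper's design).
    // The FPGA output drives a gate driver / power transistor directly; no
    // analog bias supplies are needed, matching the all-digital approach above.
    module pwm_gen #(
        parameter integer WIDTH = 10            // duty-cycle resolution in bits (assumption)
    ) (
        input  wire             clk,
        input  wire             rst_n,
        input  wire [WIDTH-1:0] duty,           // commanded duty cycle, 0..2^WIDTH-1
        output reg              pwm_out         // to the power-transistor gate driver
    );
        reg [WIDTH-1:0] cnt;
        always @(posedge clk) begin
            if (!rst_n) begin
                cnt     <= {WIDTH{1'b0}};
                pwm_out <= 1'b0;
            end else begin
                cnt     <= cnt + 1'b1;           // free-running period counter
                pwm_out <= (cnt < duty);         // high for `duty` counts per period
            end
        end
    endmodule
    ```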

  6. Qualification Strategies of Field Programmable Gate Arrays (FPGAs) for Space Application

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas; Schone, Harald

    2005-01-01

    This viewgraph document reviews the issue of using Field Programmable Gate Arrays (FPGAs) in space applications and some of the strategies for qualifying the FPGA. Qualification and risk management of such complex systems require new approaches. The paper presents a matrix approach to qualification that: complements historical specifications; highlights the importance of device physics as a cornerstone of qualification; provides levels of risk management that expressly document trade-offs; and stresses the role of the FPGA vendor as a team member in the development of modern spacecraft.

  7. Radiation effects in reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Quinn, Heather

    2017-04-01

    Field-programmable gate arrays (FPGAs) are co-processing hardware used in image and signal processing. FPGAs are programmed with custom implementations of an algorithm. These algorithms are highly parallel hardware designs that are faster than software implementations. This flexibility and speed have made FPGAs attractive for many space programs that need in situ, high-speed signal processing for data categorization and data compression. Most commercial FPGAs are affected by the space radiation environment, though. Problems with total ionizing dose (TID) have restricted the use of flash-based FPGAs. Static random access memory based FPGAs must be mitigated to suppress errors from single-event upsets. This paper provides a review of radiation effects issues in reconfigurable FPGAs and discusses methods for mitigating these problems. With careful design it is possible to use these components effectively and resiliently.

  8. Applied Digital Logic Exercises Using FPGAs

    NASA Astrophysics Data System (ADS)

    Wick, Kurt

    2017-09-01

    Applied Digital Logic Exercises Using FPGAs is appropriate for anyone interested in digital logic who needs to learn how to implement it through detailed exercises with state-of-the-art digital design tools and components. The book exposes readers to combinational and sequential digital logic concepts and implements them with hands-on exercises using the Verilog Hardware Description Language (HDL) and a Field Programmable Gate Array (FPGA) teaching board.

  9. An FPGA-based instrumentation platform for use at deep cryogenic temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conway Lamb, I. D.; Colless, J. I.; Hornibrook, J. M.

    2016-01-15

    We describe the operation of a cryogenic instrumentation platform incorporating commercially available field-programmable gate arrays (FPGAs). The functionality of the FPGAs at temperatures approaching 4 K enables signal routing, multiplexing, and complex digital signal processing in close proximity to cooled devices or detectors within the cryostat. The performance of the FPGAs in a cryogenic environment is evaluated, including clock speed, error rates, and power consumption. Although constructed for the purpose of controlling and reading out quantum computing devices with low latency, the instrument is generic enough to be of broad use in a range of cryogenic applications.

  10. A New Partial Reconfiguration-Based Fault-Injection System to Evaluate SEU Effects in SRAM-Based FPGAs

    NASA Astrophysics Data System (ADS)

    Sterpone, L.; Violante, M.

    2007-08-01

    Modern SRAM-based field programmable gate array (FPGA) devices offer high capability in implementing complex systems. Unfortunately, SRAM-based FPGAs are extremely sensitive to single event upsets (SEUs) induced by radiation particles. In order to successfully deploy safety- or mission-critical applications, designers need to validate the correctness of the obtained designs. In this paper we describe a system based on partial reconfiguration for running fault-injection experiments within the configuration memory of SRAM-based FPGAs. The proposed fault-injection system uses the internal configuration capabilities that modern FPGAs offer in order to inject SEUs within the configuration memory. Detailed experimental results show that the technique is orders of magnitude faster than previously proposed ones.

  11. A Primer for Telemetry Interfacing in Accordance with NASA Standards Using Low Cost FPGAs

    NASA Astrophysics Data System (ADS)

    McCoy, Jake; Schultz, Ted; Tutt, James; Rogers, Thomas; Miles, Drew; McEntaffer, Randall

    2016-03-01

    Photon counting detector systems on sounding rocket payloads often require interfacing asynchronous outputs with a synchronously clocked telemetry (TM) stream. Though this can be handled with an on-board computer, there are several low cost alternatives including custom hardware, microcontrollers and field-programmable gate arrays (FPGAs). This paper outlines how a TM interface (TMIF) for detectors on a sounding rocket with asynchronous parallel digital output can be implemented using low cost FPGAs and minimal custom hardware. Low power consumption and high speed FPGAs are available as commercial off-the-shelf (COTS) products and can be used to develop the main component of the TMIF. Then, only a small amount of additional hardware is required for signal buffering and level translating. This paper also discusses how this system can be tested with a simulated TM chain in the small laboratory setting using FPGAs and COTS specialized data acquisition products.
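    The core difficulty named here, moving asynchronous detector strobes into a synchronously clocked TM stream, is commonly handled inside the FPGA with a two-flip-flop synchronizer followed by an edge detector. The sketch below shows only that generic technique; it is not the TMIF design from the paper, and all names are illustrative.

    ```verilog
    // Generic clock-domain-crossing sketch: bring an asynchronous detector
    // strobe into the telemetry clock domain and emit one clean pulse per event.
    module async_event_sync (
        input  wire tm_clk,        // telemetry word clock
        input  wire rst_n,
        input  wire det_strobe,    // asynchronous photon-event strobe
        output reg  event_seen     // one tm_clk-wide pulse per detected strobe edge
    );
        reg s0, s1, s2;
        always @(posedge tm_clk) begin
            if (!rst_n) begin
                {s0, s1, s2} <= 3'b000;
                event_seen   <= 1'b0;
            end else begin
                s0 <= det_strobe;          // first stage: may go metastable
                s1 <= s0;                  // second stage: resolves metastability
                s2 <= s1;
                event_seen <= s1 & ~s2;    // rising-edge detect in the TM clock domain
            end
        end
    endmodule
    ```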

  12. Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Larchev, Gregory V.; Lohn, Jason D.

    2006-01-01

    The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.
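    For reference, the TMR baseline mentioned in the abstract reduces, per output bit, to a majority vote across three copies of the protected logic. The module below is a textbook bitwise voter, included only as a hedged illustration of that baseline, not as part of the evolutionary-repair method.

    ```verilog
    // Bitwise triple-modular-redundancy (TMR) majority voter: the output takes
    // whichever value at least two of the three redundant copies agree on.
    module tmr_voter #(
        parameter integer W = 8
    ) (
        input  wire [W-1:0] a, b, c,   // outputs of three redundant module copies
        output wire [W-1:0] y          // majority-voted result
    );
        assign y = (a & b) | (a & c) | (b & c);
    endmodule
    ```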

  13. A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing

    PubMed Central

    García, Gabriel J.; Jara, Carlos A.; Pomares, Jorge; Alabdo, Aiman; Poggi, Lucas M.; Torres, Fernando

    2014-01-01

    The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution, while at the same time decreasing the size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower power consumption sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, describing as well the FPGA technologies employed by the different research groups and providing an overview of future research within this field. PMID:24691100

  14. A survey on FPGA-based sensor systems: towards intelligent and reconfigurable low-power sensors for computer vision, control and signal processing.

    PubMed

    García, Gabriel J; Jara, Carlos A; Pomares, Jorge; Alabdo, Aiman; Poggi, Lucas M; Torres, Fernando

    2014-03-31

    The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution, while at the same time decreasing the size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower power consumption sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, describing as well the FPGA technologies employed by the different research groups and providing an overview of future research within this field.

  15. Semantically Aware Foundation Environment (SAFE) for Clean-Slate Design of Resilient, Adaptive Secure Hosts (CRASH)

    DTIC Science & Technology

    2016-02-01

    system consists of a high-fidelity hardware simulation using field programmable gate arrays (FPGAs), with a set of runtime services (ConcreteWare...perimeter protection, patch, and pray” is not aligned with the threat. Programmers will not bail us out of this situation (by writing defect free code...hosted on a Field Programmable Gate Array (FPGA), with a set of runtime services (concreteware) running on the hardware. Secure applications can be

  16. Single Event Analysis and Fault Injection Techniques Targeting Complex Designs Implemented in Xilinx-Virtex Family Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth; Kim, Hak

    2014-01-01

    This presentation provides an informative session on SRAM FPGA basics. It presents a framework for fault injection techniques applied to Xilinx Field Programmable Gate Arrays (FPGAs), introduces an overlooked time component showing that fault injection is impractical for most real designs as a stand-alone characterization tool, and demonstrates procedures that benefit from fault-injection error analysis.

  17. A SEU-Hard Flip-Flop for Antifuse FPGAs

    NASA Technical Reports Server (NTRS)

    Katz, R.; Wang, J. J.; McCollum, J.; Cronquist, B.; Chan, R.; Yu, D.; Kleyner, I.; Day, John H. (Technical Monitor)

    2001-01-01

    A single event upset (SEU)-hardened flip-flop has been designed and developed for antifuse Field Programmable Gate Array (FPGA) application. Design and application issues, testability, test methods, simulation, and results are discussed.

  18. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Field Programmable Gate Array (FPGA) integrated circuits (ICs) are one of the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and a short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.

  19. Digital intermediate frequency QAM modulator using parallel processing

    DOEpatents

    Pao, Hsueh-Yuan [Livermore, CA]; Tran, Binh-Nien [San Ramon, CA]

    2008-05-27

    The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple and low cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up-tables (LUTs). The high-speed input data stream is parallel processed using the corresponding LUTs, which reduces the main processing speed, allowing the use of low cost FPGAs.
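    The multiplier-free idea described above can be sketched in a few lines of Verilog: the modulated carrier samples are pre-computed and stored in a ROM that is indexed by the current symbol and the carrier phase, so a table lookup replaces the multiply. This is a hedged illustration only; the table sizes, phase-accumulator width, and hex file name are assumptions rather than details from the patent. The parallel-processing aspect would replicate this lookup across several input sample lanes.

    ```verilog
    // Sketch of a LUT-based IF modulator stage (assumed parameters, placeholder
    // table contents). The ROM holds pre-computed modulated carrier samples, so
    // no multipliers are needed in the data path.
    module qam_if_lut #(
        parameter integer SYM_BITS   = 4,    // e.g. 16-QAM symbol index (assumption)
        parameter integer PHASE_BITS = 6,    // carrier samples per period (assumption)
        parameter integer OUT_BITS   = 12
    ) (
        input  wire                clk,
        input  wire [SYM_BITS-1:0] symbol,     // incoming data symbol
        output reg  [OUT_BITS-1:0] if_sample   // modulated IF output sample
    );
        reg [PHASE_BITS-1:0] phase = {PHASE_BITS{1'b0}};   // free-running carrier phase
        reg [OUT_BITS-1:0]   rom [0:(1 << (SYM_BITS + PHASE_BITS)) - 1];

        initial $readmemh("qam_lut.hex", rom);   // pre-computed cos/sin products (placeholder file)

        always @(posedge clk) begin
            phase     <= phase + 1'b1;           // step through one carrier period
            if_sample <= rom[{symbol, phase}];   // table lookup replaces the multiplier
        end
    endmodule
    ```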

  20. Flexible Architecture for FPGAs in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Clark, Duane I.; Lim, Chester N.

    2012-01-01

    Commonly, field-programmable gate arrays (FPGAs) being developed in cPCI embedded systems include the bus interface in the FPGA. This complicates the development because the interface is complicated and requires a lot of development time and FPGA resources. In addition, flight qualification requires a substantial amount of time be devoted to just this interface. Another complication of putting the cPCI interface into the FPGA being developed is that configuration information loaded into the device by the cPCI microprocessor is lost when a new bit file is loaded, requiring cumbersome operations to return the system to an operational state. Finally, SRAM-based FPGAs are typically programmed via specialized cables and software, with programming files being loaded either directly into the FPGA, or into PROM devices. This can be cumbersome when doing FPGA development in an embedded environment, and does not have an easy path to flight. Currently, FPGAs used in space applications are usually programmed via multiple space-qualified PROM devices that are physically large and require extra circuitry (typically including a separate one-time programmable FPGA) to enable them to be used for this application. This technology adds a cPCI interface device with a simple, flexible, high-performance backend interface supporting multiple backend FPGAs. It includes a mechanism for programming the FPGAs directly via the microprocessor in the embedded system, eliminating specialized hardware, software, and PROM devices and their associated circuitry. It has a direct path to flight, and no extra hardware and minimal software are required to support reprogramming in flight. The device added is currently a small FPGA, but an advantage of this technology is that the design of the device does not change, regardless of the application in which it is being used. This means that it needs to be qualified for flight only once, and is suitable for one-time programmable devices or an application specific integrated circuit (ASIC). An application programming interface (API) further reduces the development time needed to use the interface device in a system.

  1. Validation techniques for fault emulation of SRAM-based FPGAs

    DOE PAGES

    Quinn, Heather; Wirthlin, Michael

    2015-08-07

    A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness of mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation system does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of the behavior of the FPGA in a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.

  2. Start Up Application Concerns with Field Programmable Gate Arrays (FPGAs)

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.

    1999-01-01

    This note is being published to improve the visibility of this subject, as we continue to see problems surface in designs, as well as to add information to the previously published note for design engineers. The original application note focused on designing systems with no single-point failures using Actel Field Programmable Gate Arrays (FPGAs) for critical applications. Included in that note were the basic principles of operation of the Actel FPGA and a discussion of potential single-point failures. The note also discussed the issue of startup transients for that class of device. It is unfortunate that we continue to see some design problems using these devices. This note will focus on the startup properties of certain electronic components in general, and current Actel FPGAs in particular. Devices that are "power-on friendly" are currently being developed by Actel as a variant of the new SX series of FPGAs. In the ideal world, electronic components would behave much differently than they do in the real world. The chain, of course, starts with the power supply. Ideally, the voltage would immediately rise to a stable V(sub cc) level; of course, it does not. Aside from practical design considerations, inrush current limits of certain capacitors must be observed, and the power supply's output may be intentionally slew-rate limited to prevent a large current spike on the system power bus. In any event, power supply rise time may range from less than 1 msec to 100 msec or more.

  3. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in a Field Programmable Gate Array (FPGA) as opposed to using a software implementation running on a typical general purpose processor.
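    For context on why a hardware natural logarithm is the key kernel here: under the usual multiplicative speckle model, taking the logarithm turns the problem into an additive one that a linear filter can handle. The identity below is the standard homomorphic-filtering step, stated generically rather than taken from the paper:

    \[ x(n) = s(n)\,\eta(n) \;\Longrightarrow\; \ln x(n) = \ln s(n) + \ln \eta(n), \]

    so a linear (for example, low-pass) filter applied to \(\ln x(n)\) attenuates the now-additive noise term, and exponentiating the filtered result recovers an estimate of \(s(n)\).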

  4. Spaceborne Hybrid-FPGA System for Processing FTIR Data

    NASA Technical Reports Server (NTRS)

    Bekker, Dmitriy; Blavier, Jean-Francois L.; Pingree, Paula J.; Lukowiak, Marcin; Shaaban, Muhammad

    2008-01-01

    Progress has been made in a continuing effort to develop a spaceborne computer system for processing readout data from a Fourier-transform infrared (FTIR) spectrometer to reduce the volume of data transmitted to Earth. The approach followed in this effort, oriented toward reducing design time and reducing the size and weight of the spectrometer electronics, has been to exploit the versatility of recently developed hybrid field-programmable gate arrays (FPGAs) to run diverse software on embedded processors while also taking advantage of the reconfigurable hardware resources of the FPGAs.

  5. Mitigating Upsets in SRAM-Based FPGAs from the Xilinx Virtex 2 Family

    NASA Technical Reports Server (NTRS)

    Swift, G. M.; Yui, C. C.; Carmichael, C.; Koga, R.; George, J. S.

    2003-01-01

    Static random access memory (SRAM) upset rates in field programmable gate arrays (FPGAs) from the Xilinx Virtex 2 family have been tested for radiation effects on configuration memory, block RAM and the power-on-reset (POR) and SelectMAP single event functional interrupts (SEFIs). Dynamic testing has shown the effectiveness and value of Triple Modular Redundancy (TMR) and partial reconfiguration when used in conjunction. Continuing dynamic testing for more complex designs and other Virtex 2 capabilities (i.e., I/O standards, digital clock managers (DCM), etc.) is scheduled.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dondero, Rachel Elizabeth

    The increased use of Field Programmable Gate Arrays (FPGAs) in critical systems brings new challenges in securing the diversely programmable fabric from cyber-attacks. FPGAs are an inexpensive, efficient, and flexible alternative to Application Specific Integrated Circuits (ASICs), which are becoming increasingly expensive and impractical for low volume manufacturing as technology nodes continue to shrink. Unfortunately, FPGAs are not designed for high security applications, and their high flexibility lends itself to low security and vulnerability to malicious attacks. Similar to securing an ASIC's functionality, FPGA programmers can exploit the inherent randomness introduced into hardware structures during fabrication for security applications. Physically Unclonable Functions (PUFs) are one such solution that uses the die-specific variability in hardware fabrication for both secret key generation and verification. PUFs strive to be random, unique, and reliable. Throughout recent years many PUF structures have been presented to try and maximize these three design constraints, reliability being the most difficult of the three to achieve. This thesis presents a new PUF structure that combines two elementary PUF concepts (a bi-stable SRAM PUF and a delay-based arbiter PUF) to create a PUF with increased reliability, while maintaining both random and unique qualities. Properties of the new PUF will be discussed as well as the various design modifications that can be made to tweak the desired performance and overhead.

  7. Mitigating Upsets in SRAM Based FPGAs from the Xilinx Virtex 2 Family

    NASA Technical Reports Server (NTRS)

    Swift, Gary M.; Yui, Candice C.; Carmichael, Carl; Koga, Rocky; George, Jeffrey S.

    2003-01-01

    This slide presentation reviews the single event upset static testing of the Virtex II field programmable gate arrays (FPGAs) that were tested with protons and heavy ions. The test designs and static and dynamic test results are reviewed.

  8. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upsets (MBUs) are also discussed.
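    The abstract refers to a mathematical model for TMR system error rates without reproducing it here. For orientation only, and not necessarily the model used in the paper, the textbook first-order result for a single TMR domain whose three copies are upset independently with probability \(p\) per exposure interval is

    \[ P_{\mathrm{fail}} = 3p^{2}(1-p) + p^{3} \approx 3p^{2}, \qquad p \ll 1, \]

    which also makes clear why multiple bit upsets that defeat the independence assumption, as discussed above, are an important part of the analysis.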

  9. The IMPACT Common Module - A Low Cost, Reconfigurable Building Block for Next Generation Phased Arrays

    DTIC Science & Technology

    2016-03-31

    The SiGe receiver has two stages of programmable RF filtering and one stage of IF filtering. Each filter can be tuned in center frequency and...transmit, with an IF to RF upconversion chain that is split to programmable phase shifters and VGAs at each output port...These are optimized to run on medium grade Field Programmable Gate Arrays (FPGAs), such as the Altera Arria 10, and represent a few of the many

  10. A Frequency Agile, Self-Adaptive Serial Link on Xilinx FPGAs

    NASA Astrophysics Data System (ADS)

    Aloisio, A.; Giordano, R.; Izzo, V.; Perrella, S.

    2015-06-01

    In this paper, we focused on the GTX transceiver modules of Xilinx Kintex 7 field-programmable gate arrays (FPGAs), which provide high bandwidth, low jitter on the recovered clock, and an equalization system on the transmitter and the receiver. We present a frequency agile, auto-adaptive serial link. The link is able to take care of the reconfiguration of the GTX parameters in order to fully benefit from the available link bandwidth, by setting the highest line rate. It is designed around an FPGA-embedded microprocessor, which drives the programmable ports of the GTX in order to control the quality of the received data and to easily calculate the bit-error rate in each sampling point of the eye diagram. We present the self-adaptive link project, the description of the test system, and the main results.

  11. Magnetics and Power System Upgrades for the Pegasus-U Experiment

    NASA Astrophysics Data System (ADS)

    Preston, R. C.; Bongard, M. W.; Fonck, R. J.; Lewicki, B. T.

    2014-10-01

    To support the missions of developing local helicity injection startup and exploiting advanced tokamak physics studies at near unity aspect ratio, the proposed Pegasus-U will include expanded magnetic systems and associated power supplies. A new centerstack increases the toroidal field seven times to 1 T and the volt-seconds by a factor of six while maintaining operation at an aspect ratio of 1.2. The poloidal field magnet system is expanded to support improved shape control and robust double or single null divertor operation at the full plasma current of 0.3 MA. An integrated digital control system based on Field Programmable Gate Arrays (FPGAs) provides active feedback control of all magnet currents. Implementation of the FPGAs is achieved with modular noise-reducing electronics. The digital feedback controllers replace the existing analog systems and switch multiplexing technology. This will reduce noise sensitivity and allow the operational Ohmic power supply voltage to increase from 2100 V to its maximum capacity of 2400 V. The feedback controller replacement also allows frequency control for "freewheeling": stopping the switching for a short interval and allowing the current to coast. The FPGAs assist in optimizing pulse length by having programmable switching events to minimize energy losses. They also allow for more efficient switching topologies that provide improved stored energy utilization, and support increasing the pulse length from 25 ms to 50-100 ms. Work supported by US DOE Grant DE-FG02-96ER54375.
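    As a sketch of what "active feedback control of all magnet currents" can look like inside an FPGA at the register-transfer level, the module below implements a simple discrete PI regulator. It is a generic, hedged example; the gains, word widths, and saturation scheme are assumptions, not details of the Pegasus-U controllers.

    ```verilog
    // Generic discrete PI current regulator sketch (gains, widths, and scaling
    // are assumptions). One such loop per magnet current is the kind of function
    // an FPGA-based feedback controller performs.
    module pi_controller #(
        parameter signed [15:0] KP = 16'sd32,   // proportional gain (placeholder)
        parameter signed [15:0] KI = 16'sd1     // integral gain (placeholder)
    ) (
        input  wire               clk,
        input  wire               rst_n,
        input  wire signed [15:0] setpoint,     // commanded coil current (scaled counts)
        input  wire signed [15:0] measured,     // digitized coil current (scaled counts)
        output reg  signed [15:0] drive         // command to the power supply / switches
    );
        reg  signed [31:0] integ;                       // integral accumulator
        wire signed [15:0] err = setpoint - measured;   // loop error
        wire signed [31:0] out = KP * err + integ;      // PI sum before saturation

        always @(posedge clk) begin
            if (!rst_n) begin
                integ <= 32'sd0;
                drive <= 16'sd0;
            end else begin
                integ <= integ + KI * err;              // accumulate the integral term
                if (out > 32'sd32767)
                    drive <= 16'sh7FFF;                 // saturate high
                else if (out < -32'sd32768)
                    drive <= 16'sh8000;                 // saturate low
                else
                    drive <= out[15:0];
            end
        end
    endmodule
    ```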

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Wirthlin, Michael

    A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies for verifying the effectiveness of mitigation techniques; understanding error signatures and failure modes in FPGAs; and failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation system does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of the behavior of the FPGA in a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.

  13. Implementation of a Configurable Fault Tolerant Processor (CFTP) Using Internal Triple Modular Redundancy (TMR)

    DTIC Science & Technology

    2005-12-01

    Upsets in SRAM FPGAs,” Military and Aerospace Applications of Programmable Logic Devices, September 2002. 8. Wakerly, John F., “Microcomputer...change. The goal of the Configurable Fault Tolerant Processor (CFTP) Project is to explore, develop and demonstrate the applicability of using off-the...develop and demonstrate the applicability of using commercial-off-the-shelf (COTS) Field Programmable Gate Arrays (FPGA) in the design of

  14. Initial Single Event Effects Testing of the Xilinx Virtex-4 Field Programmable Gate Array

    NASA Technical Reports Server (NTRS)

    Allen, Gregory R.; Swift, Gary M.; Carmichael, C.; Tseng, C.

    2007-01-01

    We present initial results for the thin epitaxial Xilinx Virtex-4 Field Programmable Gate Array (FPGA), and compare to previous results obtained for the Virtex-II and Virtex-II Pro. The data presented was acquired through a consortium based effort with the common goal of providing the space community with data and mitigation methods for the use of Xilinx FPGAs in space.

  15. Remotely Powered Reconfigurable Receiver for Extreme Environment Sensing Platforms

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.

    2012-01-01

    Wireless sensors connected in a local network offer revolutionary exploration capabilities, but the current solutions do not work in extreme environments of low temperatures (200 K) and low to moderate radiation levels (<50 krad). These sensors (temperature, radiation, infrared, etc.) would need to operate outside the spacecraft/lander and be totally independent of power from the spacecraft/lander. Flash memory field-programmable gate arrays (FPGAs) are being used as the main signal processing and protocol generation platform in a new receiver. Flash-based FPGAs have been shown to have at least 100x lower standby power and 10x lower operating power when compared to normal SRAM-based FPGA technology.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  17. The electronics system for the LBNL positron emission mammography (PEM) camera

    NASA Astrophysics Data System (ADS)

    Moses, W. W.; Young, J. W.; Baker, K.; Jones, W.; Lenox, M.; Ho, M. H.; Weng, M.

    2001-06-01

    Describes the electronics for a high-performance positron emission mammography (PEM) camera. It is based on the electronics for a human brain positron emission tomography (PET) camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An application-specific integrated circuit (ASIC) services the photodetector (PD) array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication and real-time control, and configure the system. Extensive use of FPGAs makes the overall design extremely flexible, allowing many different functions (or design modifications) to be realized without hardware changes. Incorporation of extensive onboard diagnostics, implemented in the FPGAs, is required by the very high level of integration and density achieved by this system.

  18. Three Realizations and Comparison of Hardware for Piezoresistive Tactile Sensors

    PubMed Central

    Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Navas-González, Rafael

    2011-01-01

    Tactile sensors are basically arrays of force sensors that are intended to emulate the skin in applications such as assistive robotics. Local electronics are usually implemented to reduce errors and interference caused by long wires. Realizations based on standard microcontrollers, Programmable Systems on Chip (PSoCs) and Field Programmable Gate Arrays (FPGAs) have been proposed by the authors for the case of piezoresistive tactile sensors. The solution employing FPGAs is especially relevant since their performance is closer to that of Application Specific Integrated Circuits (ASICs) than that of the other devices. This paper presents an implementation of such an idea for a specific sensor. For the purpose of comparison, the circuitry based on the other devices is also made for the same sensor. This paper discusses the implementation issues, provides details regarding the design of the hardware based on the three devices and compares them. PMID:22163797

  19. Commercial Parts Radiation Testing

    DTIC Science & Technology

    2015-01-13

    New Mexico's COSMIAC Center performed radiation testing on a series of operational amplifiers, microcontrollers and microprocessors. The...commercial microcontroller and microprocessor equipment. The team would develop a list of the most promising commercial parts that might be utilized to...parts will include microprocessors, microcontrollers and memory modules. In addition, Field Programmable Gate Arrays (FPGAs) will also be chosen

  20. FPGAs in Space Environment and Design Techniques

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Day, John H. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.

  1. A self-timed multipurpose delay sensor for Field Programmable Gate Arrays (FPGAs).

    PubMed

    Osuna, Carlos Gómez; Ituero, Pablo; López-Vallejo, Marisa

    2013-12-20

    This paper presents a novel self-timed multi-purpose sensor especially conceived for Field Programmable Gate Arrays (FPGAs). The aim of the sensor is to measure performance variations during the life-cycle of the device, such as process variability, critical path timing and temperature variations. The proposed topology, through the use of both combinational and sequential FPGA elements, amplifies the time of a signal traversing a delay chain to produce a pulse whose width is the sensor's measurement. The sensor is fully self-timed, avoiding the need for clock distribution networks and eliminating the limitations imposed by the system clock. One single off- or on-chip time-to-digital converter is able to perform digitization of several sensors in a single operation. These features allow for a simplified approach for designers wanting to intertwine a multi-purpose sensor network with their application logic. Employed as a temperature sensor, it has been measured to have an error of  ±0.67 °C, over the range of 20-100 °C, employing 20 logic elements with a 2-point calibration.

  2. A Self-Timed Multipurpose Delay Sensor for Field Programmable Gate Arrays (FPGAs)

    PubMed Central

    Osuna, Carlos Gómez; Ituero, Pablo; López-Vallejo, Marisa

    2014-01-01

    This paper presents a novel self-timed multi-purpose sensor especially conceived for Field Programmable Gate Arrays (FPGAs). The aim of the sensor is to measure performance variations during the life-cycle of the device, such as process variability, critical path timing and temperature variations. The proposed topology, through the use of both combinational and sequential FPGA elements, amplifies the time of a signal traversing a delay chain to produce a pulse whose width is the sensor's measurement. The sensor is fully self-timed, avoiding the need for clock distribution networks and eliminating the limitations imposed by the system clock. One single off- or on-chip time-to-digital converter is able to perform digitization of several sensors in a single operation. These features allow for a simplified approach for designers wanting to intertwine a multi-purpose sensor network with their application logic. Employed as a temperature sensor, it has been measured to have an error of ±0.67 °C, over the range of 20–100 °C, employing 20 logic elements with a 2-point calibration. PMID:24361927
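    The pulse-generation idea described above can be illustrated with a behavioral, simulation-only Verilog fragment: an edge is launched into a delay chain, and the output pulse stays high while the edge is still propagating, so its width tracks the accumulated chain delay. The `#` delays below are placeholders that stand in for temperature- and process-dependent gate delay; they do not synthesize, and a real sensor maps the chain onto specific FPGA primitives as the paper describes.

    ```verilog
    // Simulation-only sketch of turning propagation delay into pulse width.
    `timescale 1ns/1ps
    module delay_pulse_sketch (
        input  wire launch,      // rising edge starts a measurement
        output wire pulse        // width tracks the chain delay (simulation only)
    );
        wire [4:0] tap;
        assign tap[0] = launch;
        genvar i;
        generate
            for (i = 1; i < 5; i = i + 1) begin : chain
                assign #2.0 tap[i] = tap[i-1];   // 2 ns per stage: placeholder delay
            end
        endgenerate
        assign pulse = tap[0] ^ tap[4];          // high while the edge is still in flight
    endmodule
    ```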

  3. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
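    The iris-matching kernel parallelized in this comparison is, at its core, a masked Hamming-distance computation between binary iris codes, which is exactly the kind of bit-level operation FPGAs handle well. The module below is a hedged sketch of that kernel for one code word; the widths and names are assumptions, not details of the evaluated systems.

    ```verilog
    // Sketch of a masked Hamming-distance unit for one word of a binary iris
    // code. Bits excluded by the occlusion mask do not contribute to the count.
    module hamming_distance #(
        parameter integer W = 64
    ) (
        input  wire         clk,
        input  wire [W-1:0] code_a,   // word of the probe iris code
        input  wire [W-1:0] code_b,   // word of the gallery iris code
        input  wire [W-1:0] mask,     // 1 = bit is valid (not occluded)
        output reg  [7:0]   dist      // number of differing, unmasked bits (W <= 255)
    );
        function [7:0] popcount;
            input [W-1:0] v;
            integer i;
            begin
                popcount = 8'd0;
                for (i = 0; i < W; i = i + 1)
                    popcount = popcount + v[i];
            end
        endfunction

        always @(posedge clk)
            dist <= popcount((code_a ^ code_b) & mask);
    endmodule
    ```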

  4. FPGA implementation of bit controller in double-tick architecture

    NASA Astrophysics Data System (ADS)

    Kobylecki, Michał; Kania, Dariusz

    2017-11-01

    This paper presents a comparison of two original architectures of programmable bit controllers built on FPGAs. Programmable Logic Controllers (which include, among other things, programmable bit controllers) built on FPGAs provide an efficient alternative to controllers based on microprocessors, which are expensive and often too slow. The presented and compared methods allow for the efficient implementation of any bit control algorithm written in the Ladder Diagram language into a programmable logic system in accordance with IEC 61131-3. In both cases, we have compared the effect of the applied architecture on the performance of executing the same bit control program in relation to its own size.
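    The mapping both architectures rely on can be seen in miniature below: a Ladder Diagram rung is a Boolean equation that the FPGA evaluates directly. The rung chosen here is the textbook start/stop seal-in circuit, used purely as an illustration of the IEC 61131-3 style of bit control, not as an example taken from the paper.

    ```verilog
    // Rung: (START OR MOTOR) AND STOP -> MOTOR, evaluated once per scan/clock.
    module ld_rung_seal_in (
        input  wire clk,
        input  wire start_pb,   // normally-open start push-button
        input  wire stop_pb,    // normally-closed stop contact (1 = not pressed)
        output reg  motor       // output coil
    );
        always @(posedge clk)
            motor <= (start_pb | motor) & stop_pb;
    endmodule
    ```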

  5. Method and infrastructure for cycle-reproducible simulation on large scale digital circuits on a coordinated set of field-programmable gate arrays (FPGAs)

    DOEpatents

    Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A

    2014-01-28

    A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.

  6. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using Field Programmable Gate Arrays (FPGAs).

  7. Rad-Hard Structured ASIC Body of Knowledge

    NASA Technical Reports Server (NTRS)

    Heidecker, Jason

    2013-01-01

    Structured Application-Specific Integrated Circuit (ASIC) technology is a platform between traditional ASICs and Field-Programmable Gate Arrays (FPGAs). The motivation behind structured ASICs is to combine the low nonrecurring engineering (NRE) costs of FPGAs with the high performance of ASICs. This report provides an overview of the structured ASIC platforms that are radiation-hardened and intended for space applications.

  8. Field Programmable Gate Array (FPGA) Radiation Data: All Data is Not Equal

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.; Berg, Melanie D.

    2016-01-01

    Electronic parts (integrated circuits) have grown in complexity such that determining all failure modes and risks based on single particle event radiation testing is impossible. In this presentation, the authors will present why this is so and provide some realism on what this means to FPGAs. It's all about understanding actual risks and not making assumptions.

  9. Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions

    Treesearch

    Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners

    1995-01-01

    In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...

  10. Leveraging FPGAs for Accelerating Short Read Alignment.

    PubMed

    Arram, James; Kaplan, Thomas; Luk, Wayne; Jiang, Peiyong

    2017-01-01

    One of the key challenges facing genomics today is how to efficiently analyze the massive amounts of data produced by next-generation sequencing platforms. With general-purpose computing systems struggling to address this challenge, specialized processors such as the Field-Programmable Gate Array (FPGA) are receiving growing interest. The means by which to leverage this technology for accelerating genomic data analysis is however largely unexplored. In this paper, we present a runtime reconfigurable architecture for accelerating short read alignment using FPGAs. This architecture exploits the reconfigurability of FPGAs to allow the development of fast yet flexible alignment designs. We apply this architecture to develop an alignment design which supports exact and approximate alignment with up to two mismatches. Our design is based on the FM-index, with optimizations to improve the alignment performance. In particular, the n-step FM-index, index oversampling, a seed-and-compare stage, and bi-directional backtracking are included. Our design is implemented and evaluated on a 1U Maxeler MPC-X2000 dataflow node with eight Altera Stratix-V FPGAs. Measurements show that our design is 28 times faster than Bowtie2 running with 16 threads on dual Intel Xeon E5-2640 CPUs, and nine times faster than Soap3-dp running on an NVIDIA Tesla C2070 GPU.

  11. G(sup 4)FET Implementations of Some Logic Circuits

    NASA Technical Reports Server (NTRS)

    Mojarradi, Mohammad; Akarvardar, Kerem; Cristoleveanu, Sorin; Gentil, Paul; Blalock, Benjamin; Chen, Suhan

    2009-01-01

    Some logic circuits have been built and demonstrated to work substantially as intended, all as part of a continuing effort to exploit the high degrees of design flexibility and functionality of the electronic devices known as G(sup 4)FETs and described below. These logic circuits are intended to serve as prototypes of more complex advanced programmable-logic-device-type integrated circuits, including field-programmable gate arrays (FPGAs). In comparison with prior FPGAs, these advanced FPGAs could be much more efficient because the functionality of G(sup 4)FETs is such that fewer discrete components are needed to perform a given logic function in G(sup 4)FET circuitry than are needed to perform the same logic function in conventional transistor-based circuitry. The underlying concept of using G(sup 4)FETs as building blocks of programmable logic circuitry was also described, from a different perspective, in G(sup 4)FETs as Universal and Programmable Logic Gates (NPO-41698), NASA Tech Briefs, Vol. 31, No. 7 (July 2007), page 44. A G(sup 4)FET can be characterized as an accumulation-mode silicon-on-insulator (SOI) metal oxide/semiconductor field-effect transistor (MOSFET) featuring two junction field-effect transistor (JFET) gates. The structure of a G(sup 4)FET (see Figure 1) is the same as that of a p-channel inversion-mode SOI MOSFET with two body contacts on each side of the channel. The top gate (G1), the substrate emulating a back gate (G2), and the junction gates (JG1 and JG2) can be biased independently of each other and, hence, each can be used to independently control some aspects of the conduction characteristics of the transistor. The independence of the actions of the four gates is what affords the enhanced functionality and design flexibility of G(sup 4)FETs. The present G(sup 4)FET logic circuits include an adjustable-threshold inverter, a real-time-reconfigurable logic gate, and a dynamic random-access memory (DRAM) cell (see Figure 2). The configuration of the adjustable-threshold inverter is similar to that of an ordinary complementary metal oxide semiconductor (CMOS) inverter except that an NMOSFET (a MOSFET having an n-doped channel and a p-doped Si substrate) is replaced by an n-channel G(sup 4)FET

  12. Determining the Best-Fit FPGA for a Space Mission: An Analysis of Cost, SEU Sensitivity,and Reliability

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Ken

    2007-01-01

    This viewgraph presentation reviews the selection of the optimum Field Programmable Gate Arrays (FPGA) for space missions. Included in this review is a discussion on differentiating amongst various FPGAs, cost analysis of the various options, the investigation of radiation effects, an expansion of the evaluation criteria, and the application of the evaluation criteria to the selection process.

  13. Field programmable gate array processing of eye-safe all-fiber coherent wind Doppler lidar return signals

    NASA Astrophysics Data System (ADS)

    Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.

    2011-11-01

    A field deployable all-fiber eye-safe Coherent Doppler LIDAR is being developed at the Optical Remote Sensing Lab at the City College of New York (CCNY) and is designed to monitor wind fields autonomously and continuously in urban settings. Data acquisition is accomplished by sampling lidar return signals at 400 MHz and performing onboard processing using field programmable gate arrays (FPGAs). The FPGA is programmed to accumulate signal information that is used to calculate the power spectrum of the atmospherically back-scattered signal. The advantage of using FPGAs is that signal processing is performed at the hardware level, reducing the load on the host computer and allowing for 100% return signal processing. An experimental setup measured wind speeds at ranges of up to 3 km.

  14. Radiation Effects on Current Field Programmable Technologies

    NASA Technical Reports Server (NTRS)

    Katz, R.; LaBel, K.; Wang, J. J.; Cronquist, B.; Koga, R.; Penzin, S.; Swift, G.

    1997-01-01

    Manufacturers of field programmable gate arrays (FPGAs) take different technological and architectural approaches that directly affect radiation performance. Similarly, technological and architectural features are used in related technologies such as programmable substrates and quick-turn application specific integrated circuits (ASICs). After analyzing current technologies and architectures and their radiation-effects implications, this paper includes extensive test data quantifying various devices' total dose and single event susceptibilities, including performance degradation effects and temporary or permanent re-configuration faults. Test results will concentrate on recent technologies being used in space flight electronic systems and those being developed for use in the near term. This paper will provide the first extensive study of various configuration memories used in programmable devices. Radiation performance limits and their impacts will be discussed for each design. In addition, the interplay between device scaling, process, bias voltage, design, and architecture will be explored. Lastly, areas of ongoing research will be discussed.

  15. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time/resource consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
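    A much-simplified sketch of the table-lookup idea is shown below: the inverse square-root cube needed in a Coulomb-force kernel is read from a ROM rather than computed arithmetically. The paper's core additionally normalizes the address to the nine to ten most significant non-zero bits and rescales the result, which is omitted here for brevity; the file name and word widths are placeholders, not values from the design.

    ```verilog
    // Simplified LUT sketch: approximate (r^2)^(-3/2) by a ROM lookup on the top
    // address bits of r^2 (the real core normalizes to the leading non-zero bits).
    module inv_sqrt_cube_lut #(
        parameter integer IN_BITS   = 16,
        parameter integer ADDR_BITS = 10,
        parameter integer OUT_BITS  = 16
    ) (
        input  wire                clk,
        input  wire [IN_BITS-1:0]  r_squared,
        output reg  [OUT_BITS-1:0] inv_r_cubed
    );
        reg [OUT_BITS-1:0] rom [0:(1 << ADDR_BITS) - 1];
        initial $readmemh("inv_sqrt_cube.hex", rom);   // precomputed x^(-3/2) table (placeholder file)

        always @(posedge clk)
            inv_r_cubed <= rom[r_squared[IN_BITS-1 -: ADDR_BITS]];   // top ADDR_BITS select the entry
    endmodule
    ```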

  16. Novel processor architecture for onboard infrared sensors

    NASA Astrophysics Data System (ADS)

    Hihara, Hiroki; Iwasaki, Akira; Tamagawa, Nobuo; Kuribayashi, Mitsunobu; Hashimoto, Masanori; Mitsuyama, Yukio; Ochi, Hiroyuki; Onodera, Hidetoshi; Kanbara, Hiroyuki; Wakabayashi, Kazutoshi; Tada, Munehiro

    2016-09-01

    Infrared sensor systems are a major concern for inter-planetary missions that investigate the nature and the formation processes of planets and asteroids. The infrared sensor system requires signal preprocessing functions that compensate for the intensity of infrared image sensors to obtain high-quality data and a high compression ratio through the limited capacity of transmission channels towards ground stations. For those implementations, combinations of Field Programmable Gate Arrays (FPGAs) and microprocessors are employed by AKATSUKI, the Venus Climate Orbiter, and HAYABUSA2, the asteroid probe. On the other hand, much smaller size and lower power consumption are demanded for future missions to accommodate more sensors. To fulfill this future demand, we developed a novel processor architecture which consists of reconfigurable cluster cores and programmable-logic cells with complementary atom switches. The complementary atom switches enable hardware programming without configuration memories, so soft errors on logic circuit connections are completely eliminated. This is a noteworthy advantage for space applications which cannot be found in conventional re-writable FPGAs. Power consumption is expected to be almost one-tenth that of conventional re-writable FPGAs because of the elimination of configuration memories. The proposed processor architecture can be reconfigured by behavioral synthesis from higher-level language specifications. Consequently, compensation functions are implemented in a single chip, without the program memories that accompany conventional microprocessors, while maintaining comparable performance. This enables us to embed a processor element on each infrared signal detector output channel.

  17. Radiation-hardened optically reconfigurable gate array exploiting holographic memory characteristics

    NASA Astrophysics Data System (ADS)

    Seto, Daisaku; Watanabe, Minoru

    2015-09-01

    In this paper, we present a proposal for a radiation-hardened optically reconfigurable gate array (ORGA). The ORGA is a type of field programmable gate array (FPGA). The ORGA configuration can be executed by the exploitation of holographic memory characteristics even if 20% of the configuration data are damaged. Moreover, the optoelectronic technology enables the high-speed reconfiguration of the programmable gate array. Such a high-speed reconfiguration can increase the radiation tolerance of its programmable gate array to 9.3 × 10^4 times higher than that of current FPGAs. Through experimentation, this study clarified the configuration dependability using the impulse-noise emulation and high-speed configuration capabilities of the ORGA with corrupt configuration contexts. Moreover, the radiation tolerance of the programmable gate array was confirmed theoretically through probabilistic calculation.

  18. Analyzing System on A Chip Single Event Upset Responses using Single Event Upset Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field programmable gate array devices (FPGAs).

  19. A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro

    2006-04-01

    A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in the 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multi-phase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. This chip operates at 33 MHz in 8-LUT cascades at 122 mW. Benchmark results show that it achieves comparable performance to field programmable gate arrays (FPGAs).
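
    The cascade structure can be illustrated with a minimal software model: each stage is a small memory addressed by the previous stage's rail outputs concatenated with a fresh group of primary inputs. The rail width, inputs per stage, and random stage contents below are illustrative assumptions, not the parameters of the fabricated chip.

      # Minimal behavioral model of a LUT cascade: each stage's memory is
      # addressed by the previous rails plus a fresh group of primary inputs.
      import random

      RAILS = 4            # bits passed between stages (assumed)
      INS_PER_STAGE = 3    # primary inputs consumed per stage (assumed)

      def make_stage():
          size = 1 << (RAILS + INS_PER_STAGE)            # one small RAM per stage
          return [random.randrange(1 << RAILS) for _ in range(size)]

      def eval_cascade(stages, input_groups):
          rails = 0
          for stage, group in zip(stages, input_groups):
              addr = (rails << INS_PER_STAGE) | group    # concatenate rails, inputs
              rails = stage[addr]
          return rails

      random.seed(1)
      cascade = [make_stage() for _ in range(8)]         # an 8-LUT cascade
      x = [0b101, 0b010, 0b111, 0b000, 0b011, 0b110, 0b001, 0b100]
      print(eval_cascade(cascade, x))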

  20. FPGA-Based Front-End Electronics for Positron Emission Tomography

    PubMed Central

    Haselman, Michael; DeWitt, Don; McDougald, Wendy; Lewellen, Thomas K.; Miyaoka, Robert; Hauck, Scott

    2010-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100 MHz. This, combined with FPGAs' low expense, ease of use, and selected dedicated hardware, makes them an ideal technology for a data acquisition system for positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We also show that the position of events in the scanner can be determined in real time using a statistical positioning based algorithm. PMID:21961085
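
    As a rough illustration of sub-clock-rate pulse timing, the following Python sketch interpolates the leading edge of a sampled pulse between ADC samples to obtain an arrival time finer than the sampling period. This is a generic interpolation example, not necessarily the authors' exact FPGA algorithm, and the sample rate and pulse shape are assumed.

      # Minimal sketch of sub-clock-rate timing: interpolate the leading edge of
      # a sampled pulse between ADC samples. Sample rate and shape are assumed.
      import numpy as np

      F_ADC = 65e6                      # ADC sample rate (assumed)
      THRESH = 0.5                      # timing threshold as a fraction of amplitude

      def sub_clock_time(samples):
          """Threshold-crossing time in seconds; assumes the record starts below
          threshold so the first crossing is on a rising edge."""
          level = THRESH * samples.max()
          i = int(np.argmax(samples >= level))           # first sample above level
          frac = (level - samples[i - 1]) / (samples[i] - samples[i - 1])
          return (i - 1 + frac) / F_ADC

      def make_pulse(t0, n=64, rise=2.0, decay=10.0):
          t = np.arange(n, dtype=float)
          s = (1 - np.exp(-(t - t0) / rise)) * np.exp(-(t - t0) / decay)
          return np.where(t >= t0, s, 0.0)

      t_a = sub_clock_time(make_pulse(3.70))
      t_b = sub_clock_time(make_pulse(3.95))
      print("recovered shift:", (t_b - t_a) * F_ADC, "samples (true shift: 0.25)")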

  1. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    PubMed

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using the DE2-70 board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
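
    The adaptive line enhancer can be summarized in a few lines of software: a delayed copy of the input drives an FIR filter whose coefficients adapt by the LMS rule, so the filter output tracks the correlated (heart-sound-like) component and the error signal retains the broadband (breath-sound-like) remainder. The Python sketch below uses illustrative filter length, delay, and step size, not the authors' design values.

      # Minimal sketch of an adaptive line enhancer (ALE) with an LMS update.
      # Filter length, delay, and step size are illustrative choices.
      import numpy as np

      def ale_lms(x, n_taps=32, delay=16, mu=0.01):
          """Split x into a correlated (periodic) estimate and a broadband residual."""
          w = np.zeros(n_taps)
          enhanced = np.zeros_like(x)
          residual = np.zeros_like(x)
          for n in range(n_taps + delay, len(x)):
              ref = x[n - delay - n_taps + 1 : n - delay + 1][::-1]  # delayed taps
              y = w @ ref                    # prediction of the periodic component
              e = x[n] - y                   # broadband (breath-sound-like) residual
              w += 2 * mu * e * ref          # LMS coefficient update
              enhanced[n], residual[n] = y, e
          return enhanced, residual

      # Example: a 1.2 Hz "heart" tone plus white noise standing in for breath sounds
      fs = 200.0
      t = np.arange(0, 10, 1 / fs)
      x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
      heart_est, breath_est = ale_lms(x)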

  2. Aquarius Digital Processing Unit

    NASA Technical Reports Server (NTRS)

    Forgione, Joshua; Winkert, George; Dobson, Norman

    2009-01-01

    Three documents provide information on a digital processing unit (DPU) for the planned Aquarius mission, in which a radiometer aboard a spacecraft orbiting Earth is to measure radiometric temperatures from which data on sea-surface salinity are to be deduced. The DPU is the interface between the radiometer and an instrument-command-and-data system aboard the spacecraft. The DPU cycles the radiometer through a programmable sequence of states, collects and processes all radiometric data, and collects all housekeeping data pertaining to operation of the radiometer. The documents summarize the DPU design, with emphasis on innovative aspects that include mainly the following: a) In the radiometer and the DPU, conversion from analog voltages to digital data is effected by means of asynchronous voltage-to-frequency converters in combination with a frequency-measurement scheme implemented in field-programmable gate arrays (FPGAs). b) A scheme to compensate for aging and changes in the temperature of the DPU in order to provide an overall temperature-measurement accuracy within 0.01 K includes a high-precision, inexpensive DC temperature measurement scheme and a drift-compensation scheme that was used on the Cassini radar system. c) An interface among multiple FPGAs in the DPU guarantees setup and hold times.

  3. Evaluating De-centralised and Distributional Options for the Distributed Electronic Warfare Situation Awareness and Response Test Bed

    DTIC Science & Technology

    2013-12-01

    The available extract describes sensors and effectors, deployed on ground-based or aerial platforms, used to detect, identify, locate, track, or suppress stationary or slow-moving surface-based RF-emitting targets. Abbreviations listed include ES (Electronic Support), EO (Electro-Optic), FPGAs (Field Programmable Gate Arrays), IR (Infra-red), and LADAR (Laser Detection and Ranging).

  4. Field Programmable Gate Array Failure Rate Estimation Guidelines for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Today's launch vehicles' complex electronic and avionics systems heavily utilize Field Programmable Gate Array (FPGA) integrated circuits (ICs) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid valve actuations. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.

  5. Real-time field programmable gate array architecture for computer vision

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2001-01-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
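
    The memory-access-minimizing organization can be illustrated in software: only a few image rows are kept in line buffers, each image pixel is read from external memory once, and the mask is applied to the buffered window. The following Python sketch (correlation form; flip the mask for true convolution) uses illustrative sizes and is a behavioral model, not the FPGA implementation itself.

      # Behavioral sketch of line-buffered mask/image convolution (correlation
      # form): each image row is read from memory once and reused from buffers.
      import numpy as np

      def convolve_line_buffered(image, mask):
          k = mask.shape[0]                      # square, odd-sized mask assumed
          h, w = image.shape
          out = np.zeros((h - k + 1, w - k + 1))
          window = np.zeros((k, w))              # the only rows kept "on chip"
          for r in range(h):                     # a single pass over image memory
              window = np.roll(window, -1, axis=0)
              window[-1] = image[r]              # load the new row exactly once
              if r >= k - 1:
                  for c in range(w - k + 1):
                      out[r - k + 1, c] = np.sum(window[:, c:c + k] * mask)
          return out

      img = np.arange(36, dtype=float).reshape(6, 6)
      mask = np.ones((3, 3)) / 9.0               # simple smoothing mask
      ref = np.array([[np.sum(img[i:i + 3, j:j + 3] * mask) for j in range(4)]
                      for i in range(4)])
      assert np.allclose(convolve_line_buffered(img, mask), ref)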

  6. Programmable diagnostic devices made from paper and tape.

    PubMed

    Martinez, Andres W; Phillips, Scott T; Nie, Zhihong; Cheng, Chao-Min; Carrilho, Emanuel; Wiley, Benjamin J; Whitesides, George M

    2010-10-07

    This paper describes three-dimensional microfluidic paper-based analytical devices (3-D microPADs) that can be programmed (postfabrication) by the user to generate multiple patterns of flow through them. These devices are programmed by pressing single-use 'on' buttons, using a stylus or a ballpoint pen. Pressing a button closes a small space (gap) between two vertically aligned microfluidic channels, and allows fluids to wick from one channel to the other. These devices are simple to fabricate, and are made entirely out of paper and double-sided adhesive tape. Programmable devices expand the capabilities of microPADs and provide a simple method for controlling the movement of fluids in paper-based channels. They are the conceptual equivalent of field-programmable gate arrays (FPGAs) widely used in electronics.

  7. Optimization of a Fast Neutron Scintillator for Real-Time Pulse Shape Discrimination in the Transient Reactor Test Facility (TREAT) Hodoscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, James T.; Thompson, Scott J.; Watson, Scott M.

    We present a multi-channel, fast neutron/gamma ray detector array system that utilizes ZnS(Ag) scintillator detectors. The system employs field programmable gate arrays (FPGAs) to perform real-time, all-digital neutron/gamma-ray discrimination with pulse height and time histograms, allowing count rates in excess of 1,000,000 pulses per second per channel. The number of detector channels is scalable in blocks of 16.
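
    One common all-digital discrimination scheme that such a system can implement is charge comparison, where the ratio of the pulse's tail integral to its total integral separates neutron-like from gamma-like pulses. The Python sketch below illustrates the idea; the gate lengths, threshold, and pulse shapes are illustrative assumptions rather than the TREAT hodoscope's actual parameters.

      # Minimal sketch of charge-comparison pulse-shape discrimination: the
      # tail-to-total integral ratio separates slow (neutron-like) from fast
      # (gamma-like) pulses. Gates, cut, and pulse shapes are assumptions.
      import numpy as np

      FAST_GATE = 8        # samples in the prompt part of the pulse (assumed)
      TOTAL_GATE = 40      # samples integrated in total (assumed)
      PSD_CUT = 0.25       # tail fraction above which a pulse is called a neutron

      def classify(pulse):
          total = np.sum(pulse[:TOTAL_GATE])
          tail = np.sum(pulse[FAST_GATE:TOTAL_GATE])
          ratio = tail / total if total > 0 else 0.0
          return ("neutron" if ratio > PSD_CUT else "gamma"), round(ratio, 3)

      t = np.arange(TOTAL_GATE, dtype=float)
      gamma_like = np.exp(-t / 3.0)                                    # fast decay only
      neutron_like = 0.7 * np.exp(-t / 3.0) + 0.3 * np.exp(-t / 30.0)  # extra slow tail
      print(classify(gamma_like), classify(neutron_like))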

  8. Asymmetric Core Computing for U.S. Army High-Performance Computing Applications

    DTIC Science & Technology

    2009-04-01

    The available extract is drawn from the report's discussion of relevant technologies: reconfigurable computing refers to performing computations using Field Programmable Gate Arrays (FPGAs), considered alongside other asymmetric-core options such as the Cell processor and gaming platforms.

  9. Biologically inspired collision avoidance system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.

    2009-05-01

    In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer, inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications. These simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption, and small footprint (in line with typical autonomous vehicle constraints), and (2) the ability to implement massively parallel computational architectures, which can be leveraged to closely emulate biological systems. Combining UD's brain modeling algorithms and the power of FPGAs, this computer enables autonomous navigation in complex environments, and further types of onboard neural processing in future applications.

  10. Field Programmable Gate Array Failure Rate Estimation Guidelines for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven D.; Hatfield, Glen S.; Britton, Paul

    2017-01-01

    Today's launch vehicles' complex electronic and avionic systems heavily utilize the Field Programmable Gate Array (FPGA) integrated circuit (IC). FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B, and in control signal commands such as solenoid/servo valve actuations. This paper will demonstrate guidelines to estimate FPGA failure rates for a launch vehicle; the guidelines account for hardware, firmware, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC, FPGA memory, and clock. The firmware portion will provide guidelines on the high-level FPGA programming language and ways to account for software/code reliability growth. The radiation portion will provide guidelines on environment susceptibility as well as guidelines on tailoring other launch vehicle programs' historical data to a specific launch vehicle.

  11. A Re-programmable Platform for Dynamic Burn-in Test of Xilinx Virtex-II 3000 FPGA for Military and Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Roosta, Ramin; Wang, Xinchen; Sadigursky, Michael; Tracton, Phil

    2004-01-01

    Field Programmable Gate Arrays (FPGAs) have played increasingly important roles in military and aerospace applications. Xilinx SRAM-based FPGAs have been extensively used in commercial applications. They have been used less frequently in space flight applications due to their susceptibility to single-event upsets. Reliability of these devices in space applications is a concern that has not been addressed. The objective of this project is to design a fully programmable hardware/software platform that allows (but is not limited to) comprehensive static/dynamic burn-in testing of Virtex-II 3000 FPGAs, at-speed testing, and SEU testing. Conventional methods test very few discrete AC parameters (primarily switching) of a given integrated circuit. This approach will test any possible configuration of the FPGA and any associated performance parameters. It allows complete or partial re-programming of the FPGA and verification of the program by using readback followed by dynamic testing. Designers have full control over which functional elements of the FPGA to stress. They can completely simulate all possible types of configurations/functions. Another benefit of this platform is that it allows collecting information on the elevation of the junction temperature as a function of gate utilization, operating frequency, and functionality. A software tool has been implemented to demonstrate the various features of the system. The software consists of three major parts: the parallel interface driver, the main system procedure, and a graphical user interface (GUI).

  12. A binary link tracker for the BaBar level 1 trigger system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berenyi, A.; Chen, H.K.; Dao, K.

    1999-08-01

    The BaBar detector at PEP-II will operate in a high-luminosity e+e− collider environment near the Υ(4S) resonance with the primary goal of studying CP violation in the B meson system. In this environment, typical physics events of interest involve multiple charged particles. These events are identified by counting these tracks in a fast first level (Level 1) trigger system, by reconstructing the tracks in real time. For this purpose, a Binary Link Tracker Module (BLTM) was designed and fabricated for the BaBar Level 1 Drift Chamber trigger system. The BLTM is responsible for linking track segments, constructed by the Track Segment Finder Modules (TSFM), into complete tracks. A single BLTM module processes a 360 MBytes/s stream of segment hit data, corresponding to information from the entire Drift Chamber, and implements a fast and robust algorithm that tolerates high hit occupancies as well as local inefficiencies of the Drift Chamber. The algorithms and the necessary control logic of the BLTM were implemented in Field Programmable Gate Arrays (FPGAs), using the VHDL hardware description language. The finished 9U x 400 mm Euro-format board contains roughly 75,000 gates of programmable logic or about 10,000 lines of VHDL code synthesized into five FPGAs.

  13. A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2004-01-01

    This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point precision DFT algorithm by offloading the algorithm from the computing system. This computing system, within the context of this research, is a typical high performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification and then downloaded to the FPGA, which is itself then reconfigured. This paper will discuss the methodology for developing the DFT algorithm to be implemented on the FPGA. We will discuss the algorithm, the FPGA code effort, and the results to date.
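
    For reference, the direct DFT is a multiply-accumulate over all input samples for each output bin, which is the regular structure that maps well onto parallel FPGA hardware. The short Python sketch below shows the computation in its simplest form; the transform length and test signal are illustrative.

      # Minimal direct-DFT sketch: each output bin is a multiply-accumulate over
      # all input samples. Transform length and test tone are illustrative.
      import cmath
      import math

      def dft(x):
          N = len(x)
          return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
                  for k in range(N)]

      N = 16
      x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]   # a single tone
      bins = dft(x)
      print([round(abs(v), 3) for v in bins])     # energy appears in bins 3 and 13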

  14. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    PubMed Central

    Chao, Chun-Tang

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using the DE2-70 board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives. PMID:24790573

  15. A real-time multi-gases detection and concentration measurements based-on time-division multiplexed-lasers

    NASA Astrophysics Data System (ADS)

    Yazdandoust, Fatemeh; Tatenguem Fankem, Hervé; Milde, Tobias; Jimenez, Alvaro; Sacher, Joachim

    2018-02-01

    We report the development of a platform based on Field-Programmable Gate Arrays (FPGAs) and suitable for time-division-multiplexed DFB lasers. The designed platform is subsequently combined with a spectroscopy setup for detection and quantification of species in a gas mixture. The experimental results show a detection limit of 460 ppm, an uncertainty of 0.1%, and a computation time of less than 1000 clock cycles. The proposed system offers a high level of flexibility and is applicable to arbitrary types of gas mixtures.

  16. Subnanosecond time-to-digital converter implemented in a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Sano, Y.; Horii, Y.; Ikeno, M.; Sasaki, O.; Tomoto, M.; Uchida, T.

    2017-12-01

    Time-to-digital converters (TDCs) are used in various fields, including high-energy physics. One advantage of implementing TDCs in field-programmable gate arrays (FPGAs) is the flexibility to modify the logic, which is useful for coping with changes in experimental conditions. Recent FPGAs make it possible to implement TDCs with a time resolution of less than 10 ps. On the other hand, various drift chambers require a time resolution of O(0.1) ns, and a simple and easy-to-implement TDC is useful for robust operation. Herein an eight-channel TDC with a variable bin size down to 0.28 ns is implemented in a Xilinx Kintex-7 FPGA and tested. The TDC is based on a multisampling scheme with quad-phase clocks synchronised with an external reference clock. Calibration of the bin size is unnecessary if a stable reference clock is available, which is common in high-energy physics experiments. Depending on the channel, the standard deviation of the differential nonlinearity for a 0.28 ns bin size is 0.13-0.31. The performance has a negligible dependence on the temperature. The power consumption and the potential to extend the number of channels are also discussed.
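
    The multisampling idea can be sketched simply: the hit signal is sampled by four clock phases, so each coarse clock period is divided into four fine bins, and the timestamp is the coarse counter value concatenated with the index of the first phase that saw the edge. The Python sketch below is a behavioral illustration; the mapping of the 0.28 ns bin to a clock frequency and the example values are assumptions.

      # Behavioral sketch of a quad-phase multisampling TDC: the timestamp is the
      # coarse counter value concatenated with the index of the first sampling
      # phase that saw the edge. Bin size is from the abstract; the rest is assumed.
      BIN = 0.28e-9            # fine bin width (s)
      PHASES = 4               # quad-phase sampling

      def encode(coarse_count, phase_bits):
          """phase_bits: the hit line sampled by phases 0..3 within one coarse
          period; the first '1' marks the fine bin of the rising edge."""
          fine = phase_bits.index(1) if 1 in phase_bits else 0
          return coarse_count * PHASES + fine     # time in units of BIN

      def decode(code):
          return code * BIN

      code = encode(coarse_count=125, phase_bits=[0, 0, 1, 1])
      print(decode(code) * 1e9, "ns")             # 125 coarse periods + 2 fine bins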

  17. FPGA-Based Pulse Pile-Up Correction With Energy and Timing Recovery.

    PubMed

    Haselman, M D; Pasko, J; Hauck, S; Lewellen, T K; Miyaoka, R S

    2012-10-01

    Modern field programmable gate arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates well above 100 MHz. This, combined with FPGAs' low expense, ease of use, and selected dedicated hardware, makes them an ideal technology for a data acquisition system for a positron emission tomography (PET) scanner. The University of Washington is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher quality images. In this paper we report on an all-digital pulse pile-up correction algorithm that has been developed for the FPGA. The pile-up mitigation algorithm will allow the scanner to run at higher count rates without incurring large data losses due to the overlapping of scintillation signals. This correction technique utilizes a reference pulse to extract timing and energy information for most pile-up events. Using pulses acquired from a Zecotech Photonics MAPD-N with an LFS-3 scintillator, we show that good timing and energy information can be achieved in the presence of pile-up utilizing a moderate amount of FPGA resources.
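
    The reference-pulse idea can be illustrated as follows: the first event's tail is predicted by scaling a normalized reference pulse to the measured leading samples and subtracted from the trace, after which the second event's amplitude can be read off. The Python sketch below uses an idealized exponential pulse shape and noiseless amplitudes purely for illustration; it is not the authors' FPGA algorithm.

      # Minimal sketch of reference-pulse pile-up correction: the first event's
      # tail is predicted from a scaled reference pulse and subtracted so the
      # second event's amplitude can be recovered. Shapes and values are illustrative.
      import numpy as np

      N = 128
      t = np.arange(N, dtype=float)
      reference = np.exp(-t / 20.0)                 # normalized detector pulse shape

      def recover_energies(trace, t2):
          """trace: piled-up waveform; t2: arrival sample of the second pulse."""
          a1 = trace[0] / reference[0]              # scale from the clean leading edge
          cleaned = trace.copy()
          cleaned[t2:] -= a1 * reference[t2:]       # subtract predicted tail of pulse 1
          a2 = cleaned[t2] / reference[0]
          return a1, a2

      # Piled-up trace: amplitudes 1.0 and 0.6, second pulse arriving 30 samples late
      trace = 1.0 * reference.copy()
      trace[30:] += 0.6 * reference[: N - 30]
      print(recover_energies(trace, t2=30))         # approximately (1.0, 0.6)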

  18. On the use of programmable hardware and reduced numerical precision in earth-system modeling.

    PubMed

    Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N

    2015-09-01

    Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
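
    The effect of reduced numerical precision can be emulated in software by rounding every intermediate result to a chosen number of mantissa bits, which is roughly what a narrower FPGA number format would keep. The Python sketch below does this for a single-scale Lorenz '95-style model (the paper uses the two-scale version) with an illustrative bit count and a simple forward-Euler step.

      # Emulating reduced floating-point precision in a Lorenz '95-style model by
      # rounding every intermediate to a chosen number of mantissa bits.
      # Single-scale model and bit counts are illustrative assumptions.
      import numpy as np

      F, N_VAR, DT = 8.0, 40, 0.005

      def truncate(x, mantissa_bits):
          """Keep roughly `mantissa_bits` bits of mantissa; sign/exponent unchanged."""
          x = np.asarray(x, dtype=float)
          exponent = np.floor(np.log2(np.maximum(np.abs(x), 1e-30)))
          scale = 2.0 ** (mantissa_bits - exponent)
          return np.round(x * scale) / scale

      def step(x, bits):
          dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
          return truncate(x + DT * truncate(dx, bits), bits)   # forward Euler

      x0 = F + 0.01 * np.random.default_rng(0).normal(size=N_VAR)
      x_lo, x_hi = x0.copy(), x0.copy()
      for _ in range(50):
          x_lo, x_hi = step(x_lo, bits=10), step(x_hi, bits=52)
      print("rms difference after 50 steps:", np.sqrt(np.mean((x_lo - x_hi) ** 2)))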

  19. In-beam experience with a highly granular DAQ and control network: TrbNet

    NASA Astrophysics Data System (ADS)

    Michel, J.; Korcyl, G.; Maier, L.; Traxler, M.

    2013-02-01

    Virtually all Data Acquisition Systems (DAQ) for nuclear and particle physics experiments use a large number of Field Programmable Gate Arrays (FPGAs) for data transport and more complex tasks such as pattern recognition and data reduction. All these FPGAs in a large system have to share a common state like a trigger number or an epoch counter to keep the system synchronized for consistent event/epoch building. Additionally, the collected data has to be transported with high bandwidth, optionally via the ubiquitous Ethernet protocol. Furthermore, the FPGAs' internal states and configuration memories have to be accessed for control and monitoring purposes. Another requirement for a modern DAQ network is fault tolerance for intermittent data errors in the form of automatic retransmission of faulty data. As FPGAs suffer from Single Event Effects when exposed to ionizing particles, the system has to deal with failing FPGAs. The TrbNet protocol was developed taking all these requirements into account. Three virtual channels are merged on one physical medium: the trigger/epoch information is transported with the highest priority, the data channel is second in the priority order, and the control channel is last. Combined with a small frame size of 80 bits, this guarantees low-latency data transport: a system with 100 front-ends can be built with a one-way latency of 2.2 μs. The TrbNet protocol was implemented in each of the 550 FPGAs of the HADES upgrade project and has been successfully used during the Au+Au campaign in April 2012. With 2 × 10^6 Au ions per second and a 3% interaction ratio, the accepted trigger rate is 10 kHz while data is written to storage at 150 MBytes/s. Errors are reliably mitigated via the implemented retransmission of packets and auto-shut-down of individual links. TrbNet was also used for full monitoring of the FEE status. The network stack is written in VHDL and was successfully deployed on various Lattice and Xilinx devices. The TrbNet is also used in other experiments, such as systems for detector and electronics development for PANDA and CBM at FAIR. As a platform for such set-ups, e.g. for high-channel-count time measurement with 15 ps resolution, a generic FPGA platform (TRB3) has been developed.

  20. Creating an Assured Joint DOD and Interagency Interoperable Net-Centric Enterprise. Report of the Defense Science Board Task Force on Achieving Interoperability in a Net-Centric Environment

    DTIC Science & Technology

    2009-03-01

    The available extract notes that, under current policy, elliptic curve public key cryptography using the 256-bit prime modulus curve specified in FIPS-186-2, together with hashing via the Secure Hash Algorithm (SHA-256), is considered appropriate, and identifies field programmable gate arrays (FPGAs) as a chip design of interest, since these devices are extensively used.

  1. Modified Phasemeter for a Heterodyne Laser Interferometer

    NASA Technical Reports Server (NTRS)

    Loya, Frank M.

    2010-01-01

    Modifications have been made in the design of instruments of the type described in "Digital Averaging Phasemeter for Heterodyne Interferometry". A phasemeter of this type measures the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. The phasemeter design lacked immunity to drift of the heterodyne frequency, was bandwidth-limited by computer bus architectures then in use, and was resolution-limited by the nature of field-programmable gate arrays (FPGAs) then available. The modifications have overcome these limitations and have afforded additional improvements in accuracy, speed, and modularity. The modifications are summarized.

  2. High-Precision Pulse Generator

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2011-01-01

    A document discusses a pulse generator with subnanosecond resolution implemented with a low-cost field-programmable gate array (FPGA) at low power levels. The method used exploits the fast carry chains of certain FPGAs. Prototypes have been built and tested in both Actel AX and Xilinx Virtex 4 technologies. In-flight calibration or control can be performed by using a similar and related technique as a time interval measurement circuit, measuring a period of the stable oscillator, since the delays through the fast carry chains will vary as a result of manufacturing variances as well as environmental conditions (voltage, aging, temperature, and radiation).

  3. Implementation of a high precision multi-measurement time-to-digital convertor on a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Kuang, Jie; Wang, Yonggang; Cao, Qiang; Liu, Chong

    2018-05-01

    Time-to-digital convertors (TDCs) based on field programmable gate arrays (FPGAs) are becoming increasingly popular. Multi-measurement is an effective method to improve TDC precision beyond the cell delay limitation. However, implementing multi-measurement TDCs on FPGAs manufactured in 28 nm and more advanced processes faces new challenges. Benefiting from the ones-counter encoding scheme, which was developed in our previous work, we implement a ring oscillator multi-measurement TDC on a Xilinx Kintex-7 FPGA. Using two TDC channels to measure time intervals in the range 0-30 ns, the average RMS precision is improved to 5.76 ps, while the logic resource usage remains the same as for the one-measurement TDC and the TDC dead time is only 22 ns. The investigation demonstrates that multi-measurement methods are still applicable to current mainstream FPGAs. Furthermore, the new implementation in this paper improves the trade-off among time precision, resource usage, and TDC dead time.

  4. Real-time windowing in imaging radar using FPGA technique

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique

    2005-02-01

    The imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize SNR, and obtain better resolution. Usually pulse compression is achieved using a matched filter. The side-lobe level in the imaging radar can be reduced using special weighting-function processing. Several well-known weighting functions (Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc.) are widely used in signal processing applications. Field Programmable Gate Arrays (FPGAs) offer benefits such as rapid implementation, dynamic reconfiguration, and field programmability. This reconfigurability makes FPGAs a better solution than custom-made integrated circuits. This work aims at demonstrating a reasonably flexible implementation of linear FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Employing the FPGA and the mentioned software, we propose a pulse compression design on the FPGA using classical and novel windowing techniques to reduce the side-lobe level. This permits increasing the ability to detect small or closely spaced targets in imaging radar. The parallelism available in the FPGA permits the proposed algorithms to run in real time. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: linear FM (chirp) signal; frequency deviation ΔF of 9.375 MHz; pulse width T of 3.2 μs; 800 taps in the matched filter; sampling frequency of 253.125 MHz. The reduction of side-lobe levels was realized in real time, permitting better resolution of small targets.
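
    The windowing scheme can be illustrated by weighting the matched filter for a linear FM pulse and comparing the residual side-lobe level against the unweighted filter. The Python sketch below uses the chirp parameters quoted above; the choice of a Hamming window and the side-lobe measurement window are illustrative assumptions.

      # Pulse compression of a linear FM pulse with a windowed matched filter,
      # comparing residual side lobes against the unweighted filter. Chirp
      # parameters follow the abstract; window choice and measurement are assumed.
      import numpy as np

      FS = 253.125e6                      # sampling frequency (Hz)
      T = 3.2e-6                          # pulse width (s)
      DF = 9.375e6                        # frequency deviation (Hz)

      t = np.arange(int(T * FS)) / FS
      chirp = np.exp(1j * np.pi * (DF / T) * t ** 2)        # linear FM pulse

      matched = np.conj(chirp[::-1])                        # plain matched filter
      windowed = matched * np.hamming(matched.size)         # weighted to cut side lobes

      echo = np.concatenate([np.zeros(500), chirp, np.zeros(500)])
      for name, h in (("rectangular", matched), ("hamming", windowed)):
          y = np.abs(np.convolve(echo, h))
          p = int(y.argmax())
          outside = np.delete(y, np.arange(p - 40, p + 41))  # ignore the main lobe
          print(name, "peak / max side lobe:",
                round(20 * np.log10(y[p] / outside.max()), 1), "dB")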

  5. The Use of Field Programmable Gate Arrays (FPGA) in Small Satellite Communication Systems

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta; Sims, William Herbert; Casas, Joseph

    2015-01-01

    This paper will describe the use of digital Field Programmable Gate Arrays (FPGAs) to contribute to advancing the state-of-the-art in software defined radio (SDR) transponder design for the emerging SmallSat and CubeSat industry and to provide advances for NASA as described in the TA05 Communication and Navigation Roadmap (Ref 4). Software defined radios (SDRs) have been in use for a long time. A typical implementation of an SDR uses a processor and software to implement all the functions of filtering, carrier recovery, error correction, framing, etc. Even with modern high-speed and low-power digital signal processors, high-speed memories, and efficient coding, the compute-intensive nature of digital filtering, error correction, and other algorithms prevents modern processors from making efficient use of the available bandwidth to the ground. By using FPGAs, these compute-intensive tasks can be done in a parallel, pipelined fashion, using every clock cycle more efficiently to significantly increase throughput while maintaining low power. These methods will implement digital radios with significant data rates in the X and Ka bands. Using these state-of-the-art technologies, unprecedented uplink and downlink capabilities can be achieved in a 1/2 U sized telemetry system. Additionally, modern FPGAs have embedded processing systems, such as ARM cores, integrated inside the FPGA, allowing mundane tasks such as parameter commanding to occur easily and flexibly. Potential partners include other NASA centers, industry, and the DOD. These assets are associated with small satellite demonstration flights, LEO, and deep space applications. MSFC currently has an SDR transponder test-bed using Hardware-in-the-Loop techniques to evaluate and improve SDR technologies.

  6. An IO block array in a radiation-hardened SOI SRAM-based FPGA

    NASA Astrophysics Data System (ADS)

    Yan, Zhao; Lihua, Wu; Xiaowei, Han; Yan, Li; Qianli, Zhang; Liang, Chen; Guoquan, Zhang; Jianzhong, Li; Bo, Yang; Jiantou, Gao; Jian, Wang; Ming, Li; Guizhai, Liu; Feng, Zhang; Xufeng, Guo; Kai, Zhao; Chen, Stanley L.; Fang, Yu; Zhongli, Liu

    2012-01-01

    We present an input/output block (IOB) array used in the radiation-hardened SRAM-based field-programmable gate array (FPGA) VS1000, which is designed and fabricated with a 0.5 μm partially depleted silicon-on-insulator (SOI) logic process at the CETC 58th Institute. Corresponding with the characteristics of the FPGA, each IOB includes a local routing pool and two IO cells composed of a signal path circuit, configurable input/output buffers, and an ESD protection network. A boundary-scan path circuit can be used between the programmable buffers and the input/output circuit, or as a transparent circuit when the IOB is applied in different modes. Programmable IO buffers can be used at TTL/CMOS standard levels. The local routing pool enhances the flexibility and routability of the connection between the IOB array and the core logic. Radiation-hardened designs, including A-type and H-type body-tied transistors and special D-type registers, improve the anti-radiation performance. The ESD protection network, which provides a high-impulse discharge path on a pad, prevents breakdown of the core logic caused by large currents. These design strategies facilitate the design of FPGAs with different capacities or architectures to form a series of FPGAs. The functionality and performance of the IOB array were verified by functional testing. The radiation test indicates that the proposed VS1000 chip with the IOB array has a total dose tolerance of 100 krad(Si), a dose-rate survivability of 1.5 × 10^11 rad(Si)/s, and a neutron fluence immunity of 1 × 10^14 n/cm².

  7. Image processing applications: From particle physics to society

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and the full custom associative memory chip. The PU has been developed for real time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed to be used in accelerated pattern matching execution for Magnetic Resonance Fingerprinting (biomedical applications), in real time detection of space debris trails in astronomical images (space applications) and in brain emulation for image processing (cognitive image processing). We illustrate the potentiality of the PU for the new applications.

  8. CoNNeCT Baseband Processor Module

    NASA Technical Reports Server (NTRS)

    Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via a TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25-Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrubbing and configuration of each Xilinx FPGA.

  9. UniBoard: generic hardware for radio astronomy signal processing

    NASA Astrophysics Data System (ADS)

    Hargreaves, J. E.

    2012-09-01

    UniBoard is a generic high-performance computing platform for radio astronomy, developed as a Joint Research Activity in the RadioNet FP7 Programme. The hardware comprises eight Altera Stratix IV Field Programmable Gate Arrays (FPGAs) interconnected by a high speed transceiver mesh. Each FPGA is connected to two DDR3 memory modules and three external 10 Gbps ports. In addition, a total of 128 low voltage differential input lines permit connection to external ADC cards. The DSP capability of the board exceeds 644 × 10^9 complex multiply-accumulate operations per second. The first production run of eight boards was distributed to partners in The Netherlands, France, Italy, UK, China, and Korea in May 2011, with further production runs completed in December 2011 and early 2012. The function of the board is determined by the firmware loaded into its FPGAs. Current applications include beamformers, correlators, digital receivers, RFI mitigation for pulsar astronomy, and pulsar gating and search machines. The new UniBoard-based correlator for the European VLBI Network (EVN) uses an FX architecture with half the resources of the board devoted to station-based processing (delay and phase correction and channelization) and half to the correlation function. A single UniBoard can process a 64 MHz band from 32 stations, 2 polarizations, sampled at 8 bits. Adding more UniBoards can expand the total bandwidth of the correlator. The design is able to process both prerecorded and real-time (eVLBI) data.

  10. FPGA-based real time processing of the Plenoptic Wavefront Sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Marín, Y.; Díaz, J. J.; Piqueras, J.; García-Jiménez, J.; Rodríguez-Ramos, J. M.

    The plenoptic wavefront sensor combines measurements at the pupil and image planes in order to obtain wavefront information simultaneously from different points of view, being capable of sampling the volume above the telescope to extract tomographic information about the atmospheric turbulence. The advantages of this sensor are presented elsewhere at this conference (José M. Rodríguez-Ramos et al). This paper concentrates on the processing required for pupil plane phase recovery, and its computation in real time using FPGAs (Field Programmable Gate Arrays). This technology eases the implementation of massively parallel processing and allows tailoring the system to the requirements, maintaining flexibility, speed, and cost figures.

  11. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.
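
    For readers unfamiliar with the algorithm being mapped to hardware, the standard genetic algorithm loop (selection, crossover, mutation) is sketched below in Python on the classic one-max bit-string problem. The population size, rates, and generation count are illustrative choices and do not reflect the paper's FPGA implementation.

      # Minimal sketch of the standard genetic algorithm loop (selection,
      # crossover, mutation) on the one-max bit-string problem. All parameters
      # here are illustrative and unrelated to the paper's FPGA implementation.
      import random

      POP, BITS, GENS, P_MUT = 32, 24, 60, 0.02
      fitness = lambda ind: sum(ind)                        # one-max: count the 1 bits

      def tournament(pop):
          a, b = random.sample(pop, 2)
          return a if fitness(a) >= fitness(b) else b

      random.seed(7)
      pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
      for _ in range(GENS):
          nxt = []
          while len(nxt) < POP:
              p1, p2 = tournament(pop), tournament(pop)
              cut = random.randrange(1, BITS)               # single-point crossover
              child = [b ^ (random.random() < P_MUT) for b in p1[:cut] + p2[cut:]]
              nxt.append(child)
          pop = nxt
      print("best fitness:", max(map(fitness, pop)), "of", BITS)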

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29 W to 42 W.

  13. Assurance of Complex Electronics. What Path Do We Take?

    NASA Technical Reports Server (NTRS)

    Plastow, Richard A.

    2007-01-01

    Many of the methods used to develop software bear a close resemblance to Complex Electronics (CE) development. CE devices are now programmed to perform tasks that were previously handled in software, such as communication protocols. For instance, Field Programmable Gate Arrays (FPGAs) can have over a million logic gates, while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of "software-like" bugs such as incorrect design, logic, and unexpected interactions within the logic is great. Since CE devices are obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications to develop these devices. By using standardized software engineering methods such as checklists, missing requirements and "bugs" can be detected earlier in the development cycle, thus creating a development process for CE that will be easily maintained and configurable based on the device used.

  14. Digital Device Architecture and the Safe Use of Flash Devices in Munitions

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Flowers, David; Bergevin, Keith

    2017-01-01

    Flash technology is being utilized in fuzed munition applications and, based on the development of digital logic devices in the commercial world, usage of flash technology will increase. Digital devices of interest to designers include flash-based microcontrollers and field programmable gate arrays (FPGAs). Almost a decade ago, a study was undertaken to determine if flash-based microcontrollers could be safely used in fuzes and, if so, how should such devices be applied. The results were documented in the Technical Manual for the Use of Logic Devices in Safety Features. This paper will first review the Technical Manual and discuss the rationale behind the suggested architectures for microcontrollers and a brief review of the concern about data retention in flash cells. An architectural feature in the microcontroller under study will be discussed and its use will show how to screen for weak or failed cells during manufacture, storage, or immediately prior to use. As was done for microcontrollers a decade ago, architectures for a flash-based FPGA will be discussed, showing how it can be safely used in fuzes. Additionally, architectures for using non-volatile (including flash-based) storage will be discussed for SRAM-based FPGAs.

  15. A High-Linearity, Ring-Oscillator-Based, Vernier Time-to-Digital Converter Utilizing Carry Chains in FPGAs

    NASA Astrophysics Data System (ADS)

    Cui, Ke; Ren, Zhongjie; Li, Xiangyu; Liu, Zongkai; Zhu, Rihong

    2017-01-01

    Time-to-digital converters (TDCs) using the dedicated carry chains of field programmable gate arrays (FPGAs) are usually organized as tapped delay lines, which have been intensively researched in recent years. However, this method suffers from poor differential nonlinearity (DNL), which arises from the inherently uneven bin granularity. This paper proposes a TDC architecture which utilizes the carry chains in a quite different manner in order to alleviate this long-standing problem. Two independent carry chains working as the delay lines for the fine time interpolation are organized in a ring-oscillator-based Vernier style, and the time difference between them is finely adjusted by assigning different numbers of basic delay cells. A specific design flow is described to obtain the desired delay difference. The TDC was implemented on a Stratix III FPGA. Test results show that the obtained resolution is 31 ps and the DNL/INL is in the range (-0.080 LSB, 0.073 LSB)/(-0.087 LSB, 0.091 LSB). This demonstrates that the proposed architecture greatly improves linearity compared to previous techniques. Additionally, the resource cost is low, using only 319 LUTs and 104 registers per TDC channel.
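
    The Vernier principle underlying the architecture can be shown with a short numeric sketch: two oscillators with slightly different periods are started by the two input edges, and the number of cycles until the later-started, faster oscillator catches up measures the interval in units of the period difference. The periods below are chosen so the difference matches the reported 31 ps resolution; they are otherwise illustrative.

      # Numeric sketch of the Vernier principle: count cycles until the faster,
      # later-started oscillator catches up with the slower one. Periods are
      # chosen so their difference equals the reported 31 ps LSB.
      T_SLOW = 1.000e-9      # period of the oscillator started by the first edge
      T_FAST = 0.969e-9      # period of the oscillator started by the second edge
      LSB = T_SLOW - T_FAST  # 31 ps resolution

      def vernier_measure(dt, max_cycles=1000):
          """Digitize the interval dt: coincidence after n cycles gives n * LSB."""
          for n in range(1, max_cycles):
              if dt + n * T_FAST <= n * T_SLOW:   # fast oscillator has caught up
                  return n * LSB
          return None

      for dt in (0.10e-9, 1.234e-9, 7.777e-9):
          print(round(dt * 1e12, 1), "ps ->", round(vernier_measure(dt) * 1e12, 1), "ps")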

  16. Replication of Space-Shuttle Computers in FPGAs and ASICs

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.

    2008-01-01

    A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.

  17. L1 track trigger for the CMS HL-LHC upgrade using AM chips and FPGAs

    NASA Astrophysics Data System (ADS)

    Fedi, Giacomo

    2017-08-01

    The increase of luminosity at the HL-LHC will require the introduction of tracker information in CMS's Level-1 trigger system to maintain an acceptable trigger rate when selecting interesting events, despite the order of magnitude increase in minimum bias interactions. To meet the latency requirements, dedicated hardware has to be used. This paper presents the results of tests of a prototype system (pattern recognition mezzanine) as the core of pattern recognition and track fitting for the CMS experiment, combining the power of both associative memory custom ASICs and modern Field Programmable Gate Array (FPGA) devices. The mezzanine uses the latest available associative memory devices (AM06) and the most modern Xilinx Ultrascale FPGAs. The results of the test for a complete tower comprising about 0.5 million patterns are presented, using simulated events traversing the upgraded CMS detector as input. The paper shows the performance of the pattern matching, track finding, and track fitting, along with the latency and processing time needed. The relative pT resolution of the muons measured using the reconstruction algorithm is of the order of 1% in the range 3-100 GeV/c.

  18. Sensor Systems Based on FPGAs and Their Applications: A Survey

    PubMed Central

    de la Piedra, Antonio; Braeken, An; Touhafi, Abdellah

    2012-01-01

    In this manuscript, we present a survey of designs and implementations of research sensor nodes that rely on FPGAs, either based upon standalone platforms or as a combination of microcontroller and FPGA. Several current challenges in sensor networks are distinguished and linked to the features of modern FPGAs. As it turns out, low-power optimized FPGAs are able to enhance the computation of several types of algorithms in terms of speed and power consumption in comparison to microcontrollers of commercial sensor nodes. We show that architectures based on the combination of microcontrollers and FPGA can play a key role in the future of sensor networks, in fields where processing capabilities such as strong cryptography, self-testing and data compression, among others, are paramount.

  19. Floating-Point Units and Algorithms for field-programmable gate arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication to discuss the techniques we use to implement the floating-point units. For more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL, and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and construct the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.

  20. Adaptive Instrument Module: Space Instrument Controller "Brain" through Programmable Logic Devices

    NASA Technical Reports Server (NTRS)

    Darrin, Ann Garrison; Conde, Richard; Chern, Bobbie; Luers, Phil; Jurczyk, Steve; Mills, Carl; Day, John H. (Technical Monitor)

    2001-01-01

    The Adaptive Instrument Module (AIM) will be the first true demonstration of reconfigurable computing with field-programmable gate arrays (FPGAs) in space, enabling the 'brain' of the system to evolve or adapt to changing requirements. In partnership with NASA Goddard Space Flight Center and the Australian Cooperative Research Centre for Satellite Systems (CRC-SS), APL has built the flight version to be flown on the Australian university-class satellite FEDSAT. The AIM provides satellites the flexibility to adapt to changing mission requirements by reconfiguring standardized processing hardware rather than incurring the large costs associated with new builds. This ability to reconfigure the processing in response to changing mission needs leads to true evolvable computing, wherein the instrument 'brain' can learn from new science data in order to perform state-of-the-art data processing. The development of the AIM is significant in its enormous potential to reduce total life-cycle costs for future space exploration missions. The advent of RAM-based FPGAs whose configuration can be changed at any time has enabled the development of the AIM for processing tasks that could not be performed in software. The use of the AIM enables reconfiguration of the FPGA circuitry while the spacecraft is in flight, with many accompanying advantages. The AIM demonstrates the practicalities of using reconfigurable computing hardware devices by conducting a series of designed experiments. These include demonstrations of data compression, data filtering, communication message processing, and inter-experiment data computation. The second generation is the Adaptive Processing Template (ADAPT), which is further described in this paper. The next step forward is to make the hardware itself adaptable, and ADAPT pursues this challenge by developing a reconfigurable module that will be capable of functioning efficiently in various applications. ADAPT will take advantage of radiation tolerant RAM-based field programmable gate array (FPGA) technology to develop a reconfigurable processor that combines the flexibility of a general purpose processor running software with the performance of application specific processing hardware for a variety of high performance computing applications.

  1. Single Event Effects Test Results for Advanced Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory R.; Swift, Gary M.

    2006-01-01

    Reconfigurable Field Programmable Gate Arrays (FPGAs) from Altera and Actel and an FPGA-based quick-turn Application Specific Integrated Circuit (ASIC) from Altera were subjected to single-event testing using heavy ions. Both Altera devices (Stratix II and HardCopy II) exhibited a low latchup threshold (below an LET of 3 MeV-cm2/mg) and thus are not recommended for applications in the space radiation environment. The flash-based Actel ProASIC Plus device did not exhibit latchup to an effective LET of 75 MeV-cm2/mg at room temperature. In addition, these tests did not show flash cell charge loss (upset) or retention damage. Upset characterization of the design-level flip-flops yielded an LET threshold below 10 MeV-cm2/mg and a high-LET cross section of about 1x10^-6 cm2/bit for storing ones and about 1x10^-7 cm2/bit for storing zeros. Thus, the ProASIC device may be suitable for critical flight applications with appropriate triple modular redundancy mitigation techniques.
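
    For readers unfamiliar with the units quoted above, a cross section is simply the observed upset count normalized to the delivered beam fluence (and, for a per-bit figure, to the number of bits exercised). A minimal sketch of that bookkeeping, with invented numbers:

      # Illustrative single-event-upset cross-section calculation (numbers invented).
      upsets = 230                 # bit flips observed during the run
      fluence = 1.0e7              # particles / cm^2 delivered by the beam
      bits_tested = 2.0e6          # number of flip-flops or memory bits exercised

      device_cross_section = upsets / fluence                     # cm^2 per device
      per_bit_cross_section = device_cross_section / bits_tested  # cm^2 per bit

      print(f"device sigma  = {device_cross_section:.2e} cm^2")
      print(f"per-bit sigma = {per_bit_cross_section:.2e} cm^2/bit")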

  2. High-performance computing for airborne applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  3. Modular design and implementation of field-programmable-gate-array-based Gaussian noise generator

    NASA Astrophysics Data System (ADS)

    Li, Yuan-Ping; Lee, Ta-Sung; Hwang, Jeng-Kuang

    2016-05-01

    The modular design of a Gaussian noise generator (GNG) based on field-programmable gate array (FPGA) technology was studied. A new range reduction architecture was included in a series of elementary function evaluation modules and was integrated into the GNG system. The approximation and quantisation errors for the square root module with a first polynomial approximation were high; therefore, we used the central limit theorem (CLT) to improve the noise quality, which resulted in an output rate of one sample per clock cycle. We subsequently applied Newton's method for the square root module, eliminating the need for the CLT and raising the output rate to two samples per clock cycle (>200 million samples per second). Two statistical tests confirmed that our GNG is of high quality. Furthermore, the range reduction, which is used to overcome the limited interval of the function approximation algorithms of the System Generator platform using Xilinx FPGAs, appeared to have higher numerical accuracy, operated at >350 MHz, and can be suitably applied to any function evaluation.
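
    A software analogue of the two ideas mentioned above (not the FPGA datapath itself): summing uniform variates approximates a Gaussian by the central limit theorem, and Newton's iteration refines a square-root estimate from a crude seed. Sample counts and iteration counts here are illustrative.

      import random

      def gaussian_clt(n=12):
          """Approximate a standard normal sample by summing n uniform variates
          (central limit theorem). With n = 12 the variance is exactly 1."""
          return sum(random.random() for _ in range(n)) - n / 2.0

      def newton_sqrt(x, iterations=4):
          """Refine sqrt(x) with Newton's method from a crude initial guess."""
          guess = x if x > 1.0 else 1.0
          for _ in range(iterations):
              guess = 0.5 * (guess + x / guess)
          return guess

      samples = [gaussian_clt() for _ in range(100000)]
      mean = sum(samples) / len(samples)
      var = sum((s - mean) ** 2 for s in samples) / len(samples)
      print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")   # close to 0 and 1
      print(newton_sqrt(2.0))                              # ~ 1.414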

  4. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  5. Physics of Failure Analysis of Xilinx Flip Chip CCGA Packages: Effects of Mission Environments on Properties of LP2 Underfill and ATI Lid Adhesive Materials

    NASA Technical Reports Server (NTRS)

    Suh, Jong-ook

    2013-01-01

    The Xilinx Virtex 4QV and 5QV (V4 and V5) are next-generation field-programmable gate arrays (FPGAs) for space applications. However, there have been concerns within the space community regarding the non-hermeticity of V4/V5 packages; polymeric materials such as the underfill and lid adhesive will be directly exposed to the space environment. In this study, reliability concerns associated with the non-hermeticity of V4/V5 packages were investigated by studying the properties and behavior of the underfill and lid adhesive materials used in V4/V5 packages.

  6. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  7. A novel productivity-driven logic element for field-programmable devices

    NASA Astrophysics Data System (ADS)

    Marconi, Thomas; Bertels, Koen; Gaydadjiev, Georgi

    2014-06-01

    Although various techniques have been proposed for power reduction in field-programmable devices (FPDs), they are all still based on conventional logic elements (LEs). In the conventional LE, the output of the combinational logic (e.g. the look-up table (LUT) in many field-programmable gate arrays (FPGAs)) is connected to the input of the storage element, while the D flip-flop (DFF) is always clocked even when not necessary. Such unnecessary transitions waste power. To address this problem, we propose a novel productivity-driven LE with a reduced number of transitions. The differences between our LE and the conventional LE are the type of flip-flop used and the internal LE organisation. In our LEs, DFFs have been replaced by T flip-flops with the T input permanently connected to logic value 1. Instead of connecting the output of the combinational logic to the FF input, we use it as the FF clock. The proposed LE has been validated via Simulation Program with Integrated Circuit Emphasis (SPICE) simulations for a 45-nm Complementary Metal-Oxide-Semiconductor (CMOS) technology as well as via real Computer-Aided Design (CAD) tools on a real FPGA using the standard Microelectronic Center of North Carolina (MCNC) benchmark circuits. The experimental results show that FPDs using our proposal not only have 48% lower total power but also run 17% faster than conventional FPDs on average.
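
    A crude activity-counting model of the power argument above, assuming a 10% toggle rate on the combinational output: a conventional DFF sees every global clock edge, while a storage element clocked by the combinational output is only clocked when that output actually transitions. This is a counting exercise, not the paper's circuit.

      import random

      random.seed(0)
      cycles = 1000
      activity = 0.1   # probability the LUT output changes in a given cycle (illustrative)

      lut_out = 0
      conventional_clock_events = 0
      proposed_clock_events = 0

      for _ in range(cycles):
          conventional_clock_events += 1          # DFF sees every global clock edge
          if random.random() < activity:          # LUT output toggles this cycle
              lut_out ^= 1
              if lut_out == 1:                    # rising edge drives the T flip-flop clock
                  proposed_clock_events += 1

      print(conventional_clock_events, proposed_clock_events)   # e.g. 1000 vs ~50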

  8. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    PubMed

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor SNN activity. Our contribution is intended to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly cheaper than fabricating a customized neuromorphic chip. This could be of potential value to the computational neuroscience and neuromorphic engineering communities.

  9. Data acquisition system issues for large experiments

    NASA Astrophysics Data System (ADS)

    Siskind, E. J.

    2007-09-01

    This talk consists of personal observations on two classes of data acquisition ("DAQ") systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic "lessons learned" recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of "system-on-a-chip" ("SOC") or "platform" field-programmable gate arrays ("FPGAs") in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed.

  10. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    PubMed

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.

  11. Radiation effects and mitigation strategies for modern FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stettler, M. W.; Caffrey, M. P.; Graham, P. S.

    2004-01-01

    Field Programmable Gate Array devices have become the technology of choice in small volume modern instrumentation and control systems. These devices have always offered significant advantages in flexibility, and recent advances in fabrication have greatly increased logic capacity, substantially increasing the number of applications for this technology. Unfortunately, the increased density (and corresponding shrinkage of process geometry) has made these devices more susceptible to failure due to external radiation. This has been an issue for space based systems for some time, but is now becoming an issue for terrestrial systems in elevated radiation environments and commercial avionics as well. Characterizing the failure modes of Xilinx FPGAs, and developing mitigation strategies, is the subject of ongoing research by a consortium of academic, industrial, and governmental laboratories. This paper presents background information on radiation effects and failure modes, as well as current and future mitigation techniques. In particular, the availability of very large FPGA devices, complete with generous amounts of RAM and embedded processor(s), has led to the implementation of complete digital systems on a single device, bringing issues of system reliability and redundancy management to the chip level. Radiation effects on a single FPGA are increasingly likely to have system level consequences, and will need to be addressed in current and future designs.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, Mark Walter

    Sandia National Laboratories is currently developing new processing and data communication architectures for use in future satellite payloads. These architectures will leverage the flexibility and performance of state-of-the-art static-random-access-memory-based Field Programmable Gate Arrays (FPGAs). One such FPGA is the radiation-hardened version of the Virtex-5 being developed by Xilinx. However, not all features of this FPGA are being radiation-hardened by design and could still be susceptible to on-orbit upsets. One such feature is the embedded hard-core PPC440 processor. Since this processor is implemented in the FPGA as a hard core, traditional mitigation approaches such as Triple Modular Redundancy (TMR) are not available to improve the processor's on-orbit reliability. The goal of this work is to investigate techniques other than TMR that can help mitigate the embedded hard-core PPC440 processor within the Virtex-5 FPGA. Implementing various mitigation schemes reliably within the PPC440 offers a powerful reconfigurable computing resource to these node-based processing architectures. This document summarizes the work done on the cache mitigation scheme for the embedded hard-core PPC440 processor within the Virtex-5 FPGAs, and describes in detail the design of the cache mitigation scheme and the testing conducted at the radiation effects facility on the Texas A&M campus.

  13. OpenPET Hardware, Firmware, Software, and Board Design Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Nimeh, Faisal; Choong, Woon-Sengq; Moses, William W.

    OpenPET is an open source, flexible, high-performance, and modular data acquisition system for a variety of applications. The OpenPET electronics are capable of reading analog voltage or current signals from a wide variety of sensors. The electronics boards make extensive use of field programmable gate arrays (FPGAs) to provide flexibility and scalability. Firmware and software for the FPGAs and computer are used to control and acquire data from the system. The command and control flow is similar to the data flow; however, commands are initiated from the computer and travel down a tree topology (i.e., from top to bottom). Each node in the tree discovers its parent and children, and all addresses are configured accordingly. A user (or a script) initiates a command from the computer. This command is translated and encoded for the corresponding child (e.g., SB, MB, DB, etc.). Each node then passes the command to its corresponding child(ren) by looking at the destination address. Finally, once the command reaches its desired destination(s), the corresponding node(s) execute the command and send a reply, if required. All the firmware, software, and electronics board design files are distributed through the OpenPET website (http://openpet.lbl.gov).
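
    A hedged sketch of the top-down command flow described above, with a made-up node hierarchy and addressing: the command travels from the computer toward the addressed board and the reply comes back from the node that executed it. Only the SB/MB/DB abbreviations are taken from the abstract; everything else is invented for illustration.

      # Simplified model of command routing down a board tree (addresses invented).
      class Node:
          def __init__(self, name, children=None):
              self.name = name
              self.children = children or []

          def handle(self, dest, command):
              """Execute the command here if addressed, otherwise forward to children."""
              if dest == self.name:
                  return f"{self.name}: executed '{command}'"
              for child in self.children:
                  reply = child.handle(dest, command)
                  if reply is not None:
                      return reply
              return None

      # Hypothetical tree: support board (SB) -> multiplexer boards (MB) -> detector boards (DB).
      tree = Node("SB", [Node("MB0", [Node("DB0"), Node("DB1")]),
                         Node("MB1", [Node("DB2")])])
      print(tree.handle("DB2", "read_status"))   # SB forwards to MB1, which forwards to DB2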

  14. Optimized FPGA Implementation of the Thyroid Hormone Secretion Mechanism Using CAD Tools.

    PubMed

    Alghazo, Jaafar M

    2017-02-01

    The goal of this paper is to implement the secretion mechanism of the Thyroid Hormone (TH), described by bio-mathematical differential equations (DEs), on an FPGA chip. A Hardware Description Language (HDL) is used to develop a behavioral model of the mechanism derived from the DEs. The Thyroid Hormone secretion mechanism is simulated with the interaction of the related stimulating and inhibiting hormones. The simulation is synthesized with the aid of CAD tools and downloaded onto a Field Programmable Gate Array (FPGA) chip. The chip output shows behavior identical to that of the designed algorithm in simulation. It is concluded that the chip mimics the Thyroid Hormone secretion mechanism. The chip operates in real time as a computer-independent, stand-alone system.
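
    The abstract does not give the differential equations, so the sketch below only illustrates the general approach: a small hypothetical feedback model (a stimulating hormone drives TH secretion, TH inhibits the stimulus) integrated with a fixed-step Euler loop, the kind of behavioral model an HDL implementation would mimic. All coefficients and the time step are invented.

      # Hypothetical two-variable hormone feedback model, Euler integration.
      dt = 0.01            # time step (arbitrary units)
      steps = 5000
      tsh, th = 1.0, 0.0   # stimulating hormone level, thyroid hormone level

      for _ in range(steps):
          d_tsh = 0.8 - 0.5 * th - 0.1 * tsh   # stimulus inhibited by TH
          d_th = 0.6 * tsh - 0.2 * th          # TH secretion driven by the stimulus
          tsh += dt * d_tsh
          th += dt * d_th

      print(f"steady state: TSH ~ {tsh:.2f}, TH ~ {th:.2f}")   # settles near 0.5 and 1.5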

  15. Filling the Assurance Gap on Complex Electronics

    NASA Technical Reports Server (NTRS)

    Plastow, Richard A.

    2007-01-01

    Many of the methods used to develop software bear a close resemblance to Complex Electronics (CE) development. CE are now programmed to perform tasks that were previously handled by software, such as communication protocols. For example, the James Webb Space Telescope will use Field Programmable Gate Arrays (FPGAs), which can have over a million logic gates, to send telemetry. System-on-chip (SoC) devices, another type of complex electronics, can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic, and unexpected interactions within the logic is great. Since CE devices are obscuring the hardware/software boundary, mature software methodologies have been proposed, with slight modifications, to develop these devices. By using standardized S/W Engineering methods such as checklists, missing requirements and bugs can be detected earlier in the development cycle, thus creating a development process for CE that can be easily maintained and configurable based on the device used.

  16. Software Process Assurance for Complex Electronics

    NASA Technical Reports Server (NTRS)

    Plastow, Richard A.

    2007-01-01

    Complex Electronics (CE) now perform tasks that were previously handled in software, such as communication protocols. Many methods used to develop software bear a close resemblance to CE development. Field Programmable Gate Arrays (FPGAs) can have over a million logic gates, while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic, and unexpected interactions within the logic is great. With CE devices obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that used standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices and techniques were used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that was more easily maintained, consistent and configurable based on the device used.

  17. A Model for Minimizing Numeric Function Generator Complexity and Delay

    DTIC Science & Technology

    2007-12-01

    allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods

  18. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, both in the accuracy of the detection algorithms and in the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.

  19. 3D imaging and wavefront sensing with a plenoptic objective

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to compensate for the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we will present our own implementations related to the aforementioned aspects, but also two new developments: a portable plenoptic objective to transform any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.

  20. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable code across different platforms such as CPUs, graphics processing units (GPUs), Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow in favor of a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce hardware development time, as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound on the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.

  1. NASA Accelerates SpaceCube Technology into Orbit

    NASA Technical Reports Server (NTRS)

    Petrick, David

    2010-01-01

    On May 11, 2009, STS-125 Space Shuttle Atlantis blasted off from Kennedy Space Center on a historic mission to service the Hubble Space Telescope (HST). In addition to sending up the hardware and tools required to repair the observatory, the servicing team at NASA's Goddard Space Flight Center also sent along a complex experimental payload called Relative Navigation Sensors (RNS). The main objective of the RNS payload was to provide real-time image tracking of HST during rendezvous and docking operations. RNS was a complete success, and was brought to life by four Xilinx FPGAs (Field Programmable Gate Arrays) tightly packed into one integrated computer called SpaceCube. SpaceCube is a compact, reconfigurable, multiprocessor computing platform for space applications demanding extreme processing capabilities based on Xilinx Virtex 4 FX60 FPGAs. In a matter of months, the concept quickly went from the white board to a fully funded flight project. The 4-inch by 4-inch SpaceCube processor card was prototyped by a group of Goddard engineers using internal research funding. Once engineers were able to demonstrate the processing power of SpaceCube to NASA, HST management stood behind the product and invested in a flight qualified version, inserting it into the heart of the RNS system. With the determination of putting Xilinx into space, the team strengthened to a small army and delivered a fully functional, space qualified system to the mission.

  2. FPGA applications for single dish activity at Medicina radio telescopes

    NASA Astrophysics Data System (ADS)

    Bartolini, M.; Naldi, G.; Mattana, A.; Maccaferri, A.; De Biaggi, M.

    FPGA technologies have gained major attention in recent years in the field of radio astronomy. At the Medicina radio telescopes, FPGAs have been used over the last ten years for a number of purposes, and in this article we examine the applications developed and installed for the Medicina Single Dish 32 m Antenna: these range from high-performance digital signal processing to instrument control developed on top of smaller FPGAs.

  3. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of the FPGAs makes it possible to effectively alter the design to some extent to satisfy different requirements without adding hardware. The implementation could be easily propagated to future FPGA generations and/or to custom application-specific integrated circuits.

  4. Software Process Assurance for Complex Electronics (SPACE)

    NASA Technical Reports Server (NTRS)

    Plastow, Richard A.

    2007-01-01

    Complex Electronics (CE) are now programmed to perform tasks that were previously handled in software, such as communication protocols. Many of the methods used to develop software bear a close resemblance to CE development. For instance, Field Programmable Gate Arrays (FPGAs) can have over a million logic gates while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic, and unexpected interactions within the logic is great. Since CE devices are obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that looks at using standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices and techniques can be used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that will be more easily maintained, consistent and configurable based on the device used.

  5. Fault-Tolerant Software-Defined Radio on Manycore

    NASA Technical Reports Server (NTRS)

    Ricketts, Scott

    2015-01-01

    Software-defined radio (SDR) platforms generally rely on field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), but such architectures require significant software development. In addition, application demands for radiation mitigation and fault tolerance exacerbate programming challenges. MaXentric Technologies, LLC, has developed a manycore-based SDR technology that provides 100 times the throughput of conventional radiation-hardened general purpose processors. Manycore systems (30-100 cores and beyond) have the potential to provide high processing performance at error rates that are equivalent to current space-deployed uniprocessor systems. MaXentric's innovation is a highly flexible radio, providing over-the-air reconfiguration; adaptability; and uninterrupted, real-time, multimode operation. The technology is also compliant with NASA's Space Telecommunications Radio System (STRS) architecture. In addition to its many uses within NASA communications, the SDR can also serve as a highly programmable research-stage prototyping device for new waveforms and other communications technologies. It can also support noncommunication codes on its multicore processor, collocated with the communications workload, reducing the size, weight, and power of the overall system by aggregating processing jobs onto a single board computer.

  6. Imaging photomultiplier array with integrated amplifiers and high-speed USB interface

    NASA Astrophysics Data System (ADS)

    Blacksell, M.; Wach, J.; Anderson, D.; Howard, J.; Collis, S. M.; Blackwell, B. D.; Andruczyk, D.; James, B. W.

    2008-10-01

    Multianode photomultiplier tube (PMT) arrays are finding application as convenient high-speed light sensitive devices for plasma imaging. This paper describes the development of a USB-based "plug-n-play" 16-channel PMT camera with 16-bit simultaneous acquisition of 16 signal channels at rates up to 2 MS/s per channel. The preamplifiers and digital hardware are packaged in a compact housing which incorporates magnetic shielding, on-board generation of the high-voltage PMT bias, an optical filter mount and slits, and F-mount lens adaptor. Triggering, timing, and acquisition are handled by four field-programmable gate arrays (FPGAs) under instruction from a master FPGA controlled by a computer with a LabVIEW interface. We present technical design details and specifications and illustrate performance with high-speed images obtained on the H-1 heliac at the ANU.

  7. Imaging photomultiplier array with integrated amplifiers and high-speed USB interface.

    PubMed

    Blacksell, M; Wach, J; Anderson, D; Howard, J; Collis, S M; Blackwell, B D; Andruczyk, D; James, B W

    2008-10-01

    Multianode photomultiplier tube (PMT) arrays are finding application as convenient high-speed light sensitive devices for plasma imaging. This paper describes the development of a USB-based "plug-n-play" 16-channel PMT camera with 16-bit simultaneous acquisition of 16 signal channels at rates up to 2 MS/s per channel. The preamplifiers and digital hardware are packaged in a compact housing which incorporates magnetic shielding, on-board generation of the high-voltage PMT bias, an optical filter mount and slits, and F-mount lens adaptor. Triggering, timing, and acquisition are handled by four field-programmable gate arrays (FPGAs) under instruction from a master FPGA controlled by a computer with a LabVIEW interface. We present technical design details and specifications and illustrate performance with high-speed images obtained on the H-1 heliac at the ANU.

  8. Statistical Anomalies of Bitflips in SRAMs to Discriminate SBUs From MCUs

    NASA Astrophysics Data System (ADS)

    Clemente, Juan Antonio; Franco, Francisco J.; Villa, Francesca; Baylac, Maud; Rey, Solenne; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul

    2016-08-01

    Recently, the occurrence of multiple events in static tests has been investigated by checking the statistical distribution of the difference between the addresses of the words containing bitflips. That method has been successfully applied to Field Programmable Gate Arrays (FPGAs) and the original authors indicate that it is also valid for SRAMs. This paper presents a modified methodology that is based on checking the XORed addresses with bitflips, rather than on the difference. Irradiation tests on CMOS 130 & 90 nm SRAMs with 14-MeV neutrons have been performed to validate this methodology. Results in high-altitude environments are also presented and cross-checked with theoretical predictions. In addition, this methodology has also been used to detect modifications in the organization of said memories. Theoretical predictions have been validated with actual data provided by the manufacturer.
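
    A minimal sketch of the XOR-based check described above, with invented addresses and without the paper's full statistical test: if the upset addresses were truly independent, no XOR value should recur much more often than chance, whereas bitflips from the same multiple-cell event (typically in physically adjacent words) make particular XOR separations recur.

      from collections import Counter
      from itertools import combinations

      # Hypothetical word addresses observed with bitflips during a static test.
      upset_addresses = [0x01A4, 0x01A5, 0x0F10, 0x0F11, 0x2300, 0x5A5A]

      xor_counts = Counter(a ^ b for a, b in combinations(upset_addresses, 2))

      # XOR values that recur more often than expected for independent upsets
      # point to words hit by the same multiple-cell event.
      for value, n in xor_counts.most_common(3):
          print(f"XOR = 0x{value:04X} seen {n} times")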

  9. Introduction to the Special Issue on Digital Signal Processing in Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Kocz, J.; Bailes, M.; Greenhill, L. J.

    2016-03-01

    Advances in astronomy are intimately linked to advances in digital signal processing (DSP). This special issue is focused upon advances in DSP within radio astronomy. The trend within that community is to use off-the-shelf digital hardware where possible and to leverage advances in high-performance computing. In particular, graphics processing units (GPUs) and field programmable gate arrays (FPGAs) are being used in place of application-specific integrated circuits (ASICs); high-speed Ethernet and Infiniband are being used for interconnect in place of custom backplanes. Further, to lower hurdles in digital engineering, communities have designed and released general-purpose FPGA-based DSP systems, such as the CASPER ROACH board, ASTRON Uniboard, and CSIRO Redback board. In this introductory paper, we give a brief historical overview, a summary of recent trends, and an outlook on future directions.

  10. Single Event Effects Test Results for the Actel ProASIC Plus and Altera Stratix-II Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory R.; Swift, Gary M.

    2006-01-01

    This work describes radiation testing of Actel's ProASIC Plus and Altera's Stratix-II FPGAs. The Actel Device Under Test (DUT) was a ProASIC Plus APA300-PQ208 nonvolatile, field reprogrammable device based on a 0.22-micron flash-based LVCMOS technology. Limited investigation has taken place into flash-based FPGA technologies; therefore this test served as a preliminary reference point for various SEE behaviors. The Altera DUT was a Stratix-II EP2S60F1020C4. Single Event Upset (SEU) and Single Event Latchup (SEL) were the focus of these studies. For the Actel, a latchup test was done at an effective LET of 75.0 MeV-cm2/mg at room temperature, and no latchup was detected when irradiated to a total fluence of 1x10^7 particles/cm^2. The Altera part was shown to latch up at room temperature.

  11. The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger

    NASA Astrophysics Data System (ADS)

    Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.

    2009-05-01

    Future large arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array technology allows the use of lookup tables at the array trigger level to form a real-time pattern recognition trigger that capitalizes on the multiple viewpoints of the shower at different shower core distances. A proof-of-principle system is currently under construction. It is based on 400 MHz FPGAs, and the goal is camera trigger rates of up to 10 MHz and tunable cosmic-ray background suppression at the array level.

  12. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    PubMed

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

    Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such, it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference, as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption compared to many-core processors and Graphics Processing Units (GPUs).
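
    The per-site independence mentioned above is what makes the PLF easy to pipeline. A simplified per-site kernel (one internal node with two children, 4x4 transition matrices, invented numbers) looks roughly like this in software; the real accelerator evaluates many such sites concurrently.

      # Simplified phylogenetic likelihood kernel: combine two children's partial
      # likelihoods at one internal node, independently for every alignment site.
      STATES = 4  # A, C, G, T

      def node_partials(left, right, p_left, p_right):
          """left/right: per-site lists of 4 partial likelihoods; p_*: 4x4 transition matrices."""
          result = []
          for site_l, site_r in zip(left, right):        # no dependence between sites
              site = []
              for parent in range(STATES):
                  l = sum(p_left[parent][child] * site_l[child] for child in range(STATES))
                  r = sum(p_right[parent][child] * site_r[child] for child in range(STATES))
                  site.append(l * r)
              result.append(site)
          return result

      # Toy data: two sites, both children observed as A (site 1) and C (site 2).
      left = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
      right = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
      p = [[0.91, 0.03, 0.03, 0.03],
           [0.03, 0.91, 0.03, 0.03],
           [0.03, 0.03, 0.91, 0.03],
           [0.03, 0.03, 0.03, 0.91]]
      print(node_partials(left, right, p, p))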

  13. Reconfigurable Fault Tolerance for FPGAs

    NASA Technical Reports Server (NTRS)

    Shuler, Robert, Jr.

    2010-01-01

    The invention allows a field-programmable gate array (FPGA) or similar device to be efficiently reconfigured in whole or in part to provide higher capacity, non-redundant operation. The redundant device consists of functional units such as adders or multipliers, configuration memory for the functional units, a programmable routing method, configuration memory for the routing method, and various other features such as block RAM (random access memory), I/O (input/output) capability, dedicated carry logic, etc. The redundant device has three identical sets of functional units and routing resources and majority voters that correct errors. The configuration memory may or may not be redundant, depending on need. For example, SRAM-based FPGAs will need some type of radiation-tolerant configuration memory, or they will need triple-redundant configuration memory. Flash or anti-fuse devices will generally not need redundant configuration memory. Some means of loading and verifying the configuration memory is also required. These are all components of the pre-existing redundant FPGA. This innovation modifies the voter to accept a MODE input, which specifies whether ordinary voting is to occur, or if redundancy is to be split. Generally, additional routing resources will also be required to pass data between sections of the device created by splitting the redundancy. In redundancy mode, the voters produce an output corresponding to the two inputs that agree, in the usual fashion. In the split mode, the voters select just one input and convey this to the output, ignoring the other inputs. In a dual-redundant system (as opposed to triple-redundant), instead of a voter, there is some means to latch or gate a state update only when both inputs agree. In this case, the invention would require modification of the latch or gate so that it would operate normally in redundant mode, and would separately latch or gate the inputs in non-redundant mode.
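
    A minimal behavioral sketch of the modified voter described above: in redundant mode it majority-votes its three inputs, while in split mode it simply forwards one selected input so the three copies can operate as independent, non-redundant logic. Signal and mode names are illustrative, not taken from the invention.

      # Majority voter with a MODE input (illustrative model of the proposed element).
      REDUNDANT, SPLIT = "redundant", "split"

      def voter(a, b, c, mode, select=0):
          """In redundant mode return the majority of a, b, c;
          in split mode forward the selected input unmodified."""
          if mode == REDUNDANT:
              return (a & b) | (a & c) | (b & c)   # bitwise majority of the three copies
          return (a, b, c)[select]

      print(voter(1, 1, 0, REDUNDANT))        # -> 1, single-copy error masked
      print(voter(1, 1, 0, SPLIT, select=2))  # -> 0, third copy used independently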

  14. Localized Triple Modular Redundancy vs. Distributed Triple Modular Redundancy on a ProASIC3E Reprogrammable FPGA

    NASA Technical Reports Server (NTRS)

    McGuffey, Alex; Berg, Melanie; Pellish, Jonathan

    2010-01-01

    Field programmable gate arrays (FPGAs) are used in every space application. Currently, most space flight applications use radiation hardened (RH) FPGAs, which are very expensive. There is a desire to use cheaper, commercial off-the-shelf (COTS) reprogrammable FPGAs, which are more susceptible to radiation effects known as single-event effects (SEE). The RH parts have SEE and total ionizing dose (TID) hardened elements pre-integrated into the part. This means that the designer does not need to implement any hardening techniques while configuring the device. The COTS parts, on the other hand, must have mitigation implemented by design. The design techniques this project examines are localized triple modular redundancy (LTMR) and distributed triple modular redundancy (DTMR). LTMR triples every flip-flop in the device architecture, while DTMR triples everything except for the global routes (clocks, resets, and enables). The testing was performed on a ProASIC3E FPGA at the Texas A&M cyclotron facility. Two design architectures were used: shift registers and counters, both with LTMR and DTMR mitigation techniques. The test results show that DTMR is more effective at reducing SEE than LTMR. We also determined that there was not a significant difference between the use of shift registers and counters for test purposes. More testing is required to obtain additional linear energy transfer values for each architecture and mitigation technique in order to determine the most cost-effective method of SEE mitigation.

  15. Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc

    2007-03-01

    Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operation count and the high demand put on the memory subsystem. In the past, solving this problem has led to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. Originally aimed at the gaming market, IBM, Toshiba and Sony have introduced the Cell Broadband Engine (CBE) processor, often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm when implemented on a standard PC and on the Cell processor. We compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of 512x512x512 voxels. It took 3.2 minutes on the PC (single CPU) and as little as 13.6 seconds on the Cell.
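
    For orientation, the core of any backprojection, shown here for a simple 2D parallel-beam geometry rather than the cone-beam case benchmarked above, is a nested loop that accumulates, for each pixel, the detector sample its projection falls on; the arithmetic and memory traffic of exactly this loop are what the ASIC, FPGA, GPU, and Cell implementations compete on. Sizes below are kept tiny on purpose.

      import math

      def backproject(sinogram, n_angles, n_det, size):
          """Naive 2D parallel-beam backprojection onto a size x size image.
          sinogram[a][d] holds the sample for angle index a and detector bin d."""
          image = [[0.0] * size for _ in range(size)]
          center = (size - 1) / 2.0
          for a in range(n_angles):
              theta = math.pi * a / n_angles
              cos_t, sin_t = math.cos(theta), math.sin(theta)
              for y in range(size):
                  for x in range(size):
                      # signed distance of the pixel from the projection axis
                      t = (x - center) * cos_t + (y - center) * sin_t
                      d = int(round(t + n_det / 2.0))
                      if 0 <= d < n_det:
                          image[y][x] += sinogram[a][d]
          return image

      # Tiny example: 8 angles, 16 detector bins, 16x16 image of a flat sinogram.
      sino = [[1.0] * 16 for _ in range(8)]
      img = backproject(sino, 8, 16, 16)
      print(img[8][8])   # the center pixel accumulates a contribution from every angle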

  16. Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Finkel, Hal; Yoshii, Kazutomo

    Compared to central processing units (CPUs) and graphics processing units (GPUs), field programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to a Hardware Description Language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development allows software developers, who have little FPGA knowledge, to take advantage of FPGA-based application acceleration. This improves developer productivity and makes FPGA-based acceleration accessible to hardware and software developers. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++ and SystemC specifications to be targeted directly to Xilinx FPGAs without the need to create RTL manually. The white paper [1] published recently by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features of the Vivado HLS compiler and the resource and power benefits of converting a design from floating point to fixed point. To better understand the variable-precision features in terms of resource usage and performance, this report presents the experimental results of evaluating the FIR example using Vivado HLS 2017.1 and a Kintex Ultrascale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
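
    To make the floating- vs. fixed-point trade-off concrete, the sketch below runs the same FIR filter with double-precision coefficients and with coefficients and inputs quantized to a 16-bit fixed-point format; the tap values, word length, and signal are illustrative and not taken from the Xilinx white paper.

      # FIR filtering with floating-point vs. 16-bit fixed-point data.
      FRAC_BITS = 15                      # Q1.15 fixed-point format (illustrative)
      SCALE = 1 << FRAC_BITS

      coeffs = [0.07, 0.24, 0.38, 0.24, 0.07]        # hypothetical low-pass taps
      coeffs_fx = [round(c * SCALE) for c in coeffs] # quantized taps

      def fir_float(x, h):
          return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
                  for n in range(len(x))]

      def fir_fixed(x, h_fx):
          # inputs quantized to Q1.15, products accumulated as integers, then rescaled
          x_fx = [round(v * SCALE) for v in x]
          out = []
          for n in range(len(x)):
              acc = sum(h_fx[k] * x_fx[n - k] for k in range(len(h_fx)) if n - k >= 0)
              out.append(acc / (SCALE * SCALE))
          return out

      signal = [0.0, 0.9, 0.0, 0.0, -0.5, 0.25, 0.0, 0.0]
      ref = fir_float(signal, coeffs)
      fx = fir_fixed(signal, coeffs_fx)
      # small deviation introduced by the 16-bit quantization of taps and samples
      print(max(abs(a - b) for a, b in zip(ref, fx)))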

  17. FPGA-Based, Self-Checking, Fault-Tolerant Computers

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2004-01-01

    A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and highspeed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant- computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
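
    A highly simplified software model of the recovery idea described above: two lock-stepped copies of the same computation are compared before their result is allowed to update memory, writes are logged in a recovery cache, and the logged old values are restored when the copies disagree. The fault injection and data values are invented for illustration.

      # Lock-step execution with a recovery cache (illustrative model only).
      memory = {"counter": 0}
      recovery_log = []                     # (address, old value) pairs for the open interval

      def write(addr, value):
          recovery_log.append((addr, memory[addr]))   # save old value before overwriting
          memory[addr] = value

      def rollback():
          for addr, old in reversed(recovery_log):
              memory[addr] = old
          recovery_log.clear()

      def checkpoint():
          recovery_log.clear()                        # logged writes become permanent

      def run_interval(inject_fault=False):
          # Both lock-stepped CPUs compute the same update; compare their results.
          out_a = memory["counter"] + 1
          out_b = out_a ^ 4 if inject_fault else out_a
          write("counter", out_a)
          if out_a == out_b:
              checkpoint()
              return "checkpoint taken"
          rollback()
          return "mismatch: interval rolled back"

      print(run_interval())                    # checkpoint taken   (counter -> 1)
      print(run_interval(inject_fault=True))   # mismatch: interval rolled back
      print(memory["counter"])                 # 1, the last checkpointed state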

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brusati, M.; Camplani, A.; Cannon, M.

    SRAM-based Field Programmable Gate Array (FPGA) logic devices are very attractive in applications where high data throughput is needed, such as the latest generation of High Energy Physics (HEP) experiments. FPGAs have been rarely used in such experiments because of their sensitivity to radiation. The present paper proposes a mitigation approach applied to commercial FPGA devices to meet the reliability requirements for the front-end electronics of the Liquid Argon (LAr) electromagnetic calorimeter of the ATLAS experiment, located at CERN. Particular attention will be devoted to defining a proper mitigation scheme for the multi-gigabit transceivers embedded in the FPGA, which is a critical part of the LAr data acquisition chain. A demonstrator board is being developed to validate the proposed methodology. Mitigation techniques such as Triple Modular Redundancy (TMR) and scrubbing will be used to increase the robustness of the design and to maximize the fault tolerance from Single-Event Upsets (SEUs).

  19. PCI-based WILDFIRE reconfigurable computing engines

    NASA Astrophysics Data System (ADS)

    Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.

    1996-10-01

    WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application-specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid-prototyping, real-time video processing and other DSP applications.

  20. State-of-the-art in Heterogeneous Computing

    DOE PAGES

    Brodtkorb, Andre R.; Dyken, Christopher; Hagen, Trond R.; ...

    2010-01-01

    Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and/or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.

  1. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  2. FPGA-based gating and logic for multichannel single photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pooser, Raphael C; Earl, Dennis Duncan; Evans, Philip G

    2012-01-01

    We present results characterizing multichannel InGaAs single photon detectors utilizing gated passive quenching circuits (GPQC), self-differencing techniques, and field programmable gate array (FPGA)-based logic for both diode gating and coincidence counting. Utilizing FPGAs for the diode gating frontend and the logic counting backend has the advantage of low cost compared to custom built logic circuits and current off-the-shelf detector technology. Further, FPGA logic counters have been shown to work well in quantum key distribution (QKD) test beds. Our setup combines multiple independent detector channels in a reconfigurable manner via an FPGA backend and post processing in order to perform coincidence measurements between any two or more detector channels simultaneously. Using this method, states from a multi-photon polarization entangled source are detected and characterized via coincidence counting on the FPGA. Photon detection events are also processed by the quantum information toolkit for application testing (QITKAT).

  3. Hardware Architecture Study for NASA's Space Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John

    2008-01-01

    This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) in addition to flexible and tunable radio frequency (RF) front-ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module as well as the module interfaces. Trade-offs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.

  4. Feasibility study for future implantable neural-silicon interface devices.

    PubMed

    Al-Armaghany, Allann; Yu, Bo; Mak, Terrence; Tong, Kin-Fai; Sun, Yihe

    2011-01-01

    The emerging neural-silicon interface devices bridge nerve systems with artificial systems and play a key role in neuro-prostheses and neuro-rehabilitation applications. Integrating neural signal collection, processing and transmission on a single device will make clinical applications more practical and feasible. This paper focuses on the wireless antenna part and real-time neural signal analysis part of implantable brain-machine interface (BMI) devices. We propose to use millimeter-wave for wireless connections between different areas of a brain. Various antennas, including microstrip patch, monopole antenna and substrate integrated waveguide antenna, are considered for the intra-cortical proximity communication. A Hebbian eigenfilter-based method is proposed for multi-channel neuronal spike sorting. Folding and parallel design techniques are employed to explore various structures and make a trade-off between area and power consumption. Field-programmable gate arrays (FPGAs) are used to evaluate various structures.

  5. Development of a front end controller/heap manager for PHENIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, M.N.; Allen, M.D.; Musrock, M.S.

    1996-12-31

    A controller/heap manager has been designed for applicability to all detector subsystem types of PHENIX. The heap manager performs all functions associated with front end electronics control including ADC and analog memory control, data collection, command interpretation and execution, and data packet forming and communication. Interfaces to the unit consist of a timing and control bus, a serial bus, a parallel data bus, and a trigger interface. The topology developed is modular so that many functional blocks are identical for a number of subsystem types. Programmability is maximized through the use of flexible modular functions and implementation using field programmable gate arrays (FPGAs). Details of unit design and functionality will be discussed with particular detail given to subsystems having analog memory-based front end electronics. In addition, mode control, serial functions, and FPGA implementation details will be presented.

  6. SAD-Based Stereo Matching Using FPGAs

    NASA Astrophysics Data System (ADS)

    Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas

    In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
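
    A reference version of the SAD disparity search described above can be written in a few lines of C++; the block size, disparity range, and synthetic test images below are illustrative, and an FPGA implementation would evaluate the candidate disparities and window sums in parallel rather than sequentially.

      // Reference SAD disparity search for one pixel, assuming rectified 8-bit
      // grayscale images stored row-major. Block size and disparity range are
      // illustrative, not the values used in the chapter.
      #include <cstdint>
      #include <cstdlib>
      #include <iostream>
      #include <limits>
      #include <vector>

      int sad(const std::vector<uint8_t>& L, const std::vector<uint8_t>& R,
              int W, int x, int y, int d, int half) {
        int s = 0;
        for (int dy = -half; dy <= half; ++dy)
          for (int dx = -half; dx <= half; ++dx)
            s += std::abs(int(L[(y + dy) * W + (x + dx)]) -
                          int(R[(y + dy) * W + (x + dx - d)]));
        return s;
      }

      // Returns the disparity minimizing the SAD cost at (x, y); an FPGA would
      // evaluate all candidate disparities concurrently.
      int best_disparity(const std::vector<uint8_t>& L, const std::vector<uint8_t>& R,
                         int W, int x, int y, int maxDisp, int half) {
        int best = 0, bestCost = std::numeric_limits<int>::max();
        for (int d = 0; d <= maxDisp && x - d - half >= 0; ++d) {
          int c = sad(L, R, W, x, y, d, half);
          if (c < bestCost) { bestCost = c; best = d; }
        }
        return best;
      }

      int main() {
        const int W = 64, H = 16;
        std::vector<uint8_t> L(W * H), R(W * H);
        for (int y = 0; y < H; ++y)
          for (int x = 0; x < W; ++x) {
            L[y * W + x] = uint8_t((x * 13 + y * 7) % 256);
            R[y * W + x] = uint8_t(((x + 5) * 13 + y * 7) % 256);  // true disparity: 5
          }
        std::cout << "disparity at (32, 8): "
                  << best_disparity(L, R, W, 32, 8, 16, 2) << "\n";
      }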

  7. Upgrading the Digital Electronics of the PEP-II Bunch Current Monitors at the Stanford Linear Accelerator Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kline, Josh; /SLAC

    2006-08-28

    The testing of the upgrade prototype for the bunch current monitors (BCMs) in the PEP-II storage rings at the Stanford Linear Accelerator Center (SLAC) is the topic of this paper. Bunch current monitors are used to measure the charge in the electron/positron bunches traveling in particle storage rings. The BCMs in the PEP-II storage rings need to be upgraded because components of the current system have failed and are known to be failure prone with age, and several of the integrated chips are no longer produced, making repairs difficult if not impossible. The main upgrade is replacing twelve old (1995) field programmable gate arrays (FPGAs) with a single Virtex II FPGA. The prototype was tested using computer synthesis tools, a commercial signal generator, and a fast pulse generator.

  8. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    NASA Technical Reports Server (NTRS)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.

  9. Comparing an FPGA to a Cell for an Image Processing Application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
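
    Iris matching is typically reduced to bit-level comparisons of binary iris codes, which is what makes it attractive for both FPGA and Cell parallelization. The sketch below shows a masked, normalized Hamming-distance comparison; the 512-bit code length and the use of Hamming distance are assumptions for illustration and may differ from the algorithm benchmarked in the paper.

      // Sketch of a masked, normalized Hamming-distance comparison between two
      // binary iris codes; the 512-bit length and synthetic codes are illustrative.
      #include <bitset>
      #include <iostream>

      constexpr int kBits = 512;

      double iris_distance(const std::bitset<kBits>& a, const std::bitset<kBits>& b,
                           const std::bitset<kBits>& maskA,
                           const std::bitset<kBits>& maskB) {
        std::bitset<kBits> valid = maskA & maskB;   // bits unoccluded in both codes
        std::bitset<kBits> diff = (a ^ b) & valid;
        return valid.count() ? double(diff.count()) / valid.count() : 1.0;
      }

      int main() {
        std::bitset<kBits> codeA, codeB, mask;
        mask.set();                                 // assume no occlusion
        for (int i = 0; i < kBits; i += 3) { codeA.set(i); codeB.set(i); }
        codeB.flip(0);                              // one differing bit
        std::cout << "normalized distance: "
                  << iris_distance(codeA, codeB, mask, mask) << "\n";
      }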

  10. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Richard A.; Radford, David C.

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.

  11. Design Tools for Reconfigurable Hardware in Orbit (RHinO)

    NASA Technical Reports Server (NTRS)

    French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian

    2004-01-01

    The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions into the tool suite will also allow evolvable algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of Reconfigurable Hardware in Orbit, via an integrated design tool-suite aiming to reduce risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.

  12. IOTA: the array controller for a gigapixel OTCCD camera for Pan-STARRS

    NASA Astrophysics Data System (ADS)

    Onaka, Peter; Tonry, John; Luppino, Gerard; Lockhart, Charles; Lee, Aaron; Ching, Gregory; Isani, Sidik; Uyeshiro, Robin

    2004-09-01

    The Pan-STARRS project has undertaken an ambitious effort to develop a completely new array controller architecture that is fundamentally driven by the large 1-gigapixel, low-noise, high-speed OTCCD mosaic requirements as well as the size, power and weight restrictions of the Pan-STARRS telescope. The result is a very small form factor next-generation controller scalar building block with 1 Gigabit Ethernet interfaces that will be assembled into a system that will read out 512 outputs at ~1 megapixel sample rates on each output. The paper will also discuss critical technology and fabrication techniques such as greater than 1-MHz analog-to-digital converters (ADCs), multiple fast sampling and digital calculation of multiple correlated samples (DMCS), ball grid array (BGA) packaged circuits, and LINUX running on embedded field programmable gate arrays (FPGAs) with hard-core microprocessors for the prototype currently being developed.

  13. Slow Controls Using the Axiom M5235BCC

    NASA Astrophysics Data System (ADS)

    Hague, Tyler

    2008-10-01

    The Forward Vertex Detector group at PHENIX plans to adopt the Axiom M5235 Business Card Controller for use as slow controls. It is also being evaluated for slow controls on Fermilab E906. This controller features the Freescale MCF5235 microprocessor. It also has three parallel buses, these being the MCU port, BUS port, and enhanced Time Processing Unit (eTPU) port. The BUS port uses a chip select module with three external chip selects to communicate with peripherals. This will be used to communicate with and configure Field Programmable Gate Arrays (FPGAs). The controller also has an Ethernet port which can use several different protocols such as TCP and UDP. This will be used to transfer files with computers on a network. The M5235 Business Card Controller will be placed in a VME crate along with a VME card and a Spartan-3 FPGA.

  14. 20-GFLOPS QR processor on a Xilinx Virtex-E FPGA

    NASA Astrophysics Data System (ADS)

    Walke, Richard L.; Smith, Robert W. M.; Lightbody, Gaye

    2000-11-01

    Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a very computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation has traditionally been an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with mantissa size optimized to the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve predictable dense layout and high-speed operation. We present results that show that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR-processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry standard hardware description language VHDL.
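
    The QR decomposition used for weight calculation is typically built from Givens rotations, which map naturally onto systolic or linear processor arrays. The following C++ sketch solves a small least-squares system with Givens rotations and back-substitution; the matrix size and data are illustrative, and the paper's design uses reduced-mantissa floating-point operators rather than double precision.

      // Sketch of adaptive-weight calculation via QR decomposition built from
      // Givens rotations. Matrix size and data are illustrative.
      #include <cmath>
      #include <iostream>
      #include <vector>

      using Mat = std::vector<std::vector<double>>;

      // Zero A[i][j] by rotating rows j and i; the right-hand side b is updated too.
      void givens(Mat& A, std::vector<double>& b, int i, int j) {
        double a = A[j][j], c = A[i][j];
        double r = std::hypot(a, c);
        if (r == 0.0) return;
        double cs = a / r, sn = c / r;
        for (std::size_t k = 0; k < A[0].size(); ++k) {
          double t1 = A[j][k], t2 = A[i][k];
          A[j][k] = cs * t1 + sn * t2;
          A[i][k] = -sn * t1 + cs * t2;
        }
        double t1 = b[j], t2 = b[i];
        b[j] = cs * t1 + sn * t2;
        b[i] = -sn * t1 + cs * t2;
      }

      int main() {
        // Solve A w = b in the least-squares sense for a small 4x3 system.
        Mat A = {{4, 1, 2}, {2, 3, 1}, {1, 2, 5}, {3, 1, 1}};
        std::vector<double> b = {1, 2, 3, 4};
        const int m = static_cast<int>(A.size()), n = static_cast<int>(A[0].size());
        for (int j = 0; j < n; ++j)
          for (int i = j + 1; i < m; ++i) givens(A, b, i, j);
        std::vector<double> w(n);
        for (int j = n - 1; j >= 0; --j) {        // back-substitution on R w = Q^T b
          double s = b[j];
          for (int k = j + 1; k < n; ++k) s -= A[j][k] * w[k];
          w[j] = s / A[j][j];
        }
        std::cout << "weights:";
        for (double wi : w) std::cout << " " << wi;
        std::cout << "\n";
      }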

  15. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than a single-FPGA system is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources supported by one FPGA, and to increase the parallel processing ability and scalability of the system.

  16. MicroShell Minimalist Shell for Xilinx Microprocessors

    NASA Technical Reports Server (NTRS)

    Werne, Thomas A.

    2011-01-01

    MicroShell is a lightweight shell environment for engineers and software developers working with embedded microprocessors in Xilinx FPGAs. (MicroShell has also been successfully ported to run on ARM Cortex-M1 microprocessors in Actel ProASIC3 FPGAs, but without project-integration support.) MicroShell decreases the time spent performing initial tests of field-programmable gate array (FPGA) designs, simplifies running customizable one-time-only experiments, and provides a familiar-feeling command-line interface. The program comes with a collection of useful functions and enables the designer to add an unlimited number of custom commands, which are callable from the command line. The commands are parameterizable (using the C-based command-line parameter idiom), so the designer can use one function to exercise hardware with different values. Also, since many hardware peripherals instantiated in FPGAs have reasonably simple register-mapped I/O interfaces, the engineer can edit and view hardware parameter settings at any time without stopping the processor. MicroShell comes with a set of support scripts that interface seamlessly with Xilinx's EDK tool. Adding an instance of MicroShell to a project is as simple as marking a check box in a library configuration dialog box and specifying a software project directory. The support scripts then examine the hardware design, build design-specific functions, conditionally include processor-specific functions, and complete the compilation process. For code-size-constrained designs, most of the stock functionality can be excluded from the compiled library. When all of the configurable options are removed from the binary, MicroShell has an unoptimized memory footprint of about 4.8 kB and a size-optimized footprint of about 2.3 kB. Since MicroShell allows unfettered access to all processor-accessible memory locations, it is possible to perform live patching on a running system. This can be useful, for instance, if a bug is discovered in a routine but the system cannot be rebooted: MicroShell allows a skilled operator to directly edit the binary executable in memory. With some forethought, MicroShell code can be located in a different memory location from custom code, permitting the custom functionality to be overwritten at any time without stopping the controlling shell.

  17. A Nonlinearity Minimization-Oriented Resource-Saving Time-to-Digital Converter Implemented in a 28 nm Xilinx FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2015-10-01

    Because large nonlinearity errors exist in current tapped-delay-line (TDL) style field programmable gate array (FPGA)-based time-to-digital converters (TDCs), bin-by-bin calibration techniques have to be resorted to in order to achieve high measurement resolution. If the TDL in the selected FPGA is significantly affected by changes in ambient temperature, the bin-by-bin calibration table has to be updated as frequently as possible. The on-line calibration and calibration table updating increase the TDC design complexity and limit the system performance to some extent. This paper proposes a method to minimize the nonlinearity errors of TDC bins, so that bin-by-bin calibration may not be needed while a reasonably high time resolution is maintained. The method is a two-pass approach: by a bin realignment, the large number of wasted zero-width bins in the original TDL is reused and the granularity of the bins is improved; by a bin decimation, the bin size and its uniformity are traded off, and the time interpolation by the delay line becomes more precise, so that bin-by-bin calibration is not necessary. Using Xilinx 28 nm FPGAs, in which the TDL property is not very sensitive to ambient temperature, the proposed TDC achieves approximately 15 ps root-mean-square (RMS) time resolution by dual-channel measurements of time intervals over the range of operating temperature. Because the calibration is removed and fewer logic resources are required for data post-processing, the method offers greater multi-channel capability.

  18. A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto

    2017-10-01

    Hyperspectral imaging systems provide images in which single pixels have information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which allow different materials that may look the same in a traditional RGB image to be distinguished. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices such as Field Programmable Gate Arrays (FPGAs) are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies which can be used to perform the implementation of the desired algorithms on the specific device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level synthesis design language can be used to efficiently develop applications on multiple and different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.

  19. VHDL Descriptions for the FPGA Implementation of PWL-Function-Based Multi-Scroll Chaotic Oscillators

    PubMed Central

    2016-01-01

    Nowadays, chaos generators are an attractive field for research, and the challenge is their realization for the development of engineering applications. For more than three decades, chaotic oscillators have been designed using discrete electronic devices, very few with integrated circuit technology, and in this work we propose the use of field-programmable gate arrays (FPGAs) for fast prototyping. FPGA-based applications require expertise in programming with the very-high-speed integrated circuit hardware description language (VHDL). In this manner, we detail the VHDL descriptions of chaos generators for fast prototyping from high-level programming using Python. The case studies are three kinds of chaos generators based on piecewise-linear (PWL) functions that can be systematically augmented to generate even and odd numbers of scrolls. We introduce new algorithms for the VHDL description of PWL functions such as saturated function series, negative slopes and sawtooth. The generated VHDL code is portable, reusable and open source, to be synthesized in an FPGA. Finally, we show experimental results for observing 2-, 10- and 30-scroll attractors. PMID:27997930

  20. VHDL Descriptions for the FPGA Implementation of PWL-Function-Based Multi-Scroll Chaotic Oscillators.

    PubMed

    Tlelo-Cuautle, Esteban; Quintas-Valles, Antonio de Jesus; de la Fraga, Luis Gerardo; Rangel-Magdaleno, Jose de Jesus

    2016-01-01

    Nowadays, chaos generators are an attractive field for research, and the challenge is their realization for the development of engineering applications. For more than three decades, chaotic oscillators have been designed using discrete electronic devices, very few with integrated circuit technology, and in this work we propose the use of field-programmable gate arrays (FPGAs) for fast prototyping. FPGA-based applications require expertise in programming with the very-high-speed integrated circuit hardware description language (VHDL). In this manner, we detail the VHDL descriptions of chaos generators for fast prototyping from high-level programming using Python. The case studies are three kinds of chaos generators based on piecewise-linear (PWL) functions that can be systematically augmented to generate even and odd numbers of scrolls. We introduce new algorithms for the VHDL description of PWL functions such as saturated function series, negative slopes and sawtooth. The generated VHDL code is portable, reusable and open source, to be synthesized in an FPGA. Finally, we show experimental results for observing 2-, 10- and 30-scroll attractors.

  1. The Advanced Gamma-ray Imaging System (AGIS): Topological Array Trigger

    NASA Astrophysics Data System (ADS)

    Smith, Andrew W.

    2010-03-01

    AGIS is a concept for the next-generation ground-based gamma-ray observatory. It will be an array of 36 imaging atmospheric Cherenkov telescopes (IACTs) sensitive in the energy range from 50 GeV to 200 TeV. The required improvements in sensitivity, angular resolution, and reliability of operation relative to the present-generation instruments impose demanding technological and cost requirements on the design of the telescopes and on the triggering and readout systems for AGIS. To maximize the capabilities of large arrays of IACTs with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We outline the status of the development of a stereoscopic array trigger that calculates image parameters and correlates them across a subset of telescopes. Field Programmable Gate Arrays (FPGAs) implement the real-time pattern recognition to suppress cosmic rays and night-sky background events. A proof-of-principle system is being developed to run at camera trigger rates up to 10 MHz and array-level rates up to 10 kHz.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandes, Ana; Pereira, Rita C.; Sousa, Jorge

    The Instituto de Plasmas e Fusao Nuclear (IPFN) has developed dedicated re-configurable modules based on field programmable gate array (FPGA) devices for several nuclear fusion machines worldwide. Moreover, new Advanced Telecommunications Computing Architecture (ATCA) based modules developed by IPFN are already included in the ITER catalogue. One of the requirements for re-configurable modules operating in future nuclear environments, including ITER, is remote update capability. Accordingly, this work presents an alternative method for FPGA remote programming to be implemented in new ATCA-based re-configurable modules. FPGAs are volatile devices and their programming code is usually stored in dedicated flash memories for proper configuration during module power-on. The presented method is capable of storing new FPGA code in Serial Peripheral Interface (SPI) flash memories using the PCI Express (PCIe) network established on the ATCA back-plane, linking data acquisition endpoints and the data switch blades. The method is based on the Xilinx Quick Boot application note, adapted to the PCIe protocol and ATCA-based modules. (authors)

  3. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
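
    The contracting-grid idea itself is compact: evaluate the likelihood on a coarse grid, re-centre on the best grid point, shrink the grid, and repeat. The sketch below uses a stand-in Gaussian objective in place of the real PMT-response likelihood; grid size, contraction factor, and pass count are illustrative assumptions.

      // 2-D contracting-grid maximum-likelihood search: evaluate a likelihood on a
      // coarse grid, re-centre on the best point, shrink, and repeat. The Gaussian
      // objective is a stand-in for the real PMT-response model.
      #include <cmath>
      #include <iostream>

      double likelihood(double x, double y) {       // hypothetical objective
        const double tx = x - 1.7, ty = y + 0.4;    // true position (1.7, -0.4)
        return std::exp(-(tx * tx + ty * ty));
      }

      int main() {
        double cx = 0.0, cy = 0.0;   // grid centre
        double span = 10.0;          // half-width of the search grid
        const int N = 5;             // N x N evaluations per pass (parallel on FPGA/GPU)
        for (int pass = 0; pass < 8; ++pass) {
          double bestX = cx, bestY = cy, best = -1.0;
          for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
              double x = cx + span * (2.0 * i / (N - 1) - 1.0);
              double y = cy + span * (2.0 * j / (N - 1) - 1.0);
              double L = likelihood(x, y);
              if (L > best) { best = L; bestX = x; bestY = y; }
            }
          cx = bestX;
          cy = bestY;
          span *= 0.5;               // contract the grid each pass
        }
        std::cout << "estimate: (" << cx << ", " << cy << ")\n";
      }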

  4. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas P.

    2015-01-01

    SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  5. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  6. A New Arbiter PUF for Enhancing Unpredictability on FPGA

    PubMed Central

    Machida, Takanori; Yamamoto, Dai; Iwamoto, Mitsugu; Sakiyama, Kazuo

    2015-01-01

    In general, conventional Arbiter-based Physically Unclonable Functions (PUFs) generate responses with low unpredictability. The N-XOR Arbiter PUF, proposed in 2007, is a well-known technique for improving this unpredictability. In this paper, we propose a novel design for the Arbiter PUF, called the Double Arbiter PUF, to enhance the unpredictability on field programmable gate arrays (FPGAs), and we compare our design to conventional N-XOR Arbiter PUFs. One metric for judging the unpredictability of responses is to measure their tolerance to machine-learning attacks. Although our previous work showed the superiority of Double Arbiter PUFs regarding unpredictability, its details were not clarified. We evaluate the dependency on the number of training samples for machine learning, and we discuss the reason why Double Arbiter PUFs are more tolerant than the N-XOR Arbiter PUFs by evaluating intrachip variation. Further, the conventional Arbiter PUFs and proposed Double Arbiter PUFs are evaluated according to other metrics, namely, their uniqueness, randomness, and steadiness. We demonstrate that the 3-1 Double Arbiter PUF achieves the best performance overall. PMID:26491720
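
    The behavior of an Arbiter PUF is often studied with the additive linear delay model, in which each stage contributes a delay difference and a challenge bit of 1 swaps the two racing paths. The C++ model below follows that standard abstraction, with random numbers standing in for manufacturing variation; it is a behavioral sketch of a basic Arbiter PUF, not of the Double Arbiter PUF circuit proposed in the paper.

      // Behavioral model of an n-stage Arbiter PUF using the additive linear delay
      // model. Per-stage delay differences are random stand-ins for manufacturing
      // variation; each seed models a different "chip".
      #include <iostream>
      #include <random>
      #include <vector>

      struct ArbiterPUF {
        std::vector<double> dStraight, dCrossed;   // per-stage delay differences
        ArbiterPUF(int stages, unsigned seed) {
          std::mt19937 gen(seed);
          std::normal_distribution<double> var(0.0, 1.0);
          for (int i = 0; i < stages; ++i) {
            dStraight.push_back(var(gen));
            dCrossed.push_back(var(gen));
          }
        }
        int respond(const std::vector<int>& challenge) const {
          double diff = 0.0;
          for (std::size_t i = 0; i < challenge.size(); ++i) {
            // challenge bit selects the straight or crossed path in stage i;
            // crossing swaps the two racing signals, negating the difference.
            double d = challenge[i] ? dCrossed[i] : dStraight[i];
            diff = challenge[i] ? -diff + d : diff + d;
          }
          return diff > 0.0 ? 1 : 0;               // arbiter decision
        }
      };

      int main() {
        ArbiterPUF chipA(64, 1), chipB(64, 2);     // two different "chips"
        std::vector<int> challenge(64);
        for (int i = 0; i < 64; ++i) challenge[i] = (i * 7) % 2;
        std::cout << "chip A: " << chipA.respond(challenge)
                  << "  chip B: " << chipB.respond(challenge) << "\n";
      }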

  7. Mitigated FPGA design of multi-gigabit transceivers for application in high radiation environments of High Energy Physics experiments

    DOE PAGES

    Brusati, M.; Camplani, A.; Cannon, M.; ...

    2017-02-20

    SRAM-based Field Programmable Gate Array (FPGA) logic devices are very attractive in applications where high data throughput is needed, such as the latest generation of High Energy Physics (HEP) experiments. FPGAs have been rarely used in such experiments because of their sensitivity to radiation. The present paper proposes a mitigation approach applied to commercial FPGA devices to meet the reliability requirements for the front-end electronics of the Liquid Argon (LAr) electromagnetic calorimeter of the ATLAS experiment, located at CERN. Particular attention will be devoted to defining a proper mitigation scheme for the multi-gigabit transceivers embedded in the FPGA, which is a critical part of the LAr data acquisition chain. A demonstrator board is being developed to validate the proposed methodology. Mitigation techniques such as Triple Modular Redundancy (TMR) and scrubbing will be used to increase the robustness of the design and to maximize the fault tolerance from Single-Event Upsets (SEUs).

  8. A Plug and Play GNC Architecture Using FPGA Components

    NASA Technical Reports Server (NTRS)

    KrishnaKumar, K.; Kaneshige, J.; Waterman, R.; Pires, C.; Ippoloito, C.

    2005-01-01

    The goal of Plug and Play, or PnP, is to allow hardware and software components to work together automatically, without requiring manual setup procedures. As a result, new or replacement hardware can be plugged into a system and automatically configured with the appropriate resource assignments. However, in many cases it may not be practical or even feasible to physically replace hardware components. One method for handling these types of situations is through the incorporation of reconfigurable hardware such as Field Programmable Gate Arrays, or FPGAs. This paper describes a phased approach to developing a Guidance, Navigation, and Control (GNC) architecture that expands on the traditional concepts of PnP in order to accommodate hardware reconfiguration without requiring detailed knowledge of the hardware. This is achieved by establishing a functional-based interface that defines how the hardware will operate and allows the hardware to reconfigure itself. The resulting system combines the flexibility of manipulating software components with the speed and efficiency of hardware.

  9. FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.

    PubMed

    Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan

    2017-07-01

    In ultrasound image analysis, the speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains as a challenge for the speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values, when the tissue deformation is large. The major drawback of these methods is the high computational complexity. Even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs on handling different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on the NVIDIA graphic card (GeForce GTX 580).

  10. Present and Future Applications of Digital Electronics in Nuclear Science - a Commercial Prospective

    NASA Astrophysics Data System (ADS)

    Tan, Hui

    2011-10-01

    Digital readout electronics instrumenting radiation detectors have experienced significant advancements in the last decade or so. This on one hand can be attributed to the steady improvements in commercial digital processing components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), field-programmable gate arrays (FPGAs), and digital signal processors (DSPs), and on the other hand can also be attributed to the increasing needs for improved time, position, and energy resolution in nuclear physics experiments, which have spurred the rapid development of commercial off-the-shelf high-speed, high-resolution digitizers or spectrometers. Absent from conventional analog electronics, the capability to record fast-decaying pulses from radiation detectors in digital readout electronics has profoundly benefited nuclear physics researchers, since they can now perform detailed pulse processing for applications such as gamma-ray tracking and decay-event selection and reconstruction. In this talk, present state-of-the-art digital readout electronics and its applications in a variety of nuclear science fields will be discussed, and future directions in hardware development for digital electronics will also be outlined, all from the perspective of a commercial manufacturer of digital electronics.

  11. High-Performance, Radiation-Hardened Electronics for Space Environments

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Watson, Michael D.; Frazier, Donald O.; Adams, James H.; Johnson, Michael A.; Kolawa, Elizabeth A.

    2007-01-01

    The Radiation Hardened Electronics for Space Environments (RHESE) project endeavors to advance the current state-of-the-art in high-performance, radiation-hardened electronics and processors, ensuring successful performance of space systems required to operate within extreme radiation and temperature environments. Because RHESE is a project within the Exploration Technology Development Program (ETDP), RHESE's primary customers will be the human and robotic missions being developed by NASA's Exploration Systems Mission Directorate (ESMD) in partial fulfillment of the Vision for Space Exploration. Benefits are also anticipated for NASA's science missions to planetary and deep-space destinations. As a technology development effort, RHESE provides a broad-scoped, full spectrum of approaches to environmentally harden space electronics, including new materials, advanced design processes, reconfigurable hardware techniques, and software modeling of the radiation environment. The RHESE sub-project tasks are: Self-Reconfigurable Electronics for Extreme Environments, Radiation Effects Predictive Modeling, Radiation Hardened Memory, Single Event Effects (SEE) Immune Reconfigurable Field Programmable Gate Array (FPGA) (SIRF), Radiation Hardening by Software, Radiation Hardened High Performance Processors (HPP), Reconfigurable Computing, Low Temperature Tolerant MEMS by Design, and Silicon-Germanium (SiGe) Integrated Electronics for Extreme Environments. These nine sub-project tasks are managed by technical leads located across five different NASA field centers, including Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, Langley Research Center, and Marshall Space Flight Center. The overall RHESE integrated project management responsibility resides with NASA's Marshall Space Flight Center (MSFC). Initial technology development emphasis within RHESE focuses on the hardening of Field Programmable Gate Arrays (FPGAs) and Field Programmable Analog Arrays (FPAAs) for use in reconfigurable architectures. As these component/chip-level technologies mature, the RHESE project emphasis shifts to focus on efforts encompassing total processor hardening techniques and board-level electronic reconfiguration techniques featuring spare and interface modularity. This phased approach to distributing emphasis between technology developments provides hardened FPGAs/FPAAs for early mission infusion, then migrates to hardened, board-level, high-speed processors with associated memory elements and high-density storage for the longer-duration missions encountered for Lunar Outpost and Mars Exploration occurring later in the Constellation schedule.

  12. Use of Field Programmable Gate Array Technology in Future Space Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Tate, Robert

    2005-01-01

    Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault-tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provides the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised by component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.

  13. Digital Front End for Wide-Band VLBI Science Receiver

    NASA Technical Reports Server (NTRS)

    Jongeling, Andre; Sigman, Elliott; Navarro, Robert; Goodhart, Charles; Rogstad, Steve; Chandra, Kumar; Finley, Sue; Trinh, Joseph; Soriano, Melissa; White, Les

    2006-01-01

    An upgrade to the very-long-baseline-interferometry (VLBI) science receiver (VSR), a radio receiver used in NASA's Deep Space Network (DSN), is currently being implemented. The current VSR samples standard DSN intermediate-frequency (IF) signals at 256 MHz and, after digital down-conversion, records data from up to four 16-MHz baseband channels. Currently, IF signals are limited to the 265-to-375-MHz range, and recording rates are limited to less than 80 Mbps. The new digital front end, denoted the Wideband VSR, provides improvements to enable the receiver to process wider bandwidth signals and accommodate more data channels for recording. The Wideband VSR utilizes state-of-the-art commercial analog-to-digital converter and field-programmable gate array (FPGA) integrated circuits, and fiber-optic connections in a custom architecture. It accepts IF signals from 100 to 600 MHz, sampling the signal at 1.28 GHz. The sample data are sent to a digital processing module, using a fiber-optic link for isolation. The digital processing module includes boards designed around an Advanced Telecom Computing Architecture (ATCA) industry-standard backplane. Digital signal processing implemented in FPGAs down-converts the data signals in up to 16 baseband channels with programmable bandwidths from 1 kHz to 16 MHz. Baseband samples are transmitted to a computer via multiple Ethernet connections, allowing recording to disk at rates of up to 1 Gbps.

  14. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the necessity of a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in detail. The special structure of the multiplexer is devised to minimize the different additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
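
    The hybrid counting principle can be summarized numerically: the requested interval is split into a coarse part counted in whole clock periods and a fine part realized by selecting a tap of the delay line. The sketch below shows that decomposition; the clock period and tap delay are illustrative values, not the Kintex-7 figures.

      // Numerical sketch of hybrid counting for a time interval generator:
      // coarse counting in whole clock periods plus fine interpolation by
      // selecting a delay-line tap. Values are illustrative assumptions.
      #include <cstdint>
      #include <iostream>

      int main() {
        const double clkPeriod_ps = 4000.0;   // e.g. a 250 MHz system clock
        const double tapDelay_ps = 11.0;      // fine step from the delay-line taps

        const double requested_ps = 1234567.0;   // interval to generate

        // Coarse counter: whole clock cycles.
        const uint64_t coarse = static_cast<uint64_t>(requested_ps / clkPeriod_ps);
        // Fine interpolation: remaining fraction mapped to a tap index.
        const double remainder = requested_ps - coarse * clkPeriod_ps;
        const uint32_t tap = static_cast<uint32_t>(remainder / tapDelay_ps + 0.5);

        const double generated = coarse * clkPeriod_ps + tap * tapDelay_ps;
        std::cout << "coarse cycles: " << coarse << ", fine tap: " << tap << "\n";
        std::cout << "residual error: " << generated - requested_ps << " ps\n";
      }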

  15. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

    The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. An anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex-5 FPGA. Medical image data from MRI scans are utilized for the experiments. Computational speedup is measured with respect to a Matlab implementation on a personal computer with an Intel i7 multi-core processor. Preliminary simulation results indicate that a significant advantage in speed can be attained by the architectures, making the algorithm viable for implementation in medical devices.
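
    The empirical semivariance defined above translates directly into a pairwise loop. The reference C++ version below computes an isotropic semivariogram on a small synthetic window and shows the O(n²) pair enumeration that the FPGA architectures parallelize; the window size, lag binning, and test data are illustrative.

      // Reference isotropic semivariogram: gamma(h) = 0.5 * mean of (z_i - z_j)^2
      // over all pixel pairs whose separation rounds to lag bin h. Window size,
      // binning, and the synthetic data are illustrative.
      #include <cmath>
      #include <iostream>
      #include <vector>

      int main() {
        const int W = 16, H = 16;                 // small illustrative window
        std::vector<double> img(W * H);
        for (int y = 0; y < H; ++y)
          for (int x = 0; x < W; ++x)
            img[y * W + x] = std::sin(0.3 * x) + 0.1 * y;   // synthetic data

        const int maxLag = 8;
        std::vector<double> sum(maxLag + 1, 0.0);
        std::vector<long> cnt(maxLag + 1, 0);

        // O(n^2) pair enumeration; an FPGA evaluates many pairs per clock cycle.
        for (int i = 0; i < W * H; ++i)
          for (int j = i + 1; j < W * H; ++j) {
            const int dx = (j % W) - (i % W), dy = (j / W) - (i / W);
            const int h =
                static_cast<int>(std::round(std::sqrt(double(dx * dx + dy * dy))));
            if (h >= 1 && h <= maxLag) {
              const double d = img[i] - img[j];
              sum[h] += d * d;
              ++cnt[h];
            }
          }
        for (int h = 1; h <= maxLag; ++h)
          std::cout << "gamma(" << h << ") = " << 0.5 * sum[h] / cnt[h] << "\n";
      }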

  16. Super-Resolution in Plenoptic Cameras Using FPGAs

    PubMed Central

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-01-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in the FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246

  17. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside a FPGA. A FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  18. Design and Testing of Space Telemetry SCA Waveform

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Handler, Louis M.; Quinn, Todd M.

    2006-01-01

    A Software Communications Architecture (SCA) Waveform for space telemetry is being developed at the NASA Glenn Research Center (GRC). The space telemetry waveform is implemented in a laboratory testbed consisting of general purpose processors, field programmable gate arrays (FPGAs), analog-to-digital converters (ADCs), and digital-to-analog converters (DACs). The radio hardware is integrated with an SCA Core Framework and other software development tools. The waveform design is described from both the bottom-up signal processing and top-down software component perspectives. Simulations and model-based design techniques used for signal processing subsystems are presented. Testing with legacy hardware-based modems verifies proper design implementation and dynamic waveform operations. The waveform development is part of an effort by NASA to define an open architecture for space based reconfigurable transceivers. Use of the SCA as a reference has increased understanding of software defined radio architectures. However, since space requirements put a premium on size, mass, and power, the SCA may be impractical for today's space-ready technology. Specific requirements for an SCA waveform and other lessons learned from this development are discussed.

  19. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul S; Morgan, Keith S

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-sensitive memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper will describe a three-tiered methodology for testing FPGA user designs for space-readiness. We will describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.

  20. Intelligent FPGA Data Acquisition Framework

    NASA Astrophysics Data System (ADS)

    Bai, Yunpeng; Gaisbauer, Dominic; Huber, Stefan; Konorov, Igor; Levit, Dmytro; Steffen, Dominik; Paul, Stephan

    2017-06-01

    In this paper, we present the field programmable gate array (FPGA)-based framework intelligent FPGA data acquisition (IFDAQ), which is used for the development of DAQ systems for detectors in high-energy physics. The framework supports Xilinx FPGAs and provides a collection of IP cores written in very high speed integrated circuit hardware description language, which use a common interconnect interface. The IP core library offers the functionality required for the development of the full DAQ chain. The library consists of Serializer/Deserializer (SERDES)-based time-to-digital conversion channels, an interface to a multichannel 80-MS/s 10-bit analog-to-digital converter, a data transmission and synchronization protocol between FPGAs, an event builder, and slow control. The functionality is distributed among FPGA modules built in the AMC form factor: front end and data concentrator. This modular design also helps to scale and adapt the DAQ system to the needs of the particular experiment. The first application of the IFDAQ framework is the upgrade of the read-out electronics for the drift chambers and the electromagnetic calorimeters (ECALs) of the COMPASS experiment at CERN. The framework is presented and discussed in this paper.

  1. Super-resolution in plenoptic cameras using FPGAs.

    PubMed

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  2. The New Meteor Radar at Penn State: Design and First Observations

    NASA Technical Reports Server (NTRS)

    Urbina, J.; Seal, R.; Dyrud, L.

    2011-01-01

    In an effort to provide new and improved meteor radar sensing capabilities, Penn State has been developing advanced instruments and technologies for future meteor radars, with primary objectives of making such instruments more capable and more cost-effective in order to study the basic properties of the global meteor flux, such as average mass, velocity, and chemical composition. Using low-cost field programmable gate arrays (FPGAs), combined with open source software tools, we describe a design methodology enabling one to develop state-of-the-art radar instrumentation, by developing a generalized instrumentation core that can be customized using specialized output stage hardware. Furthermore, using object-oriented programming (OOP) techniques and open-source tools, we illustrate a technique to provide a cost-effective, generalized software framework to uniquely define an instrument's functionality through a customizable interface, implemented by the designer. The new instrument is intended to provide instantaneous profiles of atmospheric parameters and climatology on a daily basis throughout the year. An overview of the instrument design concepts and some of the emerging technologies developed for this meteor radar are presented.

  3. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high-efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running-cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment and realized its register-transfer-level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high-quality image segmentation with a significant running-cost advantage. PMID:27240370

  4. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.

  5. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    PubMed

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general-purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating-point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing was done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  6. Environmental Effects on Data Retention in Flash Cells

    NASA Technical Reports Server (NTRS)

    Katz, Rich; Flowers, David; Bergevin, Keith

    2017-01-01

    Flash technology is being utilized in fuzed munition applications and, based on the development of digital logic devices in the commercial world, usage of flash technology will increase. Antifuse technology, prevalent in non-volatile field programmable gate arrays (FPGAs), will eventually be phased out as new devices have not been developed for approximately a decade. The reliance on flash technology presents a long-term reliability issue for both DoD and NASA safety- and mission-critical applications. A thorough understanding of the failure modes and statistics associated with Flash data retention is of vital concern to the fuze safety community. A key retention parameter for a flash cell is the threshold voltage (VTH), which is an indirect indicator of the amount of charge stored on the cell's floating gate. This paper will present the results of our ongoing tests: long-term storage at 150 °C for a small population of devices, neutron radiation exposure, electrostatic discharge (ESD) testing, and the trends of large populations (over 300 devices for each condition) exposed to three different temperatures: 25 °C, 125 °C, and 150 °C.

  7. Secure TRNG with random phase stimulation

    NASA Astrophysics Data System (ADS)

    Wieczorek, Piotr Z.

    2017-08-01

    In this paper a novel concept is proposed for a true random number generator (TRNG), which is a vital part of cryptographic systems. The proposed TRNG uses the phase variability of a pair of ring oscillators (ROs) to force multiple metastable events in a flip-flop (FF). In the solution, the ROs are periodically activated to ensure violation of the FF timing and the resulting state randomness, while the TRNG circuit adapts the structure of the ROs to obtain maximum entropy and circuit security. The TRNG can be implemented in inexpensive re-programmable devices (CPLDs or FPGAs) without the use of Digital Clock Managers (DCMs). Preliminary test results proved the circuit's immunity to intentional frequency injection attacks.

  8. Rapid prototyping of update algorithm of discrete Fourier transform for real-time signal processing

    NASA Astrophysics Data System (ADS)

    Kakad, Yogendra P.; Sherlock, Barry G.; Chatapuram, Krishnan V.; Bishop, Stephen

    2001-10-01

    An algorithm is developed in the companion paper to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way uses less computation than directly evaluating the DFT using the FFT algorithm, reducing the computational order by a factor of log₂ N. The algorithm works in the presence of a data window function and supports the rectangular, split triangular, Hanning, Hamming, and Blackman windows. In this paper, a hardware implementation of this algorithm using FPGA technology is outlined. Unlike traditional fully customized VLSI circuits, FPGAs represent a technological breakthrough in the corresponding industry. An FPGA implements thousands of gates of logic in a single IC chip and can be programmed by users at their site in a few seconds or less, depending on the type of device used. The risk is low and the development time is short. These advantages have made FPGAs very popular for rapid prototyping of algorithms in the areas of digital communication, digital signal processing, and image processing. Our paper addresses the related issues of implementation using a hardware description language in the development of the design and the subsequent downloading onto the programmable hardware chip.
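
    For the rectangular-window case, the update described above amounts to removing the oldest sample, adding the newest, and rotating each frequency bin by one twiddle factor. A minimal NumPy sketch follows (illustrative names; the windowed variants discussed in the companion paper are not reproduced here):

```python
import numpy as np

def update_dft(X, x_old, x_new):
    """Update an N-point DFT when the analysis window slides by one sample.

    X_k <- (X_k - x_old + x_new) * exp(+j*2*pi*k/N), an O(N) update
    versus O(N log N) for recomputing the FFT from scratch.
    """
    N = len(X)
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    return (X - x_old + x_new) * twiddle

# Maintain the DFT of the most recent N samples of a streaming signal.
rng = np.random.default_rng(1)
signal = rng.standard_normal(64)
N = 16
X = np.fft.fft(signal[:N])
for n in range(N, len(signal)):
    X = update_dft(X, signal[n - N], signal[n])
assert np.allclose(X, np.fft.fft(signal[-N:]))  # matches a fresh FFT of the last window
```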

  9. Fault-Tolerant, Radiation-Hard DSP

    NASA Technical Reports Server (NTRS)

    Czajkowski, David

    2011-01-01

    Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and, in some cases, to suffer a significant speed reduction. The benefits of COTS high-performance signal processing include a significant increase in onboard science data processing, enabling orders of magnitude reduction in required communication bandwidth for science data return, orders of magnitude improvement in onboard mission planning and critical decision making, and the ability to rapidly respond to changing mission environments, thus enabling opportunistic science and orders of magnitude reduction in the cost of mission operations through reduction of required staff. Additional benefits of COTS-based, high-performance signal processing include the ability to leverage considerable commercial and academic investments in advanced computing tools, techniques, and infrastructure, and the familiarity of the science and IT community with these computing environments.
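
    The TTMR idea can be illustrated with a simple majority voter; the sketch below (Python, illustrative names) shows only the voting step, not the scheduling of redundant copies onto VLIW execution slots that the actual technique performs:

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote of three redundant results.

    The same operation is issued three times (in time or in parallel
    execution slots); a single upset corrupts at most one copy, so the
    bitwise majority is still the correct value.
    """
    return (a & b) | (a & c) | (b & c)

# One copy corrupted by a bit flip; the vote still returns the good value.
good = 0x5A5A
assert tmr_vote(good, good ^ 0x0010, good) == good
```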

  10. FPGA architecture and implementation of sparse matrix vector multiplication for the finite element method

    NASA Astrophysics Data System (ADS)

    Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.

    2008-04-01

    The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs), and interest has grown in the scientific community to exploit this technology. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is only limited by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
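
    For reference, here is a minimal software sketch of the sparse matrix-vector product such a pipeline accelerates, using the common compressed sparse row (CSR) layout (the paper's hardware uses its own striping and partitioning scheme; the names below are illustrative):

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR form.

    values  : nonzero entries, row by row
    col_idx : column index of each nonzero
    row_ptr : index into `values` where each row starts (length nrows+1)
    """
    nrows = len(row_ptr) - 1
    y = np.zeros(nrows)
    for i in range(nrows):
        start, end = row_ptr[i], row_ptr[i + 1]
        # This dot product is the multiply-accumulate stream that each
        # processing element of an SMVM pipeline performs.
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
values = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))  # [5. 2. 8.]
```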

  11. Digital Radar-Signal Processors Implemented in FPGAs

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew; Andraka, Ray

    2004-01-01

    High-performance digital electronic circuits for onboard processing of return signals in an airborne precipitation-measuring radar system have been implemented in commercially available field-programmable gate arrays (FPGAs). Previously, it was standard practice to downlink the radar-return data to a ground station for postprocessing, a costly practice that prevents the nearly-real-time use of the data for automated targeting. In principle, the onboard processing could be performed by a system of about 20 personal-computer-type microprocessors; relative to such a system, the present FPGA-based processor is much smaller and consumes much less power. Alternatively, the onboard processing could be performed by an application-specific integrated circuit (ASIC), but in comparison with an ASIC implementation, the present FPGA implementation offers the advantages of (1) greater flexibility for research applications like the present one and (2) lower cost in the small production volumes typical of research applications. The generation and processing of signals in the airborne precipitation-measuring radar system in question involves the following especially notable steps: The system utilizes a total of four channels: two carrier frequencies and two polarizations at each frequency. The system uses pulse compression: that is, the transmitted pulse is spread out in time and the received echo of the pulse is processed with a matched filter to despread it. The return signal is band-limited and digitally demodulated to a complex baseband signal that, for each pulse, comprises a large number of samples. Each complex pair of samples (denoted a range gate in radar terminology) is associated with a numerical index that corresponds to a specific time offset from the beginning of the radar pulse, so that each such pair represents the energy reflected from a specific range. This energy and the average echo power are computed. The phase of each range bin is compared to the previous echo by complex conjugate multiplication to obtain the mean Doppler shift (and hence the mean and variance of the velocity of precipitation) of the echo at that range.
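
    The pulse-pair step described in the last two sentences can be sketched as follows (NumPy, with illustrative names and synthetic data; this is a standard pulse-pair estimator, not the flight firmware):

```python
import numpy as np

def pulse_pair(echoes, prf, wavelength):
    """Pulse-pair estimates from complex baseband radar echoes.

    echoes : complex array of shape (num_pulses, num_range_gates)
    Returns mean echo power and mean radial velocity per range gate.
    """
    power = np.mean(np.abs(echoes) ** 2, axis=0)
    # Lag-1 autocorrelation: conjugate product of each pulse with the next.
    r1 = np.mean(echoes[1:] * np.conj(echoes[:-1]), axis=0)
    mean_phase = np.angle(r1)                        # mean Doppler phase step per pulse
    velocity = wavelength * prf * mean_phase / (4 * np.pi)
    return power, velocity

# Synthetic example: a single target moving at 5 m/s seen in range gate 3.
prf, wavelength, v_true = 4000.0, 0.0086, 5.0
phase_step = 4 * np.pi * v_true / (wavelength * prf)
echoes = np.zeros((32, 8), dtype=complex)
echoes[:, 3] = np.exp(1j * phase_step * np.arange(32))
_, vel = pulse_pair(echoes, prf, wavelength)
print(round(vel[3], 2))  # ~5.0
```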

  12. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

    The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rijmen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research done in the area of low-power or energy-efficient FPGA-based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA-based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA-based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy-efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA-based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher was found to reduce its dynamic power consumption by up to 17% when compared to an identical design that did not employ the technique.

  13. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable code for different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and Field Programmable Gate Arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes the FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. Benchmarking of an OpenCL-based framework is an effective way to analyze the performance of a system by studying the execution of the benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that can be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to obtain a better understanding of the resource usage and performance of the kernel implementations on Arria-10 FPGA devices compared to Stratix-5 FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark application.

  14. Input-independent, Scalable and Fast String Matching on the Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Chavarría-Miranda, Daniel; Maschhoff, Kristyn J

    2009-05-25

    String searching is at the core of many security and network applications like search engines, intrusion detection systems, virus scanners and spam filters. The growing size of on-line content and the increasing wire speeds push the need for fast, and often real-time, string searching solutions. For these conditions, many software implementations (if not all) targeting conventional cache-based microprocessors do not perform well. They either exhibit overall low performance or exhibit highly variable performance depending on the types of inputs. For this reason, real-time state-of-the-art solutions rely on the use of either custom hardware or Field-Programmable Gate Arrays (FPGAs) at the expense of overall system flexibility and programmability. This paper presents a software-based implementation of the Aho-Corasick string searching algorithm on the Cray XMT multithreaded shared memory machine. Our solution relies on the particular features of the XMT architecture and on several algorithmic strategies: it is fast, scalable and its performance is virtually content-independent. On a 128-processor Cray XMT, it reaches a scanning speed of ≈ 28 Gbps with a performance variability below 10%. In the 10 Gbps performance range, variability is below 2.5%. By comparison, an Intel dual-socket, 8-core system running at 2.66 GHz achieves a peak performance which varies from 500 Mbps to 10 Gbps depending on the type of input and dictionary size.
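
    For readers unfamiliar with the algorithm itself, a compact Python reference implementation of Aho-Corasick is sketched below (the paper's contribution is the mapping onto the XMT's threads and memory, not the automaton construction shown here):

```python
from collections import deque

def build_automaton(patterns):
    """Build the Aho-Corasick goto trie, failure links and output sets."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())          # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, nxt in goto[s].items():
            queue.append(nxt)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]       # inherit matches ending at the fail state
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan `text` once, reporting (start_index, pattern) for every match."""
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits

automaton = build_automaton(["he", "she", "his", "hers"])
print(sorted(search("ushers", *automaton)))  # [(1, 'she'), (2, 'he'), (2, 'hers')]
```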

  15. Upgrade of Tile Calorimeter of the ATLAS Detector for the High Luminosity LHC.

    NASA Astrophysics Data System (ADS)

    Valdes Santurio, Eduardo; Tile Calorimeter System, ATLAS

    2017-11-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter of ATLAS covering the central region of the ATLAS experiment. TileCal is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The High Luminosity Large Hadron Collider (HL-LHC) will have a peak luminosity of 5 × 10³⁴ cm⁻² s⁻¹, five times higher than the design luminosity of the LHC. TileCal will undergo a major replacement of its on- and off-detector electronics for the high luminosity programme of the LHC in 2026. The calorimeter signals will be digitized and sent directly to the off-detector electronics, where the signals are reconstructed and shipped to the first level of trigger at a rate of 40 MHz. This will provide a better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Three different options are presently being investigated for the front-end electronics upgrade. Extensive test beam studies will determine which option will be selected. Field Programmable Gate Arrays (FPGAs) are extensively used for the logic functions of the off- and on-detector electronics. One hybrid demonstrator prototype module with the new calorimeter module electronics, but still compatible with the present system, may be inserted in ATLAS at the end of 2016.

  16. A new FPGA architecture suitable for DSP applications

    NASA Astrophysics Data System (ADS)

    Liyun, Wang; Jinmei, Lai; Jiarong, Tong; Pushan, Tang; Xing, Chen; Xueyan, Duan; Liguang, Chen; Jian, Wang; Yuan, Wang

    2011-05-01

    A new FPGA architecture suitable for digital signal processing applications is presented. DSP modules can be inserted into the FPGA conveniently with the proposed architecture, which is much faster when used in the field of digital signal processing compared with traditional FPGAs. An advanced 2-level MUX (multiplexer) is also proposed. With a SLEEP MODE PASS added to the traditional 2-level MUX, static leakage is reduced. Furthermore, buffers are inserted at early returns of long lines. With this kind of buffer, the delay of the long line is improved by 9.8% while the area increases by 4.37%. The layout of this architecture has been taped out in standard 0.13 μm CMOS technology successfully. The die size is 6.3 × 4.5 mm² with the QFP208 package. Test results show that the performance of the presented classical DSP cases is improved by 28.6%-302% compared with traditional FPGAs.

  17. A novel approach to Hough Transform for implementation in fast triggers

    NASA Astrophysics Data System (ADS)

    Pozzobon, Nicola; Montecassiano, Fabio; Zotto, Pierluigi

    2016-10-01

    Telescopes of position-sensitive detectors are common layouts in charged particle tracking, and programmable logic devices, such as FPGAs, represent a viable choice for the real-time reconstruction of track segments in such detector arrays. A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter reduction method, is proposed, targeting a reduction of the needed storage or computing resources in current or near-future state-of-the-art FPGA devices, while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a Standard Hough Transform with particular emphasis on their application to muon detectors. In both cases, an original readout implementation is modeled.
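
    A minimal sketch of the standard straight-line Hough Transform that serves as the baseline above (NumPy, illustrative names and data; the paper's parameter-reduction method is not reproduced here):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=128, rho_max=100.0):
    """Standard Hough transform for straight-line track segments.

    Each hit (x, y) votes for all (theta, rho) pairs satisfying
    rho = x*cos(theta) + y*sin(theta); peaks in the accumulator
    correspond to candidate track segments.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[valid], bins[valid]] += 1
    return acc, thetas

# Hits lying on the line y = 0.5*x + 10, plus one noise hit.
hits = [(x, 0.5 * x + 10) for x in range(0, 60, 10)] + [(12, 70)]
acc, thetas = hough_lines(hits)
t, _ = np.unravel_index(np.argmax(acc), acc.shape)
print(acc.max(), np.degrees(thetas[t]))  # 6 votes at theta = 117 deg
```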

  18. Fast particles identification in programmable form at level-0 trigger by means of the 3D-Flow system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crosetto, Dario B.

    1998-10-30

    The 3D-Flow Processor system is a new, technology-independent concept in very fast, real-time system architectures. Based on either an FPGA or an ASIC implementation, it can address, in a fully programmable manner, applications where commercially available processors would fail because of throughput requirements. Possible applications include filtering algorithms (pattern recognition) operating on the input of multiple sensors, as well as moving any input validated by these filtering algorithms to a single output channel. Both operations can easily be implemented on a 3D-Flow system to achieve a real-time processing system with a very short lag time. This system can be built either with off-the-shelf FPGAs or, for higher data rates, with CMOS chips containing 4 to 16 processors each. The basic building block of the system, a 3D-Flow processor, has been successfully designed in VHDL code written in "Generic HDL" (mostly made of reusable blocks that are synthesizable in different technologies, or FPGAs), to produce a netlist for a four-processor ASIC featuring 0.35 micron CBA (Cell Based Array) technology at 3.3 Volts, 884 mW power dissipation at 60 MHz and a 63.75 mm² die size. The same VHDL code has been targeted to three FPGA manufacturers (Altera EPF10K250A, ORCA-Lucent Technologies 0R3T165 and Xilinx XCV1000). A complete set of software tools, the 3D-Flow System Manager, equally applicable to ASIC or FPGA implementations, has been produced to provide full system simulation, application development, real-time monitoring, and run-time fault recovery. Today's technology can accommodate 16 processors per chip in a medium size die, at a cost per processor of less than $5 based on the current silicon die/size technology cost.

  19. Using Spare Logic Resources To Create Dynamic Test Points

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2011-01-01

    A technique has been devised to enable creation of a dynamic set of test points in an embedded digital electronic system. As a result, electronics contained in an application specific circuit [e.g., gate array, field programmable gate array (FPGA)] can be internally probed, even when contained in a closed housing during all phases of test. In the present technique, the test points are not fixed and limited to a small number; the number of test points can vastly exceed the number of buffers or pins, resulting in a compact footprint. Test points are selected by means of spare logic resources within the ASIC(s) and/or FPGA(s). A register is programmed with a command, which is used to select the signals that are sent off-chip and out of the housing for monitoring by test engineers and external test equipment. The register can be commanded by any suitable means: for example, it could be commanded through a command port that would normally be used in the operation of the system. In the original application of the technique, commanding of the register is performed via a MIL-STD-1553B communication subsystem.

  20. Readout, first- and second-level triggers of the new Belle silicon vertex detector

    NASA Astrophysics Data System (ADS)

    Friedl, M.; Abe, R.; Abe, T.; Aihara, H.; Asano, Y.; Aso, T.; Bakich, A.; Browder, T.; Chang, M. C.; Chao, Y.; Chen, K. F.; Chidzik, S.; Dalseno, J.; Dowd, R.; Dragic, J.; Everton, C. W.; Fernholz, R.; Fujii, H.; Gao, Z. W.; Gordon, A.; Guo, Y. N.; Haba, J.; Hara, K.; Hara, T.; Harada, Y.; Haruyama, T.; Hasuko, K.; Hayashi, K.; Hazumi, M.; Heenan, E. M.; Higuchi, T.; Hirai, H.; Hitomi, N.; Igarashi, A.; Igarashi, Y.; Ikeda, H.; Ishino, H.; Itoh, K.; Iwaida, S.; Kaneko, J.; Kapusta, P.; Karawatzki, R.; Kasami, K.; Kawai, H.; Kawasaki, T.; Kibayashi, A.; Koike, S.; Korpar, S.; Križan, P.; Kurashiro, H.; Kusaka, A.; Lesiak, T.; Limosani, A.; Lin, W. C.; Marlow, D.; Matsumoto, H.; Mikami, Y.; Miyake, H.; Moloney, G. R.; Mori, T.; Nakadaira, T.; Nakano, Y.; Natkaniec, Z.; Nozaki, S.; Ohkubo, R.; Ohno, F.; Okuno, S.; Onuki, Y.; Ostrowicz, W.; Ozaki, H.; Peak, L.; Pernicka, M.; Rosen, M.; Rozanska, M.; Sato, N.; Schmid, S.; Shibata, T.; Stamen, R.; Stanič, S.; Steininger, H.; Sumisawa, K.; Suzuki, J.; Tajima, H.; Tajima, O.; Takahashi, K.; Takasaki, F.; Tamura, N.; Tanaka, M.; Taylor, G. N.; Terazaki, H.; Tomura, T.; Trabelsi, K.; Trischuk, W.; Tsuboyama, T.; Uchida, K.; Ueno, K.; Ueno, K.; Uozaki, N.; Ushiroda, Y.; Vahsen, S.; Varner, G.; Varvell, K.; Velikzhanin, Y. S.; Wang, C. C.; Wang, M. Z.; Watanabe, M.; Watanabe, Y.; Yamada, Y.; Yamamoto, H.; Yamashita, Y.; Yamashita, Y.; Yamauchi, M.; Yanai, H.; Yang, R.; Yasu, Y.; Yokoyama, M.; Ziegler, T.; Žontar, D.

    2004-12-01

    A major upgrade of the Silicon Vertex Detector (SVD 2.0) of the Belle experiment at the KEKB factory was installed along with new front-end and back-end electronics systems during the summer shutdown period in 2003 to cope with higher particle rates, improve the track resolution and meet the increasing requirements of radiation tolerance. The SVD 2.0 detector modules are read out by VA1TA chips which provide "fast or" (hit) signals that are combined by the back-end FADCTF modules into coarse but immediate level-0 track trigger signals at rates of several tens of kHz. Moreover, the digitized detector signals are compared to threshold lookup tables in the FADCTFs to pass on hit information on a single-strip basis to the subsequent level 1.5 trigger system, which reduces the rate below the kHz range. Both the FADCTF and level 1.5 electronics make use of parallel real-time processing in Field Programmable Gate Arrays (FPGAs), while further data acquisition and event building is done by PC farms running Linux. The new readout system hardware is described and the first results obtained with cosmics are shown.

  1. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same applies when utilizing a cluster of Intel's many-integrated-core (MIC) Xeon Phi processors, or Hadoop and Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  2. A New Model Based on Adaptation of the External Loop to Compensate the Hysteresis of Tactile Sensors

    PubMed Central

    Sánchez-Durán, José A.; Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Castellanos-Ramos, Julián; Hidalgo-López, José A.

    2015-01-01

    This paper presents a novel method to compensate for hysteresis nonlinearities observed in the response of a tactile sensor. The External Loop Adaptation Method (ELAM) performs a piecewise linear mapping of the experimentally measured external curves of the hysteresis loop to obtain all possible internal cycles. The optimal division of the input interval where the curve is approximated is provided by an error minimization algorithm. This process is carried out offline and provides parameters to compute the split point in real time. A different linear transformation is then performed to the left and right of this point and a more precise fitting is achieved. The models obtained with the ELAM method are compared with those obtained from three other approaches. The results show that the ELAM method achieves a more accurate fitting. Moreover, the involved mathematical operations are simpler and therefore easier to implement in devices such as Field Programmable Gate Arrays (FPGAs) for real-time applications. Furthermore, the method needs to identify fewer parameters and requires no previous selection process of operators or functions. Finally, the method can be applied to other sensors or actuators with complex hysteresis loop shapes. PMID:26501279
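
    The ELAM itself is specific to the paper, but the kind of piecewise-linear mapping it relies on, an offline fit followed by a single multiply-add evaluation per sample, can be sketched generically (Python, illustrative names; this is not the authors' method):

```python
import numpy as np

def fit_piecewise(x, y, breakpoints):
    """Offline step: precompute knot values and slopes for a piecewise-linear map."""
    xs = np.asarray(breakpoints, dtype=float)
    ys = np.interp(xs, x, y)
    slopes = np.diff(ys) / np.diff(xs)
    return xs, ys, slopes

def apply_piecewise(xq, xs, ys, slopes):
    """Real-time step: locate the segment, then one multiply-add.

    Only a compare, a subtract, a multiply and an add per sample, which is
    why this style of mapping is easy to implement in FPGA fabric.
    """
    i = np.clip(np.searchsorted(xs, xq, side="right") - 1, 0, len(slopes) - 1)
    return ys[i] + slopes[i] * (xq - xs[i])

# Fit a measured (nonlinear) response curve, then evaluate it for a new input.
raw = np.linspace(0.0, 1.0, 50)
measured = raw ** 1.7                      # stand-in for a measured loading curve
xs, ys, slopes = fit_piecewise(raw, measured, np.linspace(0.0, 1.0, 9))
print(round(apply_piecewise(0.42, xs, ys, slopes), 3), round(0.42 ** 1.7, 3))
```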

  3. An Evaluation of Flash Cells Used in Critical Applications

    NASA Technical Reports Server (NTRS)

    Katz, Rich; Flowers, David; Bergevin, Keith

    2016-01-01

    Due to the common use of Flash technology in many commercial and industrial Programmable Logic Devices (PLDs) such as FPGAs and mixed-signal microcontrollers, flash technology is being utilized in fuzed munition applications. This presents a long-term reliability issue for both DoD and NASA safety- and mission-critical applications. A thorough understanding of the failure modes and statistics associated with Flash data retention is of vital concern to the fuze safety community. A key retention parameter for a flash cell is the threshold voltage (VTH), which is an indirect indicator of the amount of charge stored on the cell's floating gate. Initial test results based on a study of charge loss in flash cells in an FPGA device are presented. Statistical data taken from a small sample set indicate quantifiable charge loss for devices stored at both room temperature and 150 °C. Initial evaluation of the distribution of threshold voltage in a large sample set (800 devices) is presented. The magnitude of charge loss from exposure to electrostatic discharge and electromagnetic fields is measured and presented. Simulated data (and measured data as available) resulting from harsh-environment testing (neutron, heavy ion, EMP) are presented.

  4. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2-megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  5. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
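
    A minimal sketch of the Izhikevich update that such time-multiplexed neural cores evaluate once per neuron per time step (NumPy, illustrative names and parameters; this is not NeuroFlow's pipeline):

```python
import numpy as np

def izhikevich_step(v, u, i_in, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich model for a vector of neurons.

    dv/dt = 0.04*v^2 + 5*v + 140 - u + I
    du/dt = a*(b*v - u)
    Neurons reaching v >= 30 mV spike, are reset to c, and have d added to u.
    """
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    v = np.where(spiked, c, v)
    u = np.where(spiked, u + d, u)
    return v, u, spiked

# Four neurons driven with different constant currents for 1000 ms.
v = np.full(4, -65.0)
u = 0.2 * v
counts = np.zeros(4, dtype=int)
for _ in range(1000):
    v, u, spiked = izhikevich_step(v, u, i_in=np.array([0.0, 5.0, 10.0, 15.0]))
    counts += spiked
print(counts)  # stronger input -> more spikes
```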

  6. A 256-channel, high throughput and precision time-to-digital converter with a decomposition encoding scheme in a Kintex-7 FPGA

    NASA Astrophysics Data System (ADS)

    Song, Z.; Wang, Y.; Kuang, J.

    2018-05-01

    Field Programmable Gate Arrays (FPGAs) made with 28 nm and more advanced process technology have great potential for the implementation of high-precision time-to-digital converters (TDCs), because the delay cells in the tapped delay line (TDL) used for time interpolation are getting smaller and smaller. However, the bubble problems in the TDL status are becoming more complicated, which makes it difficult to achieve TDCs on these chips with a high time precision. In this paper, we propose a novel decomposition encoding scheme, which not only can solve the bubble problem easily, but also has a high encoding efficiency. The potential of these chips to realize TDCs can be fully exploited with this scheme. In a Xilinx Kintex-7 FPGA chip, we implemented a TDC system with 256 TDC channels, which doubles the number of TDC channels that our previous technique could achieve. The performance of all these TDC channels is evaluated. The average RMS time precision among them is 10.23 ps in the time-interval measurement range of 0–10 ns, and their measurement throughput reaches 277 million measurements per second.
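
    The decomposition encoding scheme is specific to the paper, but the underlying task, turning a possibly bubbly thermometer code from the tapped delay line into a fine time code, can be illustrated with a simple ones-counting encoder (Python, illustrative; not the authors' FPGA logic):

```python
def encode_tdl(taps):
    """Encode a tapped-delay-line snapshot into a fine time code.

    `taps` is the sampled carry-chain state, ideally a clean thermometer
    code such as 1111 0000.  Real chains show 'bubbles' near the 1->0
    transition, so instead of searching for that transition we simply
    count the ones; a population count is insensitive to swapped bits.
    """
    return sum(taps)

clean  = [1, 1, 1, 1, 0, 0, 0, 0]
bubbly = [1, 1, 1, 0, 1, 0, 0, 0]   # one 0/1 pair swapped at the transition
print(encode_tdl(clean), encode_tdl(bubbly))  # 4 4
```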

  7. A Timing Synchronizer System for Beam Test Setups Requiring Galvanic Isolation

    NASA Astrophysics Data System (ADS)

    Meder, Lukas Dominik; Emschermann, David; Frühauf, Jochen; Müller, Walter F. J.; Becker, Jürgen

    2017-07-01

    In beam test setups, detector elements are analyzed together with a readout composed of front-end electronics (FEE) and usually a layer of field-programmable gate arrays (FPGAs). In this scenario the FEE is often directly connected to both the detector and the FPGA layer, which in many cases requires sharing the ground potentials of these layers. This setup can become problematic if parts of the detector need to be operated at different high-voltage potentials, since all of the FPGA boards need to receive a common clock and timing reference to keep the readout synchronized. Thus, in the context of the Compressed Baryonic Matter experiment a versatile timing synchronizer (TS) system was designed, providing galvanically isolated timing distribution links over twisted-pair cables. As an electrical interface the so-called timing data processing board FPGA mezzanine card was created to be mounted onto FPGA-based advanced mezzanine cards for mTCA.4 crates. The FPGA logic of the TS system connects to this card and can be monitored and controlled through IPBus slow-control links. Evaluations show that the system is capable of stably synchronizing the FPGA boards of a beam test setup integrated into a hierarchical TS network.

  8. Development of digital sideband separating down-conversion for Yuan-Tseh Lee Array

    NASA Astrophysics Data System (ADS)

    Li, Chao-Te; Kubo, Derek; Cheng, Jen-Chieh; Kuroda, John; Srinivasan, Ranjani; Ho, Solomon; Guzzino, Kim; Chen, Ming-Tang

    2016-07-01

    This report presents a down-conversion method involving digital sideband separation for the Yuan-Tseh Lee Array (YTLA) to double the processing bandwidth. The receiver consists of an MMIC HEMT LNA front end operating at a wavelength of 3 mm, and sub-harmonic mixers that output signals at intermediate frequencies (IFs) of 2-18 GHz. The sideband separation scheme involves an analog 90° hybrid followed by two mixers that provide down-conversion of the IF signal to a pair of in-phase (I) and quadrature (Q) signals in baseband. The I and Q baseband signals are digitized using 5 gigasample-per-second (Gsps) analog-to-digital converters (ADCs). A second hybrid is digitally implemented using field-programmable gate arrays (FPGAs) to produce two sidebands, each with a bandwidth of 1.6 GHz. The 2 × 1.6 GHz band can be tuned to cover any 3.6 GHz window within the aforementioned IF range of the array. Sideband rejection ratios (SRRs) above 20 dB can be obtained across the 3.6 GHz bandwidth by equalizing the power and delay between the I and Q baseband signals. Furthermore, SRRs above 30 dB can be achieved when calibration is applied.
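
    The digitally implemented second hybrid can be sketched in software as follows (NumPy/SciPy, illustrative names and test tones; the real system operates on streaming samples inside the FPGAs):

```python
import numpy as np
from scipy.signal import hilbert

def separate_sidebands(i_sig, q_sig):
    """Digital 90-degree hybrid: combine baseband I/Q into upper/lower sidebands.

    After the analog hybrid and mixers, an upper-sideband tone appears as
    (cos wt in I, -sin wt in Q) and a lower-sideband tone as (cos wt, +sin wt).
    Shifting Q by a further 90 degrees (Hilbert transform) and adding or
    subtracting it from I therefore cancels one sideband and keeps the other.
    """
    q_shifted = np.imag(hilbert(q_sig))   # 90-degree phase shift of Q
    usb = i_sig + q_shifted
    lsb = i_sig - q_shifted
    return usb, lsb

# Two test tones: 125 MHz above the LO and 250 MHz below it.
fs = 2.0e9
t = np.arange(4096) / fs
i_sig = np.cos(2 * np.pi * 125e6 * t) + np.cos(2 * np.pi * 250e6 * t)
q_sig = -np.sin(2 * np.pi * 125e6 * t) + np.sin(2 * np.pi * 250e6 * t)
usb, lsb = separate_sidebands(i_sig, q_sig)
# usb now contains essentially only the 125 MHz tone, lsb only the 250 MHz tone.
```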

  9. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  10. Towards Real-time, On-board, Hardware-Supported Sensor and Software Health Management for Unmanned Aerial Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Rozier, Kristin Y.; Reinbacher, Thomas; Mengshoel, Ole J.; Mbaya, Timmy; Ippolito, Corey

    2013-01-01

    Unmanned aerial systems (UASs) can only be deployed if they can effectively complete their missions and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. In this paper, we design a real-time, on-board system health management (SHM) capability to continuously monitor sensors, software, and hardware components for detection and diagnosis of failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and/or software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power realization using Field Programmable Gate Arrays (FPGAs) that avoids overburdening limited computing resources or costly re-certification of flight software due to instrumentation. Our implementation provides a novel approach of combining modular building blocks, integrating responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. We demonstrate this approach using actual data from the NASA Swift UAS, an experimental all-electric aircraft.

  11. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    PubMed

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex-5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.

  12. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures

    PubMed Central

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2016-01-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex-5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor. PMID:28428829

  13. From OO to FPGA:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Stephen; Palsberg, Jens; Brooks, Jeffrey

    Consumer electronics today such as cell phones often have one or more low-power FPGAs to assist with energy-intensive operations in order to reduce overall energy consumption and increase battery life. However, current techniques for programming FPGAs require people to be specially trained to do so. Ideally, software engineers can more readily take advantage of the benefits FPGAs offer by being able to program them using their existing skills, a common one being object-oriented programming. However, traditional techniques for compiling object-oriented languages are at odds with today's FPGA tools, which support neither pointers nor complex data structures. Open until now is the problem of compiling an object-oriented language to an FPGA in a way that harnesses this potential for huge energy savings. In this paper, we present a new compilation technique that feeds into an existing FPGA tool chain and produces FPGAs with up to almost an order of magnitude in energy savings compared to a low-power microprocessor while still retaining comparable performance and area usage.

  14. Manchester Coding Option for SpaceWire: Providing Choices for System Level Design

    NASA Technical Reports Server (NTRS)

    Rakow, Glenn; Kisin, Alex

    2014-01-01

    This paper proposes an optional coding scheme for SpaceWire in lieu of the current Data Strobe scheme for three reasons: first, to provide a straightforward method for electrical isolation of the interface; second, to provide the ability to reduce the mass and bend radius of the SpaceWire cable; and third, to provide a means for a common physical layer over which multiple spacecraft onboard data link protocols could operate for a wide range of data rates. The intent is to accomplish these goals without significant change to existing SpaceWire design investments. The ability to optionally use Manchester coding in place of the current Data Strobe coding provides the ability to DC balance the signal transitions, unlike the SpaceWire Data Strobe coding, and therefore the ability to isolate the electrical interface without concern. Additionally, because the Manchester code has the clock and data encoded on the same signal, the number of wires of the existing SpaceWire cable could be optionally reduced by 50 percent. This reduction could be an important consideration for many users of SpaceWire, as indicated by the effort already underway by the SpaceWire working group to reduce the cable mass and bend radius by elimination of shields. However, reducing the signal count by half would provide even greater gains. It is proposed to restrict the data rate for the optional Manchester coding to a fixed data rate of 10 Megabits per second (Mbps) in order to make the necessary changes simple and still able to run in current radiation-tolerant Field Programmable Gate Arrays (FPGAs). Even with this constraint, 10 Mbps will meet many applications where SpaceWire is used. These include command and control applications and many instrument applications that have moderate data rates. For most NASA flight implementations, SpaceWire designs are in rad-tolerant FPGAs, and the desire to preserve the heritage design investment is important for cost and risk considerations. The Manchester coding option can be accommodated in existing designs with only changes to the FPGA.
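    To make the trade-off concrete, here is a small illustrative Python sketch of Manchester encoding and decoding at the bit level (not the SpaceWire codec itself): each bit becomes a mid-bit transition, so the clock rides on the data line and the waveform is inherently DC balanced, which is what permits isolation of the electrical interface.

        # Illustrative Manchester encode/decode (IEEE 802.3 convention assumed:
        # 0 -> high,low ; 1 -> low,high). Each bit cell contains a transition,
        # so the clock is recoverable and the line is DC balanced.
        def manchester_encode(bits):
            out = []
            for b in bits:
                out.extend((1, 0) if b == 0 else (0, 1))
            return out

        def manchester_decode(symbols):
            bits = []
            for first, second in zip(symbols[0::2], symbols[1::2]):
                if (first, second) == (1, 0):
                    bits.append(0)
                elif (first, second) == (0, 1):
                    bits.append(1)
                else:
                    raise ValueError("missing mid-bit transition: invalid Manchester symbol")
            return bits

        data = [1, 0, 1, 1, 0, 0, 1, 0]
        line = manchester_encode(data)
        assert manchester_decode(line) == data
        assert sum(line) * 2 == len(line)   # equal highs and lows: DC balance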

  15. Digital Architecture for a Trace Gas Sensor Platform

    NASA Technical Reports Server (NTRS)

    Gonzales, Paula; Casias, Miguel; Vakhtin, Andrei; Pilgrim, Jeffrey

    2012-01-01

    A digital architecture has been implemented for a trace gas sensor platform, as a companion to standard analog control electronics, which accommodates optical absorption whose fractional absorbance equivalent would result in excess error if assumed to be linear. In cases where the absorption (1-transmission) is not equivalent to the fractional absorbance within a few percent error, it is necessary to accommodate the actual measured absorption while reporting the measured concentration of a target analyte with reasonable accuracy. This requires incorporation of programmable intelligence into the sensor platform so that flexible interpretation of the acquired data may be accomplished. Several different digital component architectures were tested and implemented. Commercial off-the-shelf digital electronics including data acquisition cards (DAQs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), and microcontrollers have been used to achieve the desired outcome. The most completely integrated architecture achieved during the project used the CPLD along with a microcontroller. The CPLD provides the initial digital demodulation of the raw sensor signal, and then communicates over a parallel communications interface with a microcontroller. The microcontroller analyzes the digital signal from the CPLD, and applies a non-linear correction obtained through extensive data analysis at the various relevant EVA operating pressures. The microcontroller then presents the quantitatively accurate carbon dioxide partial pressure regardless of optical density. This technique could extend the linear dynamic range of typical absorption spectrometers, particularly those whose low end noise equivalent absorbance is below one-part-in-100,000. In the EVA application, it allows introduction of a path-length-enhancing architecture whose optical interference effects are well understood and quantified without sacrificing the dynamic range that allows quantitative detection at the higher carbon dioxide partial pressures. The digital components are compact and allow reasonably complete integration with separately developed analog control electronics without sacrificing size, mass, or power draw.
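    The nonlinearity being corrected can be illustrated with a short numeric sketch; assuming Beer-Lambert behaviour, transmission is T = exp(-A), so the measured absorption 1 - T only tracks the fractional absorbance A when A is small. The values below are illustrative and are not the flight calibration.

        # Why the linear assumption breaks down at higher optical densities.
        import math

        def absorbance_from_absorption(absorption):
            """Nonlinear correction: recover fractional absorbance A from 1 - T."""
            return -math.log(1.0 - absorption)

        for absorption in (0.001, 0.01, 0.05, 0.10, 0.20):
            a = absorbance_from_absorption(absorption)
            err = 100.0 * (a - absorption) / a
            print(f"absorption={absorption:0.3f}  absorbance={a:0.4f}  linear error={err:0.1f}%")
        # At 0.1% absorption the linear assumption is essentially exact; by 20%
        # it under-reports the absorbance by roughly 10%, the regime where a
        # programmable nonlinear correction becomes necessary.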

  16. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.

  17. Design and implementation of digital controllers for smart structures using field-programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Kelly, Jamie S.; Bowman, Hiroshi C.; Rao, Vittal S.; Pottinger, Hardy J.

    1997-06-01

    Implementation issues represent an unfamiliar challenge to most control engineers, and many techniques for controller design ignore these issues outright. Consequently, the design of controllers for smart structural systems usually proceeds without regard for their eventual implementation, thus resulting either in serious performance degradation or in hardware requirements that squander power, complicate integration, and drive up cost. The level of integration assumed by the Smart Patch further exacerbates these difficulties, and any design inefficiency may render the realization of a single-package sensor-controller-actuator system infeasible. The goal of this research is to automate the controller implementation process and to relieve the design engineer of implementation concerns like quantization, computational efficiency, and device selection. We specifically target Field Programmable Gate Arrays (FPGAs) as our hardware platform because these devices are highly flexible, power efficient, and reprogrammable. The current study develops an automated implementation sequence that minimizes hardware requirements while maintaining controller performance. Beginning with a state space representation of the controller, the sequence automatically generates a configuration bitstream for a suitable FPGA implementation. MATLAB functions optimize and simulate the control algorithm before translating it into the VHSIC hardware description language. These functions improve power efficiency and simplify integration in the final implementation by performing a linear transformation that renders the controller computationally friendly. The transformation favors sparse matrices in order to reduce multiply operations and the hardware necessary to support them; simultaneously, the remaining matrix elements take on values that minimize limit cycles and parameter sensitivity. The proposed controller design methodology is implemented on a simple cantilever beam test structure using FPGA hardware. The experimental closed loop response is compared with that of an automated FPGA controller implementation. Finally, we explore the integration of FPGA based controllers into a multi-chip module, which we believe represents the next step towards the realization of the Smart Patch.
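    The per-sample computation being mapped to hardware is the standard discrete state-space update; the short Python sketch below (not the authors' MATLAB-to-VHDL tool chain, and with hypothetical second-order matrices) shows the update and why a modal realization with a diagonal A matrix needs fewer hardware multipliers.

        # Discrete state-space controller update evaluated once per sample period.
        import numpy as np

        def controller_step(A, B, C, D, x, u):
            """One step of x[k+1] = A x[k] + B u[k],  y[k] = C x[k] + D u[k]."""
            y = C @ x + D @ u
            x_next = A @ x + B @ u
            return x_next, y

        # Hypothetical second-order controller in a modal (diagonal A) realization:
        A = np.diag([0.9, 0.8])            # sparse A: 2 multiplies instead of 4
        B = np.array([[1.0], [1.0]])
        C = np.array([[0.05, -0.03]])
        D = np.array([[0.0]])

        x = np.zeros((2, 1))
        for k in range(5):
            u = np.array([[1.0]])          # hypothetical sensor sample
            x, y = controller_step(A, B, C, D, x, u)
            print(f"k={k}  y={y.item():+.4f}")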

  18. The NASA Electronic Parts and Packaging (NEPP) Program: Insertion of New Electronics Technologies

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.; Sampson, Michael J.

    2007-01-01

    This viewgraph presentation gives an overview of the NASA Electronic Parts and Packaging (NEPP) Program's new electronics technology trends. The topics include: 1) The Changing World of Radiation Testing of Memories; 2) Even Application-Specific Tests are Costly!; 3) Hypothetical New Technology Part Qualification Cost; 4) Where We Are; 5) Approaching FPGAs as More Than a "Part" for Reliability; 6) FPGAs Beget Novel Radiation Test Setups; 7) Understanding the Complex Radiation Data; 8) Tracking Packaging Complexity and Reliability for FPGAs; 9) Devices Supporting the FPGA Need to be Considered; 10) Summary of the New Electronic Technologies and Insertion into Flight Programs Workshop; and 11) Highlights of Panel Notes and Comments.

  19. A FPGA-based Measurement System for Nonvolatile Semiconductor Memory Characterization

    NASA Astrophysics Data System (ADS)

    Bu, Jiankang; White, Marvin

    2002-03-01

    Low voltage, long retention, high density SONOS nonvolatile semiconductor memory (NVSM) devices are ideally suited for PCMCIA, FLASH and 'smart' cards. The SONOS memory transistor requires characterization with an accurate, rapid measurement system with minimum disturbance to the device. The FPGA-based measurement system includes three parts: 1) a pattern generator implemented with XILINX FPGAs and corresponding software, 2) a high-speed, constant-current, threshold voltage detection circuit, and 3) a data evaluation program implemented in LABVIEW. Fig. 1 shows the general block diagram of the FPGA-based measurement system. The function generator is designed and simulated with XILINX Foundation Software. Under the control of the specific erase/write/read pulses, the analog detect circuit applies operational modes to the SONOS device under test (DUT) and determines the change of the memory state of the SONOS nonvolatile memory transistor. The TEK460 digitizes the analog threshold voltage output and sends it to the PC. The data is filtered and averaged with a LABVIEW program running on the PC and displayed on the monitor in real time. We have implemented the pattern generator with XILINX FPGAs. Fig. 2 shows the block diagram of the pattern generator. We realized the logic control using a state machine design method. Fig. 3 shows a small part of the state machine. The flexibility of the FPGAs enhances the capabilities of this system and allows measurement variations without hardware changes. The characterization of the nonvolatile memory transistor device under test (DUT), as a function of programming voltage and time, is achieved by a high-speed, constant-current threshold voltage detection circuit. The analog detection circuit incorporates fast analog switches controlled digitally by the FPGAs. The schematic circuit diagram is shown in Fig. 4. The various operational modes for the DUT are realized with control signals applied to the analog switches (SW) as shown in Fig. 5. A LABVIEW program, on a PC platform, collects and processes the data. The data is displayed on the monitor in real time. This time-domain filtering reduces the digitizing error. Fig. 6 shows the data processing. SONOS nonvolatile semiconductor memories are characterized by erase/write, retention and endurance measurements. Fig. 7 shows the erase/write characteristics of an n-channel, 5V programmable SONOS memory transistor. Fig. 8 shows the retention characteristic of the same SONOS transistor. We have used this system to characterize SONOS nonvolatile semiconductor memory transistors. The attractive features of the test system design lie in the cost-effectiveness and flexibility of the test pattern implementation, fast read-out of the memory state, low power, high-precision determination of the device threshold voltage, and perhaps most importantly, minimum disturbance, which is indispensable for nonvolatile memory characterization.

  20. Design and implementation of projects with Xilinx Zynq FPGA: a practical case

    NASA Astrophysics Data System (ADS)

    Travaglini, R.; D'Antone, I.; Meneghini, S.; Rignanese, L.; Zuffa, M.

    The main advantage when using FPGAs with embedded processors is the availability of several additional high-performance resources in the same physical device. Moreover, the FPGA programmability allows custom peripherals to be connected. Xilinx has designed a programmable device named Zynq-7000 (simply called Zynq in the following), which integrates programmable logic (identical to the other Xilinx "series 7" devices) with a System on Chip (SOC) based on two embedded ARM processors. Since both parts are deeply connected, the designers benefit from the performance of the hardware SOC and the flexibility of the programmable logic as well. In this paper a design developed by the Electronic Design Department at the Bologna Division of INFN will be presented as a practical case of a project based on the Zynq device. It is developed using a commercial board called ZedBoard hosting an FMC mezzanine with a 12-bit 500 MS/s ADC. The Zynq FPGA on the ZedBoard receives digital outputs from the ADC and sends them to the acquisition PC, after proper formatting, through a Gigabit Ethernet link. The major focus of the paper is the methodology to develop a Zynq-based design with the Xilinx Vivado software, showing how to configure the SOC and connect it with the programmable logic. Firmware design techniques will be presented: in particular, both VHDL and IP-core-based strategies will be discussed. Further, the procedure to develop software for the embedded processor will be presented. Finally, some debugging tools, like the embedded Logic Analyzer, will be shown. Advantages and disadvantages with respect to adopting FPGAs without embedded processors will be discussed.

  1. Fast Inference of Deep Neural Networks in FPGAs for Particle Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Javier; Han, Song; Harris, Philip

    Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of the real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA hardware has only just begun. FPGA-based trigger and data acquisition (DAQ) systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. We develop a package based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.
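    For readers unfamiliar with what such firmware actually evaluates, the sketch below shows a single fully connected layer computed with fixed-point integer arithmetic, the style of computation that keeps latency and resource usage low on an FPGA. It is an illustration only, not the hls4ml API, and the bit widths and layer sizes are hypothetical.

        # One dense + ReLU layer evaluated entirely in fixed-point integers.
        import numpy as np

        FRAC_BITS = 10                       # assumed fractional width (ap_fixed-style)

        def to_fixed(x):
            return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int32)

        def dense_relu_fixed(x_fx, W_fx, b_fx):
            """y = relu(W x + b), integer arithmetic with a wider accumulator scale."""
            acc = W_fx @ x_fx + (b_fx << FRAC_BITS)   # accumulator at 2*FRAC_BITS scale
            acc = np.maximum(acc, 0)                  # ReLU
            return acc >> FRAC_BITS                   # back to the working scale

        rng = np.random.default_rng(1)
        x = to_fixed(rng.uniform(-1, 1, 16))          # 16 hypothetical input features
        W = to_fixed(rng.uniform(-0.5, 0.5, (8, 16)))
        b = to_fixed(rng.uniform(-0.1, 0.1, 8))
        print(dense_relu_fixed(x, W, b))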

  2. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  3. Hardware realization of an SVM algorithm implemented in FPGAs

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In the MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. The traditional computation algorithms usually involve digital signal processors (DSPs), which consume a large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular since computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input current), are presented in detail. The proposed technique has been implemented as a design described with the use of the Verilog hardware description language. The preliminary results of logic implementation oriented on a Xilinx FPGA (particularly, a low-cost device from the Xilinx Artix-7 family) are also presented.
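    The CORDIC idea referenced above can be illustrated in a few lines of Python: rotation-mode CORDIC produces sine and cosine using only shifts, adds, and a small table of arctangents, which is why it maps well onto FPGA fabric. This floating-point sketch is illustrative and is not the authors' Verilog implementation.

        # Rotation-mode CORDIC for |angle| <= pi/2, returning (sin, cos).
        import math

        def cordic_sin_cos(angle, iterations=16):
            atans = [math.atan(2.0 ** -i) for i in range(iterations)]
            K = 1.0                                   # cumulative gain correction
            for i in range(iterations):
                K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

            x, y, z = 1.0, 0.0, angle
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * atans[i]
            return y * K, x * K                       # (sin, cos)

        s, c = cordic_sin_cos(math.radians(40))
        print(round(s, 5), round(c, 5))               # ~0.64279, ~0.76604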

  4. Development of a scalable generic platform for adaptive optics real time control

    NASA Astrophysics Data System (ADS)

    Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar

    2015-06-01

    The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the generation of the Deformable Mirror (DM) control voltages from the Wavefront Sensor (WFS) measurements is usually through the multiplication of the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate a signal for real-time wavefront correction from the signals coming from a Wavefront Sensor, wherein the system would be flexible enough to accommodate all the current Wavefront Sensing techniques and also the different methods which are used for wavefront compensation. The system should also accommodate different data transmission protocols (like Ethernet, USB, IEEE 1394, etc.) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform, and a design of a fully scalable slope computer, are presented.
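    The core per-frame workload described above is a single matrix-vector product; the sketch below shows it with hypothetical Shack-Hartmann dimensions, simply to make explicit the multiply-accumulate count that the FPGA spreads across parallel hard multipliers.

        # DM commands = reconstructor matrix x WFS slope vector (dimensions hypothetical).
        import numpy as np

        n_slopes, n_actuators = 160, 97
        rng = np.random.default_rng(2)
        R = rng.normal(0.0, 0.1, (n_actuators, n_slopes))   # reconstructor, computed offline
        slopes = rng.normal(0.0, 1.0, n_slopes)             # one WFS frame of x/y slopes

        dm_commands = R @ slopes                 # the per-frame work: n_actuators dot products
        assert dm_commands.shape == (n_actuators,)
        print("multiply-accumulates per frame:", n_actuators * n_slopes)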

  5. Data acquisition system and ground calibration of polarized gamma-ray observer (PoGOLite)

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiromitsu; Chauvin, Maxime; Fukazawa, Yasushi; Jackson, Miranda; Kamae, Tuneyoshi; Kawano, Takafumi; Kiss, Mozsi; Kole, Merlin; Mikhalev, Victor; Mizuno, Tsunefumi; Moretti, Elena; Pearce, Mark; Rydström, Stefan

    2014-07-01

    The Polarized Gamma-ray Observer, PoGOLite, is a balloon experiment with the capability of detecting 10% polarization from a 200 mCrab celestial object in the energy range 25-80 keV in one 6-hour flight. Polarization measurements in soft gamma-rays are expected to provide a powerful probe into high-energy emission mechanisms in/around neutron stars, black holes, supernova remnants, active galactic nuclei, etc. The "pathfinder" flight was performed in July 2013 for 14 days from Sweden to Russia. The polarization is measured using Compton scattering and photoelectric absorption in an array of 61 well-type phoswich detector cells (PDCs) for the pathfinder instrument. The PDCs are surrounded by 30 BGO crystals which form a side anti-coincidence shield (SAS) and a passive polyethylene neutron shield. There is a neutron detector consisting of LiCaAlF6 (LiCAF) scintillator covered with BGOs to measure the background contribution of atmospheric neutrons. The data acquisition system handles 92 PMT signals from the 61 PDCs + 30 SASs + 1 neutron detector, and it is developed based on the SpaceWire spacecraft communication network. Most of the signal processing is done by digital circuits in Field Programmable Gate Arrays (FPGAs). This enables a reduction of the mass, space and power consumption. The performance was calibrated before the launch.

  6. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks

    PubMed Central

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more urgent. The reason is essentially the design of innovative neuroprostheses where biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored, presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network of up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments. PMID:28293163
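    The neuron model named above is compact enough to restate directly; the following sketch applies the standard Izhikevich update with textbook regular-spiking parameters (not values taken from the paper) at a 0.1 ms step, the kind of per-neuron arithmetic each FPGA core repeats at the 10 kHz sampling rate.

        # One Euler step of the Izhikevich model with spike detection and reset.
        def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.1):
            """Advance membrane potential v (mV) and recovery variable u by dt (ms)."""
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            fired = v >= 30.0
            if fired:                      # spike: reset the neuron
                v, u = c, u + d
            return v, u, fired

        v, u, spikes = -65.0, -13.0, 0
        for step in range(10000):          # 1 s of simulated time at dt = 0.1 ms
            v, u, fired = izhikevich_step(v, u, I=10.0)
            spikes += fired
        print("spikes in 1 s:", spikes)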

  7. Optimizations of a Hardware Decoder for Deep-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon

    2007-01-01

    The National Aeronautics and Space Administration has developed a capacity approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
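    One plausible reading of the 'maxstar top-2' optimization mentioned above is sketched below: the exact log-sum-exp reduction over a set of branch metrics is approximated by the largest metric plus a single correction term computed from the runner-up, which costs much less hardware than correcting every pairwise step while recovering part of the max-only penalty. The values are synthetic and this is an illustration of the idea, not the decoder circuit.

        # Exact log-sum-exp vs. max-only vs. a top-2 corrected approximation.
        import math

        def logsumexp(values):                       # exact reduction
            m = max(values)
            return m + math.log(sum(math.exp(v - m) for v in values))

        def max_star_top2(values):                   # max + correction from runner-up only
            s = sorted(values, reverse=True)
            return s[0] + math.log1p(math.exp(s[1] - s[0]))

        metrics = [-1.2, -3.7, -0.4, -2.9, -5.1, -0.9]
        print("exact      :", round(logsumexp(metrics), 4))
        print("max-only   :", round(max(metrics), 4))
        print("max* top-2 :", round(max_star_top2(metrics), 4))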

  8. FPGA-based smart sensor for drought stress detection in tomato plants using novel physiological variables and discrete wavelet transform.

    PubMed

    Duarte-Galvan, Carlos; Romero-Troncoso, Rene de J; Torres-Pacheco, Irineo; Guevara-Gonzalez, Ramon G; Fernandez-Jaramillo, Arturo A; Contreras-Medina, Luis M; Carrillo-Serrano, Roberto V; Millan-Almaraz, Jesus R

    2014-10-09

    Soil drought represents one of the most dangerous stresses for plants. It impacts the yield and quality of crops, and if it remains undetected for a long time, the entire crop could be lost. However, for some plants a certain amount of drought stress improves specific characteristics. In such cases, a device capable of detecting and quantifying the impact of drought stress in plants is desirable. This article focuses on testing whether the monitoring of physiological processes through a gas exchange methodology provides enough information to detect drought stress conditions in plants. The experiment consists of using a set of smart sensors based on Field Programmable Gate Arrays (FPGAs) to monitor a group of plants under controlled drought conditions. The main objective was to use different digital signal processing techniques such as the Discrete Wavelet Transform (DWT) to explore the response of plant physiological processes to drought. Also, an index-based methodology was utilized to compensate for the spatial variation inside the greenhouse. As a result, differences between treatments were determined to be independent of climate variations inside the greenhouse. Finally, after using the DWT as a digital filter, results demonstrated that the proposed system is capable of rejecting high-frequency noise and detecting drought conditions.
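    As a small illustration of the DWT-as-filter idea (the abstract does not state which mother wavelet or decomposition depth the smart sensor uses), the sketch below applies a single-level Haar transform to a synthetic slow trend plus sensor noise and discards the detail band.

        # Single-level Haar DWT used as a crude low-pass filter.
        import numpy as np

        def haar_dwt(signal):
            s = np.asarray(signal, dtype=float)
            a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation (trend)
            d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail (noise + fast changes)
            return a, d

        def haar_idwt(a, d):
            out = np.empty(2 * len(a))
            out[0::2] = (a + d) / np.sqrt(2.0)
            out[1::2] = (a - d) / np.sqrt(2.0)
            return out

        # Synthetic slow physiological trend plus high-frequency sensor noise
        t = np.linspace(0, 1, 256)
        clean = 1.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t)
        noisy = clean + 0.05 * np.random.default_rng(3).normal(size=t.size)

        a, d = haar_dwt(noisy)
        denoised = haar_idwt(a, np.zeros_like(d))    # discard the detail band
        print("noise RMS before:", round(np.std(noisy - clean), 4),
              " after:", round(np.std(denoised - clean), 4))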

  9. The next steps in Seti-Italia science and technology

    NASA Astrophysics Data System (ADS)

    Montebugnoli, Stelio; Cosmovici, Cristiano; Monari, Jader; Pluchino, Salvatore; Zoni, Luca; Bartolini, Marco; Orlati, Andrea; Salerno, Emma; Schillirò, Francesco; Pupillo, Giuseppe; Perini, Federico; Bianchi, Germano; Tani, Mattia; Amico, Leonardo

    2010-02-01

    The Italian Medicina Radioastronomy Station (near Bologna) is equipped with two antennas: the 32 m (VLBI) dish and the Northern Cross, a large T-shaped parabolic/cylindrical antenna (30,000 square meters). So far SETI observations have been performed using a SERENDIP IV high-resolution spectrometer connected to the VLBI dish in "piggy back" mode configuration. In order to facilitate data interpretation and to introduce innovative methods to search for possible extraterrestrial signals, we are planning to make use of the large UHF Northern Cross transit telescope. Sky observations performed over at least two months could provide, for each day, a number of matrices labeled according to the observing sidereal time. The entire set of matrices will be characterized by an averaged spectrum on each row per day. Keeping the transit antenna declination constant, a coherent signal coming from a definite position of the sky would produce a "flag on" in the same submatrix at the same sidereal time. Detections collected in this way could be considered "confirmed" since they always come from the same region of the sky and are observed regularly. An extremely powerful processing board based on a multi-FPGA (Field Programmable Gate Array) core was developed and is now being programmed. This is conceived to be the processing core for this new kind of investigation.

  10. Efficient Multiplexer FPGA Block Structures Based on G4FETs

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2009-01-01

    Generic structures have been conceived for multiplexer blocks to be implemented in field-programmable gate arrays (FPGAs) based on four-gate field-effect transistors (G4FETs). This concept is a contribution to the continuing development of digital logic circuits based on G4FETs and serves as a further demonstration that logic circuits based on G4FETs could be more efficient (in the sense that they could contain fewer transistors), relative to functionally equivalent logic circuits based on conventional transistors. Results in this line of development at earlier stages were summarized in two previous NASA Tech Briefs articles: "G4FETs as Universal and Programmable Logic Gates" (NPO-41698), Vol. 31, No. 7 (July 2007), page 44, and "Efficient G4FET-Based Logic Circuits" (NPO-44407), Vol. 32, No. 1 (January 2008), page 38. As described in the first-mentioned previous article, a G4FET can be made to function as a three-input NOT-majority gate, which has been shown to be a universal and programmable logic gate. The universality and programmability could be exploited to design logic circuits containing fewer components than are required for conventional transistor-based circuits performing the same logic functions. The second-mentioned previous article reported results of a comparative study of NOT-majority-gate (G4FET)-based logic-circuit designs and equivalent NOR- and NAND-gate-based designs utilizing conventional transistors. [NOT gates (inverters) were also included, as needed, in both the G4FET- and the NOR- and NAND-based designs.] In most of the cases studied, fewer logic gates (and, hence, fewer transistors) were required in the G4FET-based designs. There are two popular categories of FPGA block structures or architectures: one based on multiplexers, the other based on lookup tables. In standard multiplexer-based architectures, the basic building block is a tree-like configuration of multiplexers, with possibly a few additional logic gates such as ANDs or ORs. Interconnections are realized by means of programmable switches that may connect the input terminals of a block to output terminals of other blocks, may bridge together some of the inputs, or may connect some of the input terminals to signal sources representing constant logical levels 0 or 1. The left part of the figure depicts a four-to-one G4FET-based multiplexer tree; the right part of the figure depicts a functionally equivalent four-to-one multiplexer based on conventional transistors. The G4FET version contains 54 transistors; the conventional version contains 70 transistors.
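    The logical behaviour described above is easy to restate as a truth table; the sketch below models the three-input NOT-majority (minority) function and shows how tying one input to a constant programs it into a NAND or a NOR, which is the sense in which the gate is universal. This is a behavioural illustration, not a device model.

        # Three-input NOT-majority gate and its programming by a constant input.
        from itertools import product

        def not_majority(a, b, c):
            return 0 if (a + b + c) >= 2 else 1     # invert the majority vote

        for a, b in product((0, 1), repeat=2):
            nand = not_majority(a, b, 0)            # third input tied to 0 -> NAND
            nor  = not_majority(a, b, 1)            # third input tied to 1 -> NOR
            print(f"a={a} b={b}  NAND={nand}  NOR={nor}")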

  11. Design for Security Workshop

    DTIC Science & Technology

    2014-09-30

    Slide excerpts covering Xilinx Zynq security components and capabilities: security features inherited from FPGAs, Zynq secure boot, TrustZone integration, Device DNA and user keys, secure application execution, and trust established outwards with normal-world and internet/cloud applications (e.g., fingerprint sensors).

  12. FPGA-based GEM detector signal acquisition for SXR spectroscopy system

    NASA Astrophysics Data System (ADS)

    Wojenski, A.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Zabolotny, W.; Chernyshova, M.; Czarski, T.; Malinowski, K.

    2016-11-01

    The presented work is related to the Gas Electron Multiplier (GEM) detector soft X-ray spectroscopy system for tokamak applications. The GEM detector used has a one-dimensional, 128-channel readout structure. The channels are connected to the radiation-hard electronics with a configurable analog stage and fast ADCs, supporting speeds of 125 MSPS for each channel. The digitized data is sent directly to the FPGAs using fast serial links. The preprocessing algorithms are implemented in the FPGAs, with the data buffering done in the on-board 2 Gb DDR3 memory chips. After the algorithmic stage, the data is sent to the Intel Xeon-based PC for further postprocessing using a PCI-Express Gen 2 link. For connection of multiple FPGAs, an 8-to-1 PCI-Express switch was designed. The whole system can support up to 2048 analog channels. The scope of the work is an FPGA-based implementation of a recorder of the raw signal from the GEM detector. Since the system will work in a very challenging environment (neutron radiation, intense electromagnetic fields), the registered signals from the GEM detector can be corrupted. In the case of very intense hot plasma radiation (e.g. laser-generated plasma), the registered signals can overlap. Therefore, it is valuable to register the raw signals from the GEM detector with a high number of events during soft X-ray radiation. The signal analysis will have a direct impact on the implementation of photon energy computation algorithms. As a result, the system will produce energy spectra and topological distribution of soft X-ray radiation. Advanced software was developed in order to perform complex system startup and monitoring of the hardware units. Using an array of two one-dimensional GEM detectors it will be possible to perform tomographic reconstruction of plasma impurities radiation in the SXR region.

  13. A wide-range programmable frequency synthesizer based on a finite state machine filter

    NASA Astrophysics Data System (ADS)

    Alser, Mohammed H.; Assaad, Maher M.; Hussin, Fawnizu A.

    2013-11-01

    In this article, an FPGA-based design and implementation of a fully digital wide-range programmable frequency synthesizer based on a finite state machine filter is presented. The advantages of the proposed architecture are that it simultaneously generates a high-frequency signal from a low-frequency reference signal (i.e. synthesising) and synchronises the two signals (the signals have the same phase, or a constant difference) without jitter accumulation issues. The architecture is portable and can be easily implemented for various platforms, such as FPGAs and integrated circuits. The frequency synthesizer circuit can be used as a part of SERDES devices in intra-/inter-chip communication in a system-on-chip (SoC). The proposed circuit is designed using the Verilog language and synthesized for the Altera DE2-70 development board, with the Cyclone II (EP2C35F672C6) device on board. Simulation and experimental results are included; they prove the synthesizing and tracking features of the proposed architecture. The generated clock signal, with a frequency range from 19.8 MHz to 440 MHz, is synchronized to the input reference clock with a frequency step of 0.12 MHz.

  14. Gaining Insight Into Femtosecond-scale CMOS Effects using FPGAs

    DTIC Science & Technology

    2015-03-24

    paths or detecting gross path delay faults, but for characterizing subtle aging effects, there is a need to isolate very short paths and detect very... data using COTS FPGAs and novel self-test. Hardware experiments using a 28 nm FPGA demonstrate isolation of small sets of transistors, detection of... hold the static configuration data specifying the LUT function. A set of inverters drive the SRAM contents into a pass-gate multiplexor tree; we

  15. Three-Function Logic Gate Controlled by Analog Voltage

    NASA Technical Reports Server (NTRS)

    Zebulum, Ricardo; Stoica, Adrian

    2006-01-01

    The figure is a schematic diagram of a complementary metal oxide/semiconductor (CMOS) electronic circuit that performs one of three different logic functions, depending on the level of an externally applied control voltage, Vsel. Specifically, the circuit acts as a NAND gate at Vsel = 0.0 V, a wire (the output equals one of the inputs) at Vsel = 1.0 V, or an AND gate at Vsel = -1.8 V. [The nominal power-supply potential (VDD) and logic "1" potential of this circuit is 1.8 V.] Like other multifunctional circuits described in several prior NASA Tech Briefs articles, this circuit was synthesized following an automated evolutionary approach, so named because it is modeled partly after the repetitive trial-and-error process of biological evolution. An evolved circuit can be tested by computational simulation and/or tested in real hardware, and the results of the test can provide guidance for refining the design through further iteration. The evolutionary synthesis of electronic circuits can now be implemented by means of a software package, Genetic Algorithms for Circuit Synthesis (GACS), that was developed specifically for this purpose. GACS was used to synthesize the present trifunctional circuit. As in the cases of other multifunctional circuits described in several prior NASA Tech Briefs articles, the multiple functionality of this circuit, the use of a single control voltage to select the function, and the automated evolutionary approach to synthesis all contribute synergistically to a combination of features that are potentially advantageous for the further development of robust, multiple-function logic circuits, including, especially, field-programmable gate arrays (FPGAs). These advantages include the following: This circuit contains only 9 transistors, about half the number of transistors that would be needed to obtain equivalent NAND/wire/AND functionality by use of components from a standard digital design library. If multifunctional gates like this circuit were used in place of the configurable logic blocks of present commercial FPGAs, it would be possible to change the functions of the resulting digital systems within shorter times. For example, by changing a single control voltage, one could change the function of thousands of FPGA cells within nanoseconds. In contrast, typically, the reconfiguration in a conventional FPGA by use of bits downloaded from look-up tables via a digital bus takes microseconds.

  16. Board Saver for Use with Developmental FPGAs

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew

    2009-01-01

    A device denoted a board saver has been developed as a means of reducing wear and tear of a printed-circuit board onto which an antifuse field programmable gate array (FPGA) is to be eventually soldered permanently after a number of design iterations. The need for the board saver or a similar device arises because (1) antifuse-FPGA design iterations are common and (2) repeated soldering and unsoldering of FPGAs on the printed-circuit board to accommodate design iterations can wear out the printed-circuit board. The board saver is basically a solderable/unsolderable FPGA receptacle that is installed temporarily on the printed-circuit board. The board saver is, more specifically, a smaller, square-ring-shaped, printed-circuit board (see figure) that contains half via holes one for each contact pad along its periphery. As initially fabricated, the board saver is a wider ring containing full via holes, but then it is milled along its outer edges, cutting the via holes in half and laterally exposing their interiors. The board saver is positioned in registration with the designated FPGA footprint and each via hole is soldered to the outer portion of the corresponding FPGA contact pad on the first-mentioned printed-circuit board. The via-hole/contact joints can be inspected visually and can be easily unsoldered later. The square hole in the middle of the board saver is sized to accommodate the FPGA, and the thickness of the board saver is the same as that of the FPGA. Hence, when a non-final FPGA is placed in the square hole, the combination of the non-final FPGA and the board saver occupy no more area and thickness than would a final FPGA soldered directly into its designated position on the first-mentioned circuit board. The contact leads of a non-final FPGA are not bent and are soldered, at the top of the board saver, to the corresponding via holes. A non-final FPGA can readily be unsoldered from the board saver and replaced by another one. Once the final FPGA design has been determined, the board saver can be unsoldered from the contact pads on the first-mentioned printed-circuit board and replaced by the final FPGA.

  17. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
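    The throughput figure quoted above follows directly from the stated arithmetic intensity, peak bandwidth, and memory efficiency; the short calculation below simply reproduces that roofline-style estimate from the numbers in the abstract.

        # Roofline-style estimate reproduced from the figures given in the abstract.
        flops_per_byte    = 130 / 64.0   # arithmetic intensity: ~2.03 ops/byte
        peak_bandwidth_gb = 76.8         # Convey HC-1 coprocessor peak memory bandwidth, GB/s
        memory_efficiency = 0.50         # achieved fraction of that peak

        throughput_gflops = flops_per_byte * peak_bandwidth_gb * memory_efficiency
        print(f"estimated sustained throughput: {throughput_gflops:0.0f} Gflops")  # ~78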

  18. Energy reduction through voltage scaling and lightweight checking

    NASA Astrophysics Data System (ADS)

    Kadric, Edin

    As the semiconductor roadmap reaches smaller feature sizes and the end of Dennard Scaling, design goals change, and managing the power envelope often dominates delay minimization. Voltage scaling remains a powerful tool to reduce energy. We find that it results in about 60% geomean energy reduction on top of other common low-energy optimizations with 22nm CMOS technology. However, when voltage is reduced, it becomes easier for noise and particle strikes to upset a node, potentially causing Silent Data Corruption (SDC). The 60% energy reduction, therefore, comes with a significant drop in reliability. Duplication with checking and triple-modular redundancy are traditional approaches used to combat transient errors, but spending 2--3x the energy for redundant computation can diminish or reverse the benefits of voltage scaling. As an alternative, we explore the opportunity to use checking operations that are cheaper than the base computation they are guarding. We devise a classification system for applications and their lightweight checking characteristics. In particular, we identify and evaluate the effectiveness of lightweight checks in a broad set of common tasks in scientific computing and signal processing. We find that the lightweight checks cost only a fraction of the base computation (0-25%) and allow us to recover the reliability losses from voltage scaling. Overall, we show about 50% net energy reduction without compromising reliability compared to operation at the nominal voltage. We use FPGAs (Field-Programmable Gate Arrays) in our work, although the same ideas can be applied to different systems. On top of voltage scaling, we explore other common low-energy techniques for FPGAs: transmission gates, gate boosting, power gating, low-leakage (high-Vth) processes, and dual-V dd architectures. We do not scale voltage for memories, so lower voltages help us reduce logic and interconnect energy, but not memory energy. At lower voltages, memories become dominant, and we get diminishing returns from continuing to scale voltage. To ensure that memories do not become a bottleneck, we also design an energy-robust FPGA memory architecture, which attempts to minimize communication energy due to mismatches between application and architecture. We do this alongside application parallelism tuning. We show our techniques on a wide range of applications, including a large real-time system used for Wide-Area Motion Imaging (WAMI).

  19. Multisensory architectures for action-oriented perception

    NASA Astrophysics Data System (ADS)

    Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.

    2007-05-01

    In order to solve the navigation problem of a mobile robot in an unstructured environment a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore a Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel allowing the sensors to be driven simultaneously. With this approach the multi-sensory architecture proposed can achieve real time capabilities.

  20. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    PubMed Central

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116

  1. Trigger design for a gamma ray detector of HIRFL-ETF

    NASA Astrophysics Data System (ADS)

    Du, Zhong-Wei; Su, Hong; Qian, Yi; Kong, Jie

    2013-10-01

    The Gamma Ray Array Detector (GRAD) is one subsystem of HIRFL-ETF (the External Target Facility (ETF) of the Heavy Ion Research Facility in Lanzhou (HIRFL)). It is capable of measuring the energy of gamma-rays with 1024 CsI scintillators in in-beam nuclear experiments. The GRAD trigger should select the valid events and reject the data from the scintillators which are not hit by gamma-rays. The GRAD trigger has been developed based on Field Programmable Gate Arrays (FPGAs) and a PXI interface. It makes prompt trigger decisions to select valid events by processing the hit signals from the 1024 CsI scintillators. According to the physical requirements, the GRAD trigger module supplies 12-bit trigger information for the global trigger system of ETF and supplies a trigger signal for the data acquisition (DAQ) system of GRAD. In addition, the GRAD trigger generates trigger data that are packed and transmitted to the host computer via the PXI bus to be saved for off-line analysis. The trigger processing is implemented in the front-end electronics of GRAD and in one FPGA of the GRAD trigger module. The logic of PXI transmission and reconfiguration is implemented in another FPGA of the GRAD trigger module. During the gamma-ray experiments, the GRAD trigger performs reliably and efficiently, and its functionality satisfies the physical requirements.

  2. A real-time capable software-defined receiver using GPU for adaptive anti-jam GPS sensors.

    PubMed

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.

  3. A modular design for rapid-response telecoms and navigation missions

    NASA Astrophysics Data System (ADS)

    Davies, P.; Liddle, D.; Buckley, John; Sweeting, M.; Roussel-Dupre, Diane; Caffrey, Michael

    2004-11-01

    Surrey Satellite Technology Ltd and Los Alamos National Laboratory are together building the Cibola Flight Experiment (CFESat), a mission with the aim of flight-proving a reconfigurable processor payload intended for a Low Earth Orbit system. The mission will survey portions of the VHF and UHF radio spectra. The satellite will be launched by the Space Test Program in September 2006 on the USAF Evolved Expendable Launch Vehicle (EELV) using the EELV's Secondary Payload Adapter (ESPA), which allows up to six small satellites to be launched as "piggyback" passengers with larger spacecraft. The payload is based on networks of reprogrammable Field Programmable Gate Arrays (FPGAs) to process the received signals for ionospheric and lightning studies. The objective is to validate the on-orbit use of commercial, reconfigurable FPGA technology utilizing several different single-event upset mitigation schemes. It will also detect and measure impulsive events that occur in a complex background. SSTL's satellite platform is based on a new, ESPA-compatible structure housing subsystems and equipment with proven flight heritage from SSTL's disaster monitoring constellation (DMC) and the Topsat mission satellite due for launch in 2005. The structure is mechanically quite complex for a microsatellite, having both deployed solar panels and a pair of long booms as part of the payload. The satellite design is highly constrained by the mass and volume requirements of the EELV/ESPA.

  4. Single-Scale Retinex Using Digital Signal Processors

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2005-01-01

    The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. It performs a non-linear spatial/spectral transform that provides simultaneous dynamic range compression and color constancy. It has been used for a wide variety of applications ranging from aviation safety to general purpose photography. Many potential applications require the use of Retinex processing at video frame rates. This is difficult to achieve with general purpose processors because the algorithm contains a large number of complex computations and data transfers. In addition, many of these applications also constrain the potential architectures to embedded processors to save power, weight and cost. Thus we have focused on digital signal processors (DSPs) and field programmable gate arrays (FPGAs) as potential solutions for real-time Retinex processing. In previous efforts we attained a processing rate of 21 (full) frames per second (fps) for the single-scale monochromatic Retinex with a TMS320C6711 DSP operating at 150 MHz. This was achieved after several significant code improvements and optimizations. Since then we have migrated our design to the slightly more powerful TMS320C6713 DSP and the fixed point TMS320DM642 DSP. In this paper we briefly discuss the Retinex algorithm, the performance of the algorithm executing on the TMS320C6713 and the TMS320DM642, and compare the results with the TMS320C6711.
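
    For reference, the single-scale Retinex output at each pixel is the difference between the log of the image and the log of a Gaussian-blurred (surround) version of it. The NumPy/SciPy sketch below is a plain software statement of that formula, not the optimized DSP code discussed in the paper; the sigma value and output normalization are arbitrary choices.

```python
# Single-scale Retinex: R = log(I) - log(Gaussian_surround * I)  (illustrative software form).
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1.0):
    """Apply SSR to a 2-D intensity image and rescale the result to [0, 255]."""
    img = image.astype(np.float64) + eps          # avoid log(0)
    surround = gaussian_filter(img, sigma=sigma)  # low-pass "surround" estimate
    r = np.log(img) - np.log(surround)            # dynamic range compression
    r -= r.min()
    if r.max() > 0:
        r *= 255.0 / r.max()
    return r.astype(np.uint8)

if __name__ == "__main__":
    test = np.tile(np.linspace(5, 250, 256), (256, 1))  # synthetic gradient image
    out = single_scale_retinex(test)
    print(out.shape, out.dtype, out.min(), out.max())
```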

  5. Real Time Coincidence Processing Algorithm for Geiger Mode LADAR using FPGAs

    DTIC Science & Technology

    2017-01-09

    Defense for Research and Engineering. Real Time Coincidence Processing Algorithm for Geiger-Mode Ladar using FPGAs. Rufo A. Antonio, Alexandru N. ... the first ever Geiger-mode ladar processing algorithm that is suitable for implementation on an FPGA, enabling real-time processing and data ... developed embedded FPGA real-time processing algorithms that take noisy raw data, streaming at upwards of 1 GB/sec, and filter the data to obtain a nearly

  6. Ultrasound phase rotation beamforming on multi-core DSP.

    PubMed

    Ma, Jieming; Karadayi, Kerem; Ali, Murtaza; Kim, Yongmin

    2014-01-01

    Phase rotation beamforming (PRBF) is a commonly-used digital receive beamforming technique. However, due to its high computational requirement, it has traditionally been supported by hardwired architectures, e.g., application-specific integrated circuits (ASICs) or, more recently, field-programmable gate arrays (FPGAs). In this study, we investigated the feasibility of supporting software-based PRBF on a multi-core DSP. To alleviate the high computing requirement, analog front-end (AFE) chips that integrate quadrature demodulation in addition to analog-to-digital conversion were defined and used. With these new AFE chips, only delay alignment and phase rotation need to be performed by the DSP, substantially reducing the computational load. We implemented the delay alignment and phase rotation modules on a Texas Instruments C6678 DSP with 8 cores. We found that it takes 200 μs to beamform 2048 samples from 64 channels using 2 cores. With 4 cores, 20 million samples can be beamformed in one second. Therefore, ADC frequencies up to 40 MHz with 2:1 decimation in the AFE chips, or up to 20 MHz with no decimation, can be supported as long as the ADC-to-DSP I/O requirement can be met. The remaining 4 cores can work on back-end processing tasks and applications, e.g., color Doppler or ultrasound elastography. One DSP being able to handle both beamforming and back-end processing could lead to low-power and low-cost ultrasound machines, benefiting ultrasound imaging in general and portable ultrasound machines in particular. Copyright © 2013 Elsevier B.V. All rights reserved.
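
    As a rough software illustration of the two steps left to the DSP, the sketch below applies a per-channel integer (coarse) delay followed by a phase rotation of demodulated I/Q data and sums across channels. The channel count, delays and phases are made up; the actual C6678 implementation is not reproduced here.

```python
# Delay alignment + phase rotation beamforming on baseband I/Q data (illustrative).
import numpy as np

def prbf(iq, coarse_delays, phases):
    """iq: (channels, samples) complex baseband data.
    coarse_delays: integer sample delays per channel.
    phases: residual (fine-delay) phase rotations per channel, in radians."""
    n_ch, n_samp = iq.shape
    out = np.zeros(n_samp, dtype=complex)
    for ch in range(n_ch):
        aligned = np.roll(iq[ch], coarse_delays[ch])        # coarse delay alignment
        out += aligned * np.exp(1j * phases[ch])             # fine delay as a phase rotation
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_ch, n_samp = 64, 2048
    iq = rng.standard_normal((n_ch, n_samp)) + 1j * rng.standard_normal((n_ch, n_samp))
    delays = rng.integers(0, 8, n_ch)
    phases = rng.uniform(-np.pi, np.pi, n_ch)
    beam = prbf(iq, delays, phases)
    print(beam.shape, beam.dtype)
```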

  7. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation, while being fully synthesizable for future Very Large Scale Integration (VLSI) systems.

  8. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-01

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of −20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation, while being fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316

  9. Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research

    PubMed Central

    Dandass, Yoginder S; Burgess, Shane C; Lawrence, Mark; Bridges, Susan M

    2008-01-01

    Background This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case-study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation. PMID:18412963
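
    The bit-split FSM organization and on-chip RAM mapping are specific to the paper and are not reproduced here; for orientation only, the sketch below is a plain software Aho-Corasick matcher for a small peptide set, i.e., the baseline algorithm the authors adapt. The peptide and sequence data are invented.

```python
# Plain software Aho-Corasick string-set matcher (reference version, not the bit-split FPGA design).
from collections import deque

def build_automaton(patterns):
    goto, fail, output = [{}], [0], [set()]
    for pid, pat in enumerate(patterns):
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pid)
    queue = deque(goto[0].values())           # depth-1 states keep fail = root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            output[t] |= output[fail[t]]      # inherit matches ending at the fail state
    return goto, fail, output

def search(text, patterns):
    goto, fail, output = build_automaton(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pid in output[state]:
            hits.append((i - len(patterns[pid]) + 1, patterns[pid]))
    return hits

if __name__ == "__main__":
    peptides = ["MKT", "KTAY", "AYLR"]        # toy peptide set
    genome_frame = "GGMKTAYLRMM"              # toy translated reading frame
    print(search(genome_frame, peptides))     # [(2, 'MKT'), (3, 'KTAY'), (5, 'AYLR')]
```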

  10. An Anti-Electromagnetic Attack PUF Based on a Configurable Ring Oscillator for Wireless Sensor Networks

    PubMed Central

    Lu, Zhaojun; Li, Dongfang; Liu, Hailong; Gong, Mingyang; Liu, Zhenglin

    2017-01-01

    Wireless sensor networks (WSNs) are an emerging technology employed in some crucial applications. However, limited resources and physical exposure to attackers make security a challenging issue for a WSN. The ring oscillator-based physical unclonable function (RO PUF) is a potential option for protecting the security of sensor nodes because it can efficiently generate random responses for a key extraction mechanism, which avoids storing secret keys in non-volatile memory. In order to deploy an RO PUF in a WSN, hardware efficiency, randomness, uniqueness, and reliability should be taken into account. In addition, resistance to electromagnetic (EM) analysis attacks is important to guarantee the security of the RO PUF itself. In this paper, we propose a novel architecture of configurable RO PUF based on exclusive-or (XOR) gates. First, it dramatically increases the hardware efficiency compared with other types of RO PUFs. Second, it mitigates the vulnerability to EM analysis attack by placing the adjacent RO arrays in accordance with a cosine wave and a sine wave so that the frequency of each RO cannot be detected. We implement our proposal in Xilinx Artix-7 field programmable gate arrays (FPGAs) and conduct a set of experiments to evaluate the quality of the responses. The results show that the responses pass the National Institute of Standards and Technology (NIST) statistical test and have good uniqueness and reliability under different environments. Therefore, the proposed configurable RO PUF is suitable for establishing a key extraction mechanism in a WSN.
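
    The response-generation principle of an RO PUF (comparing the frequencies of ring-oscillator pairs that differ only through process variation) can be shown with a small Monte-Carlo model. The sketch below is a behavioral simulation with invented parameters; it does not model the XOR-based configurable architecture or the EM-resistant placement proposed in the paper.

```python
# Behavioral Monte-Carlo model of ring-oscillator PUF response bits (illustrative).
import numpy as np

def make_puf(n_pairs, nominal_hz=300e6, process_sigma=2e6, seed=0):
    """Each 'chip' gets fixed per-RO frequency offsets drawn from process variation."""
    rng = np.random.default_rng(seed)
    return nominal_hz + rng.normal(0.0, process_sigma, size=(n_pairs, 2))

def read_response(puf_freqs, noise_sigma=0.2e6, rng=None):
    """One evaluation: add measurement noise, then compare the two ROs of each pair."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = puf_freqs + rng.normal(0.0, noise_sigma, puf_freqs.shape)
    return (noisy[:, 0] > noisy[:, 1]).astype(int)

if __name__ == "__main__":
    chip_a = make_puf(128, seed=1)
    chip_b = make_puf(128, seed=2)
    rng = np.random.default_rng(42)
    ref = read_response(chip_a, rng=rng)
    # Reliability: same chip re-read; uniqueness: a different chip compared to the reference.
    intra = np.mean(ref != read_response(chip_a, rng=rng))   # ideally ~0
    inter = np.mean(ref != read_response(chip_b, rng=rng))   # ideally ~0.5
    print(f"intra-chip bit error rate: {intra:.3f}, inter-chip distance: {inter:.3f}")
```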

  11. Intelligent Hardware-Enabled Sensor and Software Safety and Health Management for Autonomous UAS

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Schumann, Johann; Ippolito, Corey

    2015-01-01

    Unmanned Aerial Systems (UAS) can only be deployed if they can effectively complete their mission and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. We propose to design a real-time, onboard system health management (SHM) capability to continuously monitor essential system components such as sensors, software, and hardware systems for detection and diagnosis of failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power hardware realization using Field Programmable Gate Arrays (FPGAs) in order to avoid overburdening limited computing resources or costly re-certification of flight software due to instrumentation. No currently available SHM capabilities (or combinations of currently existing SHM capabilities) come anywhere close to satisfying these three criteria, yet NASA will require such intelligent, hardware-enabled sensor and software safety and health management for introducing autonomous UAS into the National Airspace System (NAS). We propose a novel approach of creating modular building blocks for combining responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. Our proposed research program includes both developing this novel approach and demonstrating its capabilities using the NASA Swift UAS as a demonstration platform.

  12. Modular biometric system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Viazanko, Michael; O'Looney, Jimmy; Szu, Harold

    2009-04-01

    The Modular Biometric System (MBS) is an approach to support AiTR of cooperative and/or non-cooperative standoff biometrics in area persistent surveillance. An advanced active and passive EO/IR and RF sensor suite is not considered here, nor do we consider the ROC (PD vs. FAR) versus the standoff POT in this paper. Our goal is to catch roughly two dozen "most wanted (MW)" individuals, further separated ad hoc into a woman MW class and a man MW class, given an archival sparse frontal-face database, by means of various new instantaneous inputs called probing faces. We present an advanced algorithm: a mini-Max classifier, a sparse-sample realization of the Cramer-Rao Fisher bound of the Maximum Likelihood classifier, that minimizes the dispersion within the same woman classes and maximizes the separation among different man-woman classes, based on the simple feature space of MIT Pentland eigenfaces. The original aspect consists of a modular structured design approach at the system level with multi-level architectures, multiple computing paradigms, and adaptable/evolvable techniques to allow for achieving a scalable structure in terms of biometric algorithms, identification quality, sensors, database complexity, database integration, and component heterogeneity. The MBS consists of a number of biometric technologies including fingerprints, vein maps, and voice and face recognition with innovative DSP algorithms, and their hardware implementations such as Field Programmable Gate Arrays (FPGAs). Biometric technologies and the composed modular biometric system are significant for governmental agencies, enterprises, banks and all other organizations to protect people or control access to critical resources.

  13. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned for the data. In this work we implemented the standard BLAS (basic linear algebra subprograms) sparse matrix format named Compressed Sparse Row (CSR), which has been shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix formats for the information retrieval application. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target a Virtex-II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
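
    For readers unfamiliar with the format, the sketch below shows a term-document matrix stored in Compressed Sparse Row form and a query evaluated as a sparse matrix-vector product, which is the kernel the paper parallelizes in hardware. The tiny data set is invented for illustration.

```python
# Query processing as a CSR sparse matrix-vector multiplication (illustrative).
# Rows = documents, columns = terms, values = term weights.

values  = [2.0, 1.0, 3.0, 1.0, 4.0]   # nonzero term weights
col_idx = [0,   2,   1,   2,   3  ]   # term ids of those nonzeros
row_ptr = [0, 2, 4, 5]                # row i occupies values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in CSR form."""
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

if __name__ == "__main__":
    query = [1.0, 0.0, 1.0, 0.0]           # query containing terms 0 and 2
    scores = csr_matvec(values, col_idx, row_ptr, query)
    print("document scores:", scores)      # rank documents by score
```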

  14. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
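
    The JPL 'Fast Lossless' algorithm itself is specified elsewhere; purely to illustrate the general idea of adaptive predictive compression followed by entropy coding of residuals, the toy sketch below uses a previous-band predictor with a sign-LMS weight update and Golomb-Rice coding of mapped residuals. It is not the JPL algorithm, and every parameter in it is an assumption.

```python
# Toy adaptive predictive coder: previous-band predictor (sign-LMS) + Golomb-Rice residual coding.
# Illustrative only -- not the JPL Fast Lossless algorithm.

def zigzag(r):
    """Map a signed residual to a non-negative integer for Rice coding."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(value, k):
    """Golomb-Rice code: unary-coded quotient, then k binary remainder bits."""
    q, rem = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(rem, f"0{k}b") if k else "")

def compress_band(current, previous, mu=0.01, k=3):
    """Predict each sample of `current` from the co-located sample of `previous`."""
    w, bits = 1.0, []
    for cur, prev in zip(current, previous):
        pred = int(round(w * prev))
        resid = cur - pred
        bits.append(rice_encode(zigzag(resid), k))
        # sign-LMS adaptation of the single predictor weight
        sign_e = 1 if resid > 0 else -1 if resid < 0 else 0
        sign_x = 1 if prev > 0 else -1 if prev < 0 else 0
        w += mu * sign_e * sign_x
    return "".join(bits)

if __name__ == "__main__":
    prev_band = [100, 102, 105, 107, 110, 111, 115, 118]
    curr_band = [101, 103, 104, 108, 111, 113, 116, 120]   # strongly correlated with prev_band
    stream = compress_band(curr_band, prev_band)
    print(f"{len(curr_band)} samples -> {len(stream)} coded bits")
```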

  15. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.

  16. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.

  17. A 3.9 ps Time-Interval RMS Precision Time-to-Digital Converter Using a Dual-Sampling Method in an UltraScale FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2016-10-01

    Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGA. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that the TDC precision beyond the cell delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using the bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.
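
    One step mentioned in the paper, on-line calibration, is commonly done with a code-density (statistical) test: random hits fill a histogram over the delay-line bins, and each bin's width is taken proportional to its count. The sketch below shows that calibration step in plain Python with invented bin statistics; it is not the authors' UltraScale implementation.

```python
# Code-density (statistical) calibration of TDC bin widths (illustrative).
import numpy as np

def calibrate_bins(hit_counts, clock_period_ps):
    """Return the estimated width and center time of each delay-line bin.
    hit_counts[i] is how often a random hit landed in bin i."""
    counts = np.asarray(hit_counts, dtype=float)
    widths = counts / counts.sum() * clock_period_ps       # width proportional to occupancy
    edges = np.concatenate(([0.0], np.cumsum(widths)))
    centers = 0.5 * (edges[:-1] + edges[1:])                # bin code -> time conversion
    return widths, centers

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Uneven bins: counts from a code-density test over one 2000 ps clock period.
    counts = rng.integers(500, 1500, size=128)
    widths, centers = calibrate_bins(counts, clock_period_ps=2000.0)
    code = 42
    print(f"bin {code}: width {widths[code]:.1f} ps, center {centers[code]:.1f} ps")
```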

  18. The characterization and application of a low resource FPGA-based time to digital converter

    NASA Astrophysics Data System (ADS)

    Balla, Alessandro; Mario Beretta, Matteo; Ciambrone, Paolo; Gatta, Maurizio; Gonnella, Francesco; Iafolla, Lorenzo; Mascolo, Matteo; Messi, Roberto; Moricciani, Dario; Riondino, Domenico

    2014-03-01

    Time to Digital Converters (TDCs) are very common devices in particle physics experiments. Many "off-the-shelf" TDCs can be employed, but the need for a custom DAta acQuisition (DAQ) system makes TDCs implemented on Field-Programmable Gate Arrays (FPGAs) desirable. Most of the architectures developed so far are based on tapped delay lines with precision down to 10 ps, obtained at the cost of high FPGA resource usage and non-linearity issues that must be managed. Often such precision is not necessary; in this case TDC architectures with low resource occupancy are preferable, allowing the implementation of data processing systems and of other utilities on the same device. In order to reconstruct γγ physics events tagged with the High Energy Tagger (HET) in KLOE-2 (K LOng Experiment 2), we need to measure the Time Of Flight (TOF) of the electrons and positrons from the KLOE-2 Interaction Point (IP) to our tagging stations (11 m apart). The required resolution must be better than the bunch spacing (2.7 ns). We have developed and implemented on a Xilinx Virtex-5 FPGA a 32-channel TDC with a precision of 255 ps and low non-linearity effects, along with an embedded data acquisition system and the interface to the online FARM of KLOE-2. The TDC is based on a low-resource-occupancy technique, 4× oversampling, which in this work is pushed to its best resolution, and its performance was exhaustively measured.

  19. An Anti-Electromagnetic Attack PUF Based on a Configurable Ring Oscillator for Wireless Sensor Networks.

    PubMed

    Lu, Zhaojun; Li, Dongfang; Liu, Hailong; Gong, Mingyang; Liu, Zhenglin

    2017-09-15

    Wireless sensor networks (WSNs) are an emerging technology employed in some crucial applications. However, limited resources and physical exposure to attackers make security a challenging issue for a WSN. The ring oscillator-based physical unclonable function (RO PUF) is a potential option for protecting the security of sensor nodes because it can efficiently generate random responses for a key extraction mechanism, which avoids storing secret keys in non-volatile memory. In order to deploy an RO PUF in a WSN, hardware efficiency, randomness, uniqueness, and reliability should be taken into account. In addition, resistance to electromagnetic (EM) analysis attacks is important to guarantee the security of the RO PUF itself. In this paper, we propose a novel architecture of configurable RO PUF based on exclusive-or (XOR) gates. First, it dramatically increases the hardware efficiency compared with other types of RO PUFs. Second, it mitigates the vulnerability to EM analysis attack by placing the adjacent RO arrays in accordance with a cosine wave and a sine wave so that the frequency of each RO cannot be detected. We implement our proposal in Xilinx Artix-7 field programmable gate arrays (FPGAs) and conduct a set of experiments to evaluate the quality of the responses. The results show that the responses pass the National Institute of Standards and Technology (NIST) statistical test and have good uniqueness and reliability under different environments. Therefore, the proposed configurable RO PUF is suitable for establishing a key extraction mechanism in a WSN.

  20. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
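
    For context, the per-site core of the phylogenetic likelihood function combines the conditional likelihood vectors of two child nodes through their branch transition matrices. The NumPy sketch below states that recurrence for a single internal node; the matrices and partials are random placeholders, and none of BEAGLE's scaling, normalization or FPGA pipelining is represented.

```python
# Per-site Felsenstein pruning step: parent partials from two children (illustrative).
import numpy as np

def parent_partials(p_left, p_right, partials_left, partials_right):
    """p_*: (states, states) branch transition matrices.
    partials_*: (sites, states) conditional likelihoods of the child nodes."""
    left_term = partials_left @ p_left.T     # sum_y P_left[x, y] * L_left[site, y]
    right_term = partials_right @ p_right.T
    return left_term * right_term            # element-wise product over states

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states, sites = 4, 1000                  # nucleotide model, 1000 alignment sites
    p1 = rng.dirichlet(np.ones(states), size=states)   # row-stochastic stand-ins
    p2 = rng.dirichlet(np.ones(states), size=states)
    l1 = rng.random((sites, states))
    l2 = rng.random((sites, states))
    parent = parent_partials(p1, p2, l1, l2)
    print(parent.shape)                      # (1000, 4)
```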

  1. A framework for porting the NeuroBayes machine learning algorithm to FPGAs

    NASA Astrophysics Data System (ADS)

    Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.

    2016-01-01

    The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize and easily adapt its implementation on FPGAs, a framework was developed. Within the framework an HDL model, written in Python using MyHDL, is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation of chosen configurations.

  2. Study of heterogeneous and reconfigurable architectures in the communication domain

    NASA Astrophysics Data System (ADS)

    Feldkaemper, H. T.; Blume, H.; Noll, T. G.

    2003-05-01

    One of the most challenging design issues for next generations of (mobile) communication systems is fulfilling the computational demands while finding an appropriate trade-off between flexibility and implementation aspects, especially power consumption. Flexibility of modern architectures is desirable, e.g. concerning adaptation to new standards and reduction of the time-to-market of a new product. Typical target architectures for future communication systems include embedded FPGAs, dedicated macros, and programmable digital signal and control oriented processor cores, as each of these has its specific advantages. These will be integrated as a System-on-Chip (SoC). For such a heterogeneous architecture, design space exploration and appropriate partitioning play a crucial role. Using a Viterbi decoder, as frequently used in communication systems, as an exemplary vehicle, we show which costs in terms of ATE complexity arise when implementing typical components on different types of architecture blocks. About seven orders of magnitude separate a physically optimised implementation from an implementation on a programmable DSP kernel. An implementation on an embedded FPGA kernel lies between these two, representing an attractive compromise with high flexibility and low power consumption. Extending this comparison to further components, it is shown quantitatively that the cost ratio between different implementation alternatives is closely related to the operation to be performed. This information is essential for the appropriate partitioning of heterogeneous systems.

  3. Python based high-level synthesis compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based High-Level Synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation and first results of the created Python-based compiler.
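
    The authors' compiler is not reproduced in this excerpt; to make the idea concrete, the toy sketch below uses Python's standard ast module to turn a one-line combinational function into a VHDL-like assignment string. It illustrates only the parse-and-map concept, with an invented and drastically simplified mapping.

```python
# Toy illustration of mapping a Python expression to a VHDL-style assignment via the ast module.
import ast

OPS = {ast.BitAnd: "and", ast.BitOr: "or", ast.BitXor: "xor"}

def expr_to_vhdl(node):
    """Recursively convert a tiny subset of Python expressions to VHDL text."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return f"({expr_to_vhdl(node.left)} {OPS[type(node.op)]} {expr_to_vhdl(node.right)})"
    raise NotImplementedError(f"unsupported construct: {ast.dump(node)}")

def compile_function(source):
    """Emit a VHDL-like concurrent assignment for a single-return Python function."""
    func = ast.parse(source).body[0]
    ret = func.body[0]
    assert isinstance(func, ast.FunctionDef) and isinstance(ret, ast.Return)
    inputs = ", ".join(a.arg for a in func.args.args)
    return f"-- inputs: {inputs}\n{func.name}_out <= {expr_to_vhdl(ret.value)};"

if __name__ == "__main__":
    src = "def majority(a, b, c):\n    return (a & b) | (b & c) | (a & c)\n"
    print(compile_function(src))
```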

  4. Real-Time Processing System for the JET Hard X-Ray and Gamma-Ray Profile Monitor Enhancement

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana M.; Pereira, Rita C.; Neto, André; Valcárcel, Daniel F.; Alves, Diogo; Sousa, Jorge; Carvalho, Bernardo B.; Kiptily, Vasily; Syme, Brian; Blanchard, Patrick; Murari, Andrea; Correia, Carlos M. B. A.; Varandas, Carlos A. F.; Gonçalves, Bruno

    2014-06-01

    The Joint European Torus (JET) is currently undertaking an enhancement program which includes tests of relevant diagnostics with real-time processing capabilities for the International Thermonuclear Experimental Reactor (ITER). Accordingly, a new real-time processing system was developed and installed at JET for the gamma-ray and hard X-ray profile monitor diagnostic. The new system is connected to 19 CsI(Tl) photodiodes in order to obtain the line-integrated profiles of the gamma-ray and hard X-ray emissions. Moreover, it was designed to overcome the former data acquisition (DAQ) limitations while exploiting the required real-time features. The new DAQ hardware, based on the Advanced Telecommunications Computing Architecture (ATCA) standard, includes reconfigurable digitizer modules with embedded field-programmable gate array (FPGA) devices capable of acquiring and simultaneously processing data in real time from the 19 detectors. A suitable algorithm was developed and implemented in the FPGAs, which are able to deliver the corresponding energy of the acquired pulses. The processed data are sent periodically, during the discharge, through the JET real-time network and stored in the JET scientific databases at the end of the pulse. The interface between the ATCA digitizers, the JET control and data acquisition system (CODAS), and the JET real-time network is provided by the Multithreaded Application Real-Time executor (MARTe). The work developed allowed attaining two of the major milestones required by next-generation fusion devices: the ability to process and simultaneously supply high-volume data rates in real time.

  5. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
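
    A linear-kernel SVM's per-pixel decision reduces to a dot product with a weight vector plus a bias, which is what makes it attractive for hardware acceleration. The sketch below shows that decision rule for a small one-vs-rest multi-class case; the weights, biases and threshold are invented stand-ins, with only the class names taken from the abstract.

```python
# Linear-kernel SVM decision rule (one-vs-rest), the per-pixel kernel accelerated in hardware.
import numpy as np

CLASSES = ["snow", "water", "ice", "land", "cloud"]          # class list from the abstract

def classify_pixels(features, weights, biases, threshold=0.0):
    """features: (pixels, bands); weights: (classes, bands); biases: (classes,).
    Returns the best class index per pixel, or -1 (unclassified) if no score passes."""
    scores = features @ weights.T + biases                   # f_c(x) = w_c . x + b_c
    best = np.argmax(scores, axis=1)
    best[scores[np.arange(len(best)), best] < threshold] = -1
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    bands = 9
    weights = rng.standard_normal((len(CLASSES), bands))     # stand-in trained weights
    biases = rng.standard_normal(len(CLASSES))
    pixels = rng.standard_normal((5, bands))                 # five pixels' band values
    labels = classify_pixels(pixels, weights, biases)
    print([CLASSES[i] if i >= 0 else "unclassified" for i in labels])
```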

  6. TOT measurement implemented in FPGA TDC

    NASA Astrophysics Data System (ADS)

    Fan, Huan-Huan; Cao, Ping; Liu, Shu-Bin; An, Qi

    2015-11-01

    Time measurement plays a crucial role in particle identification in high energy physics experiments. With increasingly demanding physics goals and the development of electronics, modern time measurement systems need to meet the requirement of excellent resolution as well as high integrity. Based on Field Programmable Gate Arrays (FPGAs), FPGA time-to-digital converters (TDCs) have become one of the most mature and prominent time measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the interval between the signal leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, either the leading or the trailing edge. Generally, to measure the interval, two TDC channels need to be used at the same time, one for the leading edge and the other for the trailing edge. However, this method unavoidably increases the amount of FPGA resources used and reduces the TDC's integrity. This paper presents a method of TOT measurement implemented in a Xilinx Virtex-5 FPGA. With this method, TOT measurement can be achieved using only one TDC input channel, and both the consumed resources and the time resolution can be guaranteed. Testing shows that this TDC can achieve a resolution better than 15 ps for leading-edge measurement and 37 ps for TOT measurement. Furthermore, the TDC measurement dead time is about two clock cycles, which makes it suitable for applications with higher physics event rates. Supported by National Natural Science Foundation of China (11079003, 10979003)

  7. Software Defined GPS Receiver for International Space Station

    NASA Technical Reports Server (NTRS)

    Duncan, Courtney B.; Robison, David E.; Koelewyn, Cynthia Lee

    2011-01-01

    JPL is providing a software defined radio (SDR) that will fly on the International Space Station (ISS) as part of the CoNNeCT project under NASA's SCaN program. The SDR consists of several modules including a Baseband Processor Module (BPM) and a GPS Module (GPSM). The BPM executes applications (waveforms) consisting of software components for the embedded SPARC processor and logic for two Virtex II Field Programmable Gate Arrays (FPGAs) that operate on data received from the GPSM. GPS waveforms on the SDR are enabled by an L-Band antenna, low noise amplifier (LNA), and the GPSM that performs quadrature downconversion at L1, L2, and L5. The GPS waveform for the JPL SDR will acquire and track L1 C/A, L2C, and L5 GPS signals from a CoNNeCT platform on ISS, providing the best GPS-based positioning of ISS achieved to date, the first use of multiple frequency GPS on ISS, and potentially the first L5 signal tracking from space. The system will also enable various radiometric investigations on ISS such as local multipath or ISS dynamic behavior characterization. In following the software-defined model, this work will create a highly portable GPS software and firmware package that can be adapted to another platform with the necessary processor and FPGA capability. This paper also describes ISS applications for the JPL CoNNeCT SDR GPS waveform, possibilities for future global navigation satellite system (GNSS) tracking development, and the applicability of the waveform components to other space navigation applications.

  8. Design space exploration of high throughput finite field multipliers for channel coding on Xilinx FPGAs

    NASA Astrophysics Data System (ADS)

    de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.

    2012-09-01

    Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
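
    As background, multiplication in GF(2^m) is a carry-less polynomial multiplication followed by reduction modulo an irreducible polynomial; the bit-serial form below is the software analogue of the shift-and-add structures such multipliers build on. The field GF(2^8) with polynomial 0x11B is chosen only because it gives the well-known worked example from FIPS-197; Reed-Solomon codecs typically use other polynomials (e.g., 0x11D) and field sizes.

```python
# Bit-serial multiplication in GF(2^m): carry-less multiply with on-the-fly modular reduction.

def gf_mult(a, b, poly=0x11B, m=8):
    """Multiply a and b in GF(2^m) defined by the irreducible polynomial `poly`."""
    result = 0
    for _ in range(m):
        if b & 1:                 # add (XOR) the current shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):          # reduce when a overflows the field width
            a ^= poly
    return result

if __name__ == "__main__":
    x, y = 0x57, 0x83
    print(hex(gf_mult(x, y)))     # 0xc1, matching the worked example in FIPS-197
```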

  9. A Genetic Representation for Evolutionary Fault Recovery in Virtex FPGAs

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Larchev, Greg; DeMara, Ronald; Korsmeyer, David (Technical Monitor)

    2003-01-01

    Most evolutionary approaches to fault recovery in FPGAs focus on evolving alternative logic configurations as opposed to evolving the intra-cell routing. Since the majority of transistors in a typical FPGA are dedicated to interconnect, nearly 80% according to one estimate, evolutionary fault-recovery systems should benefit by accommodating routing. In this paper, we propose an evolutionary fault-recovery system employing a genetic representation that takes into account both logic and routing configurations. Experiments were run using a software model of the Xilinx Virtex FPGA. We report that using four Virtex combinational logic blocks, we were able to evolve a 100% accurate quadrature decoder finite state machine in the presence of a stuck-at-zero fault.

  10. Semi-autonomous unmanned ground vehicle control system

    NASA Astrophysics Data System (ADS)

    Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James

    2006-05-01

    Unmanned Ground Vehicles (UGVs) have advantages over people in a number of different applications, ranging from sentry duty and scouting hazardous areas to convoying goods and supplies over long distances and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation with a human in the control loop. Using tele-operation, a user controls the UGV from the relative safety and comfort of a control station and sends commands to the UGV remotely. It is difficult for the user to issue higher-level commands such as "patrol this corridor" or "move to this position while avoiding obstacles". As computer vision algorithms are implemented in hardware, the UGV can easily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, frame rate can reach as high as 200 frames per second with a small region of interest. This increase in the speed of vision algorithm processing allows UGVs to become more autonomous, as they are able to recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user is able to focus on giving broad supervisory commands and goals to the UGVs, allowing the user to control multiple UGVs at once while still maintaining the convenience of working from a central base station. In this paper, we describe a novel control system for semi-autonomous UGVs. This control system combines a user interface similar to a simple tele-operation station with a control package that includes the FPGA and multiple cameras. The control package interfaces with the UGV and provides the necessary control to guide the UGV.

  11. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which assembles the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
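
    The described architecture (independent threaded blocks joined by POSIX pipes into a flow graph) can be mimicked in miniature in Python. The sketch below wires a source block to a processing block through an os.pipe; the block granularity, framing and names are invented, and the real NASA software is, of course, far more elaborate.

```python
# Minimal two-block flow graph: threaded blocks connected by a POSIX pipe (illustrative).
import os
import threading

def source_block(write_fd, n_frames=5):
    """Produce fixed-size 'sample frames' and push them into the pipe."""
    for i in range(n_frames):
        os.write(write_fd, bytes([i]) * 8)      # one 8-byte frame per iteration
    os.close(write_fd)                          # EOF tells the downstream block to stop

def gain_block(read_fd, gain=2):
    """Consume frames, apply a trivial 'gain', and print the result."""
    while True:
        frame = os.read(read_fd, 8)
        if not frame:                           # writer closed the pipe
            break
        print([min(255, b * gain) for b in frame])
    os.close(read_fd)

if __name__ == "__main__":
    r, w = os.pipe()
    threads = [threading.Thread(target=source_block, args=(w,)),
               threading.Thread(target=gain_block, args=(r,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```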

  12. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which assembles the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.

  13. Dual Active Bridge based DC Transformer LabVIEW FPGA Control Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the area of power electronics control, Field Programmable Gate Arrays (FPGAs) have the capability to outperform their Digital Signal Processor (DSP) counterparts due to the FPGA's ability to implement true parallel processing and therefore facilitate higher switching frequencies, higher control bandwidth, and/or enhanced functionality. National Instruments (NI) has developed two platforms, CompactRIO (cRIO) and Single-Board RIO (sbRIO), which combine a real-time processor with an FPGA. The FPGA can be programmed with a subset of the well-known LabVIEW graphical programming language. The candidate software implements complete control algorithms in LabVIEW FPGA for a DC Transformer (DCX) based on a dual active bridge (DAB). A DCX is an isolated bi-directional DC-DC converter designed to operate at a unity conversion ratio, M = Vin / (n · Vout), where Vin is the primary-side DC bus voltage, Vout is the secondary-side DC bus voltage, and n is the turns ratio of the embedded high frequency transformer (HFX). The DCX based on a DAB incorporates two H-bridges, a resonant inductor, and an HFX to provide this functionality. The candidate software employs phase-shift modulation of the two H-bridges and a feedback loop to regulate the conversion ratio at unity. The software also includes alarm-handling capabilities as well as debugging and tuning tools. The software fits on the Xilinx Virtex-5 LX110 FPGA embedded in the NI cRIO-9118 FPGA chassis and, with a 40 MHz base clock, supports a modulation update rate of 40 MHz and user-settable switching frequencies and synchronized control-loop update rates of tens of kHz.
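
    As a purely behavioral illustration of the control idea (regulating the conversion ratio toward unity by adjusting the phase shift between the two H-bridges), the sketch below closes a discrete PI loop around a crude first-order plant model. The plant characteristic, gains and iteration count are invented; the real LabVIEW FPGA code, modulator and alarm handling are not represented.

```python
# Behavioral PI regulation of a DAB-based DC transformer's conversion ratio (illustrative).

KP, KI = 0.4, 0.01          # invented PI gains (per control-loop iteration)

def plant(phase_shift, m_prev):
    """Crude first-order stand-in: the conversion ratio drifts toward a value set by the phase shift."""
    m_target = 0.9 + 0.5 * phase_shift          # invented static characteristic
    return m_prev + 0.2 * (m_target - m_prev)

def run_loop(steps=2000):
    m, integral, phase = 0.85, 0.0, 0.0          # start below unity conversion ratio
    for _ in range(steps):
        error = 1.0 - m                          # regulate M toward unity
        integral += error
        phase = max(-0.5, min(0.5, KP * error + KI * integral))  # clamped phase-shift command
        m = plant(phase, m)
    return m, phase

if __name__ == "__main__":
    m, phase = run_loop()
    print(f"final conversion ratio M = {m:.4f}, phase-shift command = {phase:.4f}")
```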

  14. A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.

    PubMed

    Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei

    2013-05-30

    Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) due to their ability to record the signal intensity of all ions within the same pulse. However, the acquisition system raises the requirement for data throughput as the conversion rate and resolution of the ADC increase. It is therefore of considerable interest to develop a high-performance real-time acquisition system that can relieve the limitation on data throughput. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing. A pipeline procedure is used to reduce the time consumption of the accumulation strategy. To realize continuous data transfer, a high-efficiency transmission strategy is developed, based on a ping-pong procedure. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance. The optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager can realize 100% efficiency with an extraction frequency below 31.23 kHz by modifying the number of accumulation steps. Per unit time, the averager yields a superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS). The efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.

  15. FPGA-based digital signal processing for the next generation radio astronomy instruments: ultra-pure sideband separation and polarization detection

    NASA Astrophysics Data System (ADS)

    Alvear, Andrés.; Finger, Ricardo; Fuentes, Roberto; Sapunar, Raúl; Geelen, Tom; Curotto, Franco; Rodríguez, Rafael; Monasterio, David; Reyes, Nicolás.; Mena, Patricio; Bronfman, Leonardo

    2016-07-01

    The capacity of Field Programmable Gate Arrays (FPGAs) and the speed of Analog to Digital Converters (ADCs) have increased greatly in the last decade. Nowadays we can find one million or more logic blocks (slices) as well as several thousand arithmetic units (ALUs/DSPs) available on a single FPGA chip. We can also commercially procure ADC chips reaching 10 GSPS with 8-bit resolution or more. This unprecedented computing power has allowed the digitization of signal processes traditionally performed by analog components. In radio astronomy, the clearest example has been the development of digital sideband-separating receivers which, by replacing the IF hybrid and calibrating the system imbalances, have exhibited a sideband rejection above 40 dB; this is 20 to 30 dB higher than traditional analog sideband-separating (2SB) receivers. In Rodriguez et al. [1] and Finger et al. [2], we demonstrated very high digital sideband separation at 3 mm and 1 mm wavelengths using laboratory setups. We here show the first implementation of this technique with a 3 mm receiver integrated into a telescope, where the calibration was performed by quasi-optical injection of the test tone in front of the Cassegrain antenna. We also report progress in digital polarization synthesis, particularly in the implementation of a calibrated Digital Ortho-Mode Transducer (DOMT) based on the Morgan et al. proof of concept [3]. They showed off-line synthesis of polarization with isolation higher than 40 dB. We plan to implement a digital polarimeter on a real-time FPGA-based (ROACH-2) platform, to show ultra-pure polarization isolation in a non-stop integrating spectrometer.

  16. Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.

    2012-01-01

    Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.

  17. High-performance hardware implementation of a parallel database search engine for real-time peptide mass fingerprinting

    PubMed Central

    Bogdán, István A.; Rivers, Jenny; Beynon, Robert J.; Coca, Daniel

    2008-01-01

    Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In a paper published earlier we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves almost 170-fold speed gain relative to a conventional software implementation running on a dual processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk PMID:18453553
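
    The core operation being parallelized in hardware is, in essence, scoring a measured mass list against many theoretical digests. A hypothetical software sketch of that scoring follows; the mass values, tolerance and simple hit-count score are illustrative, and the actual engine's scoring function is not reproduced here.

```python
from bisect import bisect_left

# Count how many experimental peptide masses fall within a tolerance of a
# protein's theoretical peptide masses, then rank proteins by hit count.

def match_count(query_masses, theoretical_masses, tol=0.2):
    theoretical = sorted(theoretical_masses)
    hits = 0
    for m in query_masses:
        i = bisect_left(theoretical, m - tol)
        if i < len(theoretical) and theoretical[i] <= m + tol:
            hits += 1
    return hits

database = {                       # hypothetical theoretical digests (Da)
    "protein_A": [512.3, 877.4, 1021.6, 1533.8, 2044.1],
    "protein_B": [498.2, 877.5, 1320.7, 1800.9],
}
query = [877.4, 1021.5, 1533.9]    # measured fingerprint
ranked = sorted(database, key=lambda p: match_count(query, database[p]), reverse=True)
print(ranked[0], "best match with", match_count(query, database[ranked[0]]), "hits")
```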

  18. Field Programmable Gate Array Control of Power Systems in Graduate Student Laboratories

    DTIC Science & Technology

    2008-03-01

    Master's Thesis, Naval Postgraduate School, Monterey, California. Approved for public release; distribution is unlimited. [Title-page and report-form excerpt:] "...Electronics curriculum track is the development of a design center that explores Field Programmable Gate Array (FPGA) control of power electronics"

  19. Integrated 3-D vision system for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Hou, Kun M.; Shawky, Mohamed; Tu, Xiaowei

    1992-03-01

    Autonomous vehicles have become a multidisciplinary field, and their evolution is taking advantage of recent technological progress in computer architectures. As development tools become more sophisticated, the trend is toward more specialized, or even dedicated, architectures. In this paper, we focus on a parallel vision subsystem integrated into the overall system architecture. The system modules work in parallel, communicating through a hierarchical blackboard, an extension of the 'tuple space' from LINDA concepts, where they may exchange data or synchronization messages. The general-purpose processing elements are of different skills, built around 40 MHz Intel i860 RISC processors for high-level processing and pipelined systolic array processors based on PLAs or FPGAs for low-level processing.

  20. A radiation tolerant Data link board for the ATLAS Tile Cal upgrade

    NASA Astrophysics Data System (ADS)

    Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.

    2016-01-01

    This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs and two Molex QSFP+ electro-optic modules with uplinks running at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers, which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple-mode redundancy (TMR) and internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even though that area is small.

  1. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute operations of traditional processors and by exploiting a greater level of parallelism, using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.
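
    As a purely hypothetical example of the style of algorithmic Python such a flow targets (the accepted language subset of the compiler described here is not spelled out in the abstract), a fixed-coefficient FIR filter written as a streaming loop maps naturally onto FPGA shift registers and parallel multipliers:

```python
# Hypothetical algorithmic description: integer taps suit fixed-point hardware,
# the list acts as a shift register, and the sum is a bank of parallel MACs.

COEFFS = [1, 3, 3, 1]

def fir(samples):
    taps = [0] * len(COEFFS)
    out = []
    for x in samples:
        taps = [x] + taps[:-1]                                 # shift register
        out.append(sum(c * t for c, t in zip(COEFFS, taps)))   # parallel MACs
    return out

print(fir([0, 0, 1, 0, 0, 0]))   # impulse response reproduces the coefficients
```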

  2. Adaptation of the Electra Radio to Support Multiple Receive Channels

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Shah, Biren N.; Bruvold, Kristoffer N.; Bell, David J.

    2011-01-01

    Proposed future Mars missions plan communication between multiple assets (rovers). This paper presents the results of a study carried out to assess the potential adaptation of the Electra radio to a multi-channel transceiver. The basic concept is a Frequency Division Multiplexing (FDM) communications scheme wherein different receiver architectures are examined. Options considered include: (1) multiple IF slices, A/Ds and FPGAs, each programmed with an Electra baseband modem; (2) a common IF but multiple A/Ds and FPGAs; and (3) a common IF, a single A/D and single or multiple FPGAs programmed to accommodate the FDM signals. These options represent the usual tradeoff between analog and digital complexity. Given the space application, a common IF is preferable; however, multiple users present dynamic-range challenges (e.g., near-far constraints) that would favor multiple IF slices (Option 1). Conversely, with a common IF and multiple A/Ds (Option 2), individual AGC control of the A/Ds would be an important consideration. Option 3 would require a common AGC control strategy and would entail multiple digital down-conversion paths within the FPGA. In this paper, both the FDM parameters and the different Electra design options will be examined. In particular, signal channel spacing as a function of user data rates and transmit powers will be evaluated. In addition, tradeoffs between the different Electra design options will be presented, with the ultimate goal of defining an augmented Electra radio architecture for potential future missions.
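
    Option 3 hinges on replicating a digital down-conversion (DDC) path per FDM channel inside the FPGA. The following is a minimal numerical sketch of one such path; the common-IF sample rate, channel frequencies and the crude moving-average low-pass filter are assumptions for illustration, not the Electra design.

```python
import numpy as np

# One DDC path: mix the common-IF samples with an NCO at the channel offset,
# low-pass filter, and decimate. Replicated once per FDM user in Option 3.

def ddc(samples, fs, f_channel, decim=8, ntaps=16):
    n = np.arange(len(samples))
    baseband = samples * np.exp(-2j * np.pi * f_channel * n / fs)      # NCO mix
    lpf = np.convolve(baseband, np.ones(ntaps) / ntaps, mode="same")   # crude low-pass
    return lpf[::decim]

fs = 32e6                                   # hypothetical common-IF sample rate
t = np.arange(4096) / fs
if_signal = np.cos(2 * np.pi * 4e6 * t) + 0.5 * np.cos(2 * np.pi * 9e6 * t)  # two FDM users
ch0 = ddc(if_signal, fs, 4e6)               # extract the 4 MHz user
print("channel 0 mean power:", np.mean(np.abs(ch0) ** 2))
```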

  3. Multi-Level Pre-Correlation RFI Flagging for Real-Time Implementation on UniBoard

    NASA Astrophysics Data System (ADS)

    Dumez-Viou, Cédric; Weber, Rodolphe; Ravier, Philippe

    2016-03-01

    Because of the denser active use of the spectrum, and because of radio telescopes' higher sensitivity, radio frequency interference (RFI) mitigation has become a sensitive topic for current and future radio telescope designs. Even if quite sophisticated approaches have been proposed in recent years, the majority of RFI mitigation operational procedures are based on post-correlation flagging of corrupted data. Moreover, given the huge amount of data delivered by current and next-generation radio telescopes, all these RFI detection procedures have to be at least automatic and, if possible, real-time. In this paper, the implementation of a real-time pre-correlation RFI detection and flagging procedure on generic high-performance computing platforms based on field programmable gate arrays (FPGAs) is described, simulated and tested. One of these boards, UniBoard, developed under a Joint Research Activity in the RadioNet FP7 European programme, is based on eight FPGAs interconnected by a high-speed transceiver mesh. It provides up to 4 TMACs with Altera® Stratix IV FPGAs and a 160 Gbps data rate for the input data stream. The proposed concept is to continuously monitor the data quality at different stages in the digital preprocessing pipeline between the antennas and the correlator, at the station level and the core level. In this way, the detectors are applied at stages where different time-frequency resolutions can be achieved and where the interference-to-noise ratio (INR) is maximum, right before any dilution of RFI characteristics by subsequent channelizations or signal recombinations. The detection decisions could be linked to an RFI statistics database or could be attached to the data for later-stage flagging. Considering the high in-out data rate in the pre-correlation stages, only real-time, go-through detectors (i.e., no iterative processing) can be implemented. In this paper, a real-time and adaptive detection scheme is described. An ongoing case study has been set up with the Electronic Multi-Beam Radio Astronomy Concept (EMBRACE) radio telescope facility at Nançay Observatory. The objective is to evaluate the performance of this concept in terms of hardware complexity, detection efficiency and additional RFI metadata rate cost. The UniBoard implementation scheme is described.
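
    The abstract does not spell out the detector, so the following is only a generic sketch of the kind of go-through (non-iterative), adaptive power-threshold detector that fits the stated real-time constraint; the threshold rule, the exponential update and all constants are assumptions for illustration.

```python
import numpy as np

# Flag a time-frequency cell when its power exceeds an adaptive threshold
# derived from an exponentially weighted running estimate of the clean power.
# Statistics are only updated on unflagged samples, so a burst does not bias them.

def flag_stream(powers, n_sigma=4.0, alpha=0.01):
    mean, var = powers[0], powers[0] ** 2
    flags = []
    for p in powers:
        flagged = p > mean + n_sigma * np.sqrt(max(var, 1e-12))
        flags.append(flagged)
        if not flagged:
            mean = (1 - alpha) * mean + alpha * p
            var = (1 - alpha) * var + alpha * (p - mean) ** 2
    return np.array(flags)

rng = np.random.default_rng(0)
powers = rng.exponential(1.0, 1000)           # noise-like channel power
powers[400:405] += 30.0                       # short simulated RFI burst
print("flagged samples:", np.flatnonzero(flag_stream(powers)))  # burst near 400 is flagged
```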

  4. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE PAGES

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...

    2015-12-17

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
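
    Conceptually, frame-level redundancy exploits the fact that a coarse-grain TMR design holds three copies of corresponding configuration frames, so an upset copy can be outvoted and rewritten. Below is a toy software sketch of that voting step; the frame contents and sizes are invented, and the real controller operates on Virtex-5 configuration frames through ICAP.

```python
# Bitwise majority vote across three redundant copies of a configuration frame.

def majority_vote_frame(f0: bytes, f1: bytes, f2: bytes) -> bytes:
    return bytes((a & b) | (a & c) | (b & c) for a, b, c in zip(f0, f1, f2))

golden = bytes([0xA5] * 8)
corrupted = bytearray(golden)
corrupted[3] ^= 0x10                                   # simulated SEU in one copy
repaired = majority_vote_frame(golden, bytes(corrupted), golden)
print(repaired == golden)                              # True: the upset copy is outvoted
```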

  5. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.

  6. Study, design and integration of an FPGA-based system for the time-of-flight calculation applied to PET equipment

    NASA Astrophysics Data System (ADS)

    Aguilar Talens, D. Albert

    Nuclear Medicine has undergone significant advances in recent years due to improvements in materials, electronics, software techniques, processing, etc., which has allowed its applications to be considerably extended. One technique that has progressed in this area is Positron Emission Tomography (PET), a non-invasive method of special relevance in the diagnosis and assessment of cancer, among other conditions. This system is based on the collection and processing of data from which images of the spatial and temporal distribution of the metabolic processes generated inside the body are obtained. The imaging system consists of a set of detectors, normally placed in a ring geometry, so that each one provides information about events that have occurred inside. One of the developments that has significantly advanced PET systems is the set of techniques to determine the Time-of-Flight (TOF) of the photons generated by the annihilation of positrons with their antiparticle, the electron. Determining TOF allows a more precise location of the events generated inside the ring and therefore facilitates the image reconstruction ultimately used by the medical equipment for diagnosis and/or treatment. This Thesis starts from the premise of developing a system based on Field Programmable Gate Arrays (FPGAs) integrating a Time-to-Digital Converter (TDC) in order to carry out precise time measurements, permitting the estimation of the TOF of the gamma particles for subsequent application in PET systems. First, the context of the application is introduced, justifying the need for the proposed system. The basic principles of PET and the state of the art of similar systems are then presented. Next, the principles of Time-of-Flight measurement on FPGAs are discussed and the adopted scheme is explained, going into detail on each of its parts. After the development, the initial time measurement results are presented, achieving time resolutions below 100 ps for multiple channels. Once characterized, the system is tested with a breast PET prototype whose detectors are based on Position Sensitive PhotoMultiplier Tubes (PSPMTs), performing TOF measurements for different scenarios. After this point, tests based on two Silicon Photomultiplier (SiPM) modules were carried out. SiPMs are immune to magnetic fields, among other advantages; this is an important feature since there is significant interest in combining PET and Magnetic Resonance (MR) imaging. Each of the two detector modules used is composed of a single crystal pixel. The electronic conditioning circuits are designed taking into account the parameters that most influence time resolution. After these results, an array of 144 SiPMs is tested, optimizing several parameters that directly impact system performance. Having demonstrated the system capabilities, an optimization process is devised. On the one hand, TDC measurements are enhanced to a precision of 40 ps. On the other hand, a coincidence algorithm is developed, which is responsible for identifying detector pairs that have registered an event within a certain time window. Finally, the conclusions of the Thesis and future work are presented, followed by the references. A list of publications and attended conferences is also provided.
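
    A plain software model of the coincidence step mentioned above, for illustration only (the timestamps, window width and pairing rule are assumed; the thesis implements this on the FPGA alongside the TDC), might look like:

```python
# Pair up single events from different detectors whose timestamps fall within
# a coincidence window; the stored time difference is the TOF estimate input.

def find_coincidences(events, window_ps=500):
    """events: list of (timestamp_ps, detector_id), assumed time-sorted."""
    pairs = []
    for i, (t_i, d_i) in enumerate(events):
        j = i + 1
        while j < len(events) and events[j][0] - t_i <= window_ps:
            if events[j][1] != d_i:
                pairs.append((i, j, events[j][0] - t_i))   # store time difference
            j += 1
    return pairs

singles = [(1_000, 0), (1_230, 1), (9_800, 1), (10_050, 0), (25_000, 0)]
print(find_coincidences(singles))   # -> [(0, 1, 230), (2, 3, 250)]
```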

  7. A FPGA-based Cluster Finder for CMOS Monolithic Active Pixel Sensors of the MIMOSA-26 Family

    NASA Astrophysics Data System (ADS)

    Li, Qiyan; Amar-Youcef, S.; Doering, D.; Deveaux, M.; Fröhlich, I.; Koziel, M.; Krebs, E.; Linnik, B.; Michel, J.; Milanovic, B.; Müntz, C.; Stroth, J.; Tischler, T.

    2014-06-01

    CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated excellent performance in the field of charged-particle tracking. Among their strong points are a single-point resolution of a few μm and a light material budget of 0.05% X0, in combination with good radiation tolerance and high rate capability. Those features make the sensors a valuable technology for the vertex detectors of various experiments in heavy-ion and particle physics. To reduce the load on the event builders and future mass storage systems, we have developed algorithms suited for preprocessing and reducing the data streams generated by the MAPS. This real-time processing employs the remaining free resources of the FPGAs in the readout controllers of the detector and complements the on-chip data reduction circuits of the MAPS.
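
    For illustration, here is a software analogue of the cluster-finding step; the FPGA firmware works on the sensors' streaming readout rather than on a hit list, so this is only a conceptual sketch with invented hit coordinates.

```python
# Group adjacent fired pixels into clusters and report each cluster's size
# and centre of gravity.

def find_clusters(hits):
    hits = set(hits)                      # {(row, col), ...} fired pixels
    clusters = []
    while hits:
        stack = [hits.pop()]
        cluster = []
        while stack:
            r, c = stack.pop()
            cluster.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (r + dr, c + dc) in hits:
                        hits.remove((r + dr, c + dc))
                        stack.append((r + dr, c + dc))
        rows, cols = zip(*cluster)
        clusters.append((len(cluster), sum(rows) / len(cluster), sum(cols) / len(cluster)))
    return clusters

print(find_clusters([(10, 10), (10, 11), (11, 10), (40, 7)]))
# e.g. [(3, 10.33, 10.33), (1, 40.0, 7.0)]  (values rounded here; order may vary)
```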

  8. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high-level design of high-performance implementations of signal processing algorithms on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real-time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high-performance structural hardware description languages with higher-level hardware languages in order to help satisfy the dual requirement of high-level design and high-performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at Queen's University Belfast. On the one hand, Handel-C has proven very useful in the rapid design and prototyping of FPGA circuits, especially control-intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated by the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned, efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  9. Critical Information Protection on FPGAs through Unique Device Specific Keys

    DTIC Science & Technology

    2011-09-01

    [Report excerpt: only table-of-contents and list-of-figures fragments are available, including Appendix B.1, "Analysis of Circuit DNA Entry Changes Across a Large Temperature Range", and Figure 1, "(a) An ideal transistor design. (b) SEM image of transistor".]

  10. FPGAs and HPC

    DTIC Science & Technology

    2007-01-01


  11. RHrFPGA Radiation-Hardened Re-programmable Field-Programmable Gate Array

    NASA Technical Reports Server (NTRS)

    Sanders, A. B.; LaBel, K. A.; McCabe, J. F.; Gardner, G. A.; Lintz, J.; Ross, C.; Golke, K.; Burns, B.; Carts, M. A.; Kim, H. S.

    2004-01-01

    Viewgraphs on the development of the Radiation-Hardened Re-programmable Field-Programmable Gate Array (RHrFPGA) are presented. The topics include: 1) Radiation Test Suite; 2) Testing Interface; 3) Test Configuration; 4) Facilities; 5) Test Programs; 6) Test Procedure; and 7) Test Results. A summary of heavy ion and proton testing is also included.

  12. Susceptibility of Redundant Versus Singular Clock Domains Implemented in SRAM-Based FPGA TMR Designs

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth A.; Pellish, Jonathan

    2016-01-01

    We present the challenges that arise when using redundant clock domains due to their clock-skew. Radiation data show that a singular clock domain (DTMR) provides an improved TMR methodology for SRAM-based FPGAs over redundant clocks.

  13. Tradeoffs in Flight Design Upset Mitigation in State of the Art FPGAs: Hardened by Design vs. Design Level Hardening

    NASA Technical Reports Server (NTRS)

    Swift, Gary M.; Roosta, Ramin

    2004-01-01

    This presentation compares and contrasts the effectiveness and the system/designer impacts of the two main approaches to upset hardening: the Actel approach (RTSX-S and RTAX-S) of low-level (inside each flip-flop) triplication and the Xilinx approach (Virtex and Virtex2) of design-level triplication of both functional blocks and voters. The effectiveness of these approaches is compared using measurements made in conjunction with each FPGA manufacturer: for Actel, published data [1], and for Xilinx, recent results from the Xilinx SEE Test Consortium (note that the author is an active and founding member). The impacts involve Actel advantages in the areas of transistor-utilization efficiency and minimizing designer involvement in the triplication, while the Xilinx advantages relate to the ability to custom-tailor upset hardness and the flexibility of re-configurability. Additionally, there are currently clear Xilinx advantages in available features such as the number of I/Os, logic cells, and RAM blocks, as well as speed. However, the advantage of the Actel anti-fuses for configuration over the Xilinx SRAM cells is that the latter need additional functionality and external circuitry (PROMs and, at least, a watchdog timer) for configuration and configuration scrubbing. Further, although effectively mitigated if done correctly, the proton-upset susceptibility of the Xilinx FPGAs is a concern in severe proton-rich environments. Ultimately, both manufacturers' upset hardening is limited by SEFI (single-event functional interrupt) rates, where it appears the Actel results are better, although the Xilinx Virtex2-family result of about one SEFI in 65 device-years in solar-min GCR (the more intense part of the galactic cosmic-ray background) should be acceptable to most missions.

  14. Design and evaluation of online arithmetic for signal processing applications on FPGAs

    NASA Astrophysics Data System (ADS)

    Galli, Reto; Tenca, Alexandre F.

    2001-11-01

    This paper shows the design and evaluation of on-line arithmetic modules for the most common operators used in DSP applications, using FPGAs as the target technology. The designs are highly optimized for the target technology and the common range of precision in DSP. The results are based on experimental data collected using CAD tools. All designs are synthesized for the same type of device (Xilinx XC4000) for comparison, avoiding rough estimates of system performance and generating a more reliable and detailed comparison of on-line signal processing solutions with other state-of-the-art approaches, such as distributed arithmetic. We show that on-line designs struggle in basic DSP applications that use only addition and multiplication. However, we also show that on-line designs are able to overtake other approaches as the applications become more sophisticated, e.g., when data dependencies exist, or when non-constant multiplicands restrict the use of other approaches.

  15. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  16. RPython high-level synthesis

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler takes the configuration parameters and maps an RPython program to VHDL. The VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software as a result of omitting the fetch-decode-execute operations of General Purpose Processors (GPPs) and introducing more parallel computation. This can be exploited by utilizing many resources at the same time. Creating parallel algorithms computed with FPGAs in pure HDL is difficult and time consuming. Implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes the design methodologies and tools, the implementation and the first results of the created VHDL backend for the RPython compiler.

  17. Custom instruction set NIOS-based OFDM processor for FPGAs

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Sunkara, Divya; Castillo, Encarnacion; Garcia, Antonio

    2006-05-01

    Orthogonal Frequency Division Multiplexing (OFDM), a spread-spectrum technique sometimes also called multi-carrier or discrete multi-tone modulation, is used in bandwidth-efficient communication systems in the presence of channel distortion. The benefits of OFDM are high spectral efficiency, resiliency to RF interference, and lower multi-path distortion. OFDM is the basis for the European digital audio broadcasting (DAB) standard, the global asymmetric digital subscriber line (ADSL) standard, and the IEEE 802.11 5.8 GHz band standard, and it is part of ongoing development in wireless local area networks. The modulator and demodulator in an OFDM system can be implemented with a parallel bank of filters based on the discrete Fourier transform (DFT); when the number of subchannels is large (e.g., K > 25), the OFDM system is efficiently implemented by using the fast Fourier transform (FFT) to compute the DFT. We have developed a custom FPGA-based Altera NIOS system to increase performance, programmability, and low-power operation in mobile wireless systems. The overall gain observed for a 1024-point FFT ranges between a factor of 3 and 16, depending on the multiplier used by the NIOS processor. A careful optimization described in the appendix yields a performance gain of up to 77% when compared with our preliminary results.
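
    The FFT-based implementation mentioned above can be illustrated numerically: for a large number of subcarriers the modulator/demodulator bank collapses to an IFFT/FFT pair. The subcarrier count, QPSK mapping and cyclic-prefix length below are arbitrary choices for the sketch, not parameters of the NIOS design.

```python
import numpy as np

# OFDM round trip over an ideal channel: IFFT modulator, cyclic prefix,
# prefix removal, FFT demodulator recovers the subcarrier symbols exactly.

K = 64                                                    # subcarriers
rng = np.random.default_rng(1)
qpsk = (rng.integers(0, 2, K) * 2 - 1) + 1j * (rng.integers(0, 2, K) * 2 - 1)

tx = np.fft.ifft(qpsk)                                    # modulator: filter bank == IFFT
tx_cp = np.concatenate([tx[-16:], tx])                    # cyclic prefix against multipath
rx = np.fft.fft(tx_cp[16:])                               # demodulator: FFT after CP removal

print(np.allclose(rx, qpsk))                              # True over an ideal channel
```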

  18. A preliminary study of molecular dynamics on reconfigurable computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolinski, C.; Trouw, F. R.; Gokhale, M.

    2003-01-01

    In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique to track interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) [11] and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 GFlop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
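
    The computation being accelerated is the classical MD loop: evaluate pairwise forces, then integrate the equations of motion. Below is a minimal velocity-Verlet sketch with a toy Lennard-Jones force and arbitrary units; it illustrates the numerical kernel only and is not the FG-generated fabric.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces: F = 24*eps*(2*(s/r)^12 - (s/r)^6) * r_vec / r^2."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            s6 = (sigma ** 2 / d2) ** 3
            fij = 24 * eps * (2 * s6 ** 2 - s6) / d2 * r
            f[i] += fij
            f[j] -= fij
    return f

def verlet_step(pos, vel, dt=1e-3, mass=1.0):
    """One velocity-Verlet integration step of the equations of motion."""
    f = lj_forces(pos)
    vel_half = vel + 0.5 * dt * f / mass
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * lj_forces(pos_new) / mass
    return pos_new, vel_new

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
vel = np.zeros_like(pos)
for _ in range(10):
    pos, vel = verlet_step(pos, vel)
print(pos[1])
```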

  19. GateKeeper: a new hardware architecture for accelerating pre-alignment in DNA short read mapping.

    PubMed

    Alser, Mohammed; Hassan, Hasan; Xin, Hongyi; Ergin, Oguz; Mutlu, Onur; Alkan, Can

    2017-11-01

    High throughput DNA sequencing (HTS) technologies generate an excessive number of small DNA segments (called short reads) that cause significant computational burden. To analyze the entire genome, each of the billions of short reads must be mapped to a reference genome based on the similarity between a read and 'candidate' locations in that reference genome. The similarity measurement, called alignment, formulated as an approximate string matching problem, is the computational bottleneck because: (i) it is implemented using quadratic-time dynamic programming algorithms and (ii) the majority of candidate locations in the reference genome do not align with a given read due to high dissimilarity. Calculating the alignment of such incorrect candidate locations consumes an overwhelming majority of a modern read mapper's execution time. Therefore, it is crucial to develop a fast and effective filter that can detect incorrect candidate locations and eliminate them before invoking computationally costly alignment algorithms. We propose GateKeeper, a new hardware accelerator that functions as a pre-alignment step that quickly filters out most incorrect candidate locations. GateKeeper is the first design to accelerate pre-alignment using Field-Programmable Gate Arrays (FPGAs), which can perform pre-alignment much faster than software. When implemented on a single FPGA chip, GateKeeper maintains high accuracy (on average >96%) while providing, on average, 90-fold and 130-fold speedup over the state-of-the-art software pre-alignment techniques, Adjacency Filter and Shifted Hamming Distance (SHD), respectively. The addition of GateKeeper as a pre-alignment step can reduce the verification time of the mrFAST mapper by a factor of 10. Availability: https://github.com/BilkentCompGen/GateKeeper. Contact: mohammedalser@bilkent.edu.tr or onur.mutlu@inf.ethz.ch or calkan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online.
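
    The pre-alignment idea can be illustrated with a much simpler software stand-in than GateKeeper's bit-parallel architecture: count mismatches between the read and a candidate window and reject the candidate as soon as the count exceeds the error budget, so the expensive dynamic-programming alignment is never invoked for it. This toy counts Hamming mismatches only; the actual SHD-style filters also tolerate shifts caused by indels.

```python
# Early-exit mismatch counting as a pre-alignment filter (illustrative only).

def passes_prefilter(read, ref_window, max_errors=2):
    mismatches = 0
    for a, b in zip(read, ref_window):
        if a != b:
            mismatches += 1
            if mismatches > max_errors:
                return False         # reject: skip full dynamic-programming alignment
    return True

read = "ACGTACGTACGT"
candidates = ["ACGTACGTACGT", "ACGTTCGTACGA", "TTTTACGTACGT"]
print([passes_prefilter(read, c) for c in candidates])   # [True, True, False]
```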

  20. A hybrid short read mapping accelerator

    PubMed Central

    2013-01-01

    Background The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains due to their outstanding parallel data processing capabilities, making them a competitive platform to solve problems that are “inherently parallel”. Results We present a hybrid system for short read mapping utilizing both FPGA-based hardware and CPU-based software. The computation intensive alignment and the seed generation operations are mapped onto an FPGA. We present a computationally efficient, parallel block-wise alignment structure (Align Core) to approximate the conventional dynamic programming algorithm. The performance is compared to the multi-threaded CPU-based GASSST and BWA software implementations. For single-end alignment, our hybrid system achieves faster processing speed than GASSST (with a similar sensitivity) and BWA (with a higher sensitivity); for pair-end alignment, our design achieves a slightly worse sensitivity than that of BWA but has a higher processing speed. Conclusions This paper shows that our hybrid system can effectively accelerate the mapping of short reads to a reference genome based on the seed-and-extend approach. The performance comparison to the GASSST and BWA software implementations under different conditions shows that our hybrid design achieves a high degree of sensitivity and requires less overall execution time with only modest FPGA resource utilization. Our hybrid system design also shows that the performance bottleneck for the short read mapping problem can be changed from the alignment stage to the seed generation stage, which provides an additional requirement for the future development of short read aligners. PMID:23441908

  1. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies aimed at enabling NASA's ability to explore beyond low Earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low-temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project shifts its focus to developing low-power, high-efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability gives avionic architectures the ability to develop FPGA-based, radiation-tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for the Federal fiscal year of 2010 are: Silicon-Germanium (SiGe) Integrated Electronics for Extreme Environments, Modeling of Radiation Effects on Electronics, Radiation Hardened High Performance Processors (HPP), and Reconfigurable Computing.

  2. Single Event Effects in FPGA Devices 2015-2016

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan

    2016-01-01

    This presentation provides an overview of single event effects in FPGA devices 2015-2016, including commercial Xilinx V5 heavy ion accelerated testing, Xilinx Kintex-7 heavy ion accelerated testing, mitigation study, and investigation of various types of triple modular redundancy (TMR) for commercial SRAM-based FPGAs.

  3. Single Event Effects in FPGA Devices 2014-2015

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth A.; Pellish, Jonathan

    2015-01-01

    This presentation provides an overview of single event effects in FPGA devices 2014-2015, including commercial Xilinx V5 heavy ion accelerated testing, Xilinx Kintex-7 heavy ion accelerated testing, mitigation study, and investigation of various types of triple modular redundancy (TMR) for commercial SRAM-based FPGAs.

  4. Single Event Effects in FPGA Devices 2015-2016

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan

    2016-01-01

    This presentation provides an overview of single event effects in FPGA devices 2015-2016, including commercial Xilinx V5 heavy ion accelerated testing, Xilinx Kintex-7 heavy ion accelerated testing, mitigation study, and investigation of various types of triple modular redundancy (TMR) for commercial SRAM-based FPGAs.

  5. Field programmable gate arrays: Evaluation report for space-flight application

    NASA Technical Reports Server (NTRS)

    Sandoe, Mike; Davarpanah, Mike; Soliman, Kamal; Suszko, Steven; Mackey, Susan

    1992-01-01

    Field Programmable Gate Arrays, commonly called FPGAs, are the newer generation of field programmable devices and offer more flexibility in the logic modules they incorporate and in how they are interconnected. The flexibility, the number of logic building blocks available, and the high gate densities achievable are why users find FPGAs attractive. These attributes are important in reducing product development costs and shortening the development cycle. The aerospace community is interested in incorporating this new generation of field programmable technology in space applications. To this end, a consortium was formed to evaluate the quality, reliability, and radiation performance of FPGAs. This report presents the test results on FPGA parts provided by ACTEL Corporation.

  6. A Programmable and Configurable Mixed-Mode FPAA SoC

    DTIC Science & Technology

    2016-03-17

    [Report-form excerpt] Abstract: The authors present a Floating-Gate (FG) based, System-on-Chip (SoC), large-scale Field-Programmable Analog Array IC that integrates divergent concepts... Keywords: Floating-Gate, SoC, Command Word Classification.

  7. The Effects of Race Conditions when Implementing Single-Source Redundant Clock Trees in Triple Modular Redundant Synchronous Architectures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth A.; Pellish, Jonathan

    2016-01-01

    We present the challenges that arise when using redundant clock domains due to their clock-skew. Heavy-ion radiation data show that a singular clock domain (DTMR) provides an improved TMR methodology for SRAM-based FPGAs over redundant clocks.

  8. Lessons learnt from a three-year pilot field epidemiology training programme.

    PubMed

    Hoy, Damian; Durand, A Mark; Hancock, Thane; Cash, Haley L; Hardie, Kate; Paterson, Beverley; Paulino, Yvette; White, Paul; Merritt, Tony; Fitzgibbons, Dawn; Gopalani, Sameer Vali; Flint, James; Edwin A Merilles, Onofre; Kashiwabara, Mina; Biaukula, Viema; Lepers, Christelle; Souares, Yvan; Nilles, Eric; Batikawai, Anaseini; Huseynova, Sevil; Patel, Mahomed; Saketa, Salanieta T; Durrheim, David; Henderson, Alden; Roth, Adam

    2017-01-01

    The Pacific region has widely dispersed populations, limited financial and human resources and a high burden of disease. There is an urgent need to improve the availability, reliability and timeliness of useable health data. The purpose of this paper is to share lessons learnt from a three-year pilot field epidemiology training programme that was designed to respond to these Pacific health challenges. The pilot programme built on and further developed an existing field epidemiology training programme for Pacific health staff. The programme was delivered in country by epidemiologists working for Pacific Public Health Surveillance Network partners. The programme consisted of five courses: four one-week classroom-based courses and one field epidemiology project. Sessions were structured so that theoretical understanding was achieved through interaction and reinforced through practical hands-on group activities, case studies and other interactive practical learning methods. As of September 2016, 258 students had commenced the programme. Twenty-six course workshops were delivered and one cohort of students had completed the full five-course programme. The programme proved popular and gained a high level of student engagement. Face-to-face delivery, a low student-to-facilitator ratio, substantial group work and practical exercises were identified as key factors that contributed to the students developing skills and confidence. Close engagement of leaders and the need to quickly evaluate and adapt the curriculum were important lessons, and the collaboration between external partners was considered important for promoting a harmonized approach to health needs in the Pacific.

  9. Detection of Visual Field Loss in Pituitary Disease: Peripheral Kinetic Versus Central Static

    PubMed Central

    Rowe, Fiona J.; Cheyne, Christopher P.; García-Fiñana, Marta; Noonan, Carmel P.; Howard, Claire; Smith, Jayne; Adeoye, Joanne

    2015-01-01

    Visual field assessment is an important clinical evaluation for eye disease and neurological injury. We evaluated Octopus semi-automated kinetic peripheral perimetry (SKP) and Humphrey static automated central perimetry for detection of neurological visual field loss in patients with pituitary disease. We carried out a prospective cross-sectional diagnostic accuracy study comparing the Humphrey central 30-2 SITA threshold programme with a screening protocol for SKP on Octopus perimetry. Humphrey 24-2 data were extracted from 30-2 results. Results were independently graded for presence/absence of field defect plus severity of defect. Fifty patients (100 eyes) were recruited (25 males and 25 females), with mean age of 52.4 years (SD = 15.7). Order of perimeter assessment (Humphrey/Octopus first) and order of eye tested (right/left first) were randomised. The 30-2 programme detected visual field loss in 85%, the 24-2 programme in 80%, and the Octopus combined kinetic/static strategy in 100% of eyes. Peripheral visual field loss was missed by central threshold assessment. Qualitative comparison of type of visual field defect demonstrated a match between Humphrey and Octopus results in 58%, with a match for severity of defect in 50%. Test duration was 9.34 minutes (SD = 2.02) for the Humphrey 30-2 versus 10.79 minutes (SD = 4.06) for Octopus perimetry. Octopus semi-automated kinetic perimetry was found to be superior to central static testing for detection of pituitary disease-related visual field loss. Where reliant on Humphrey central static perimetry, the 30-2 programme is recommended over the 24-2 programme. Where kinetic perimetry is available, this is preferable to central static programmes for increased detection of peripheral visual field loss. PMID:27928344

  10. A neuro-inspired spike-based PID motor controller for multi-motor robots with low cost FPGAs.

    PubMed

    Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J; Paz-Vicente, Rafael; Civit-Balcells, Anton

    2012-01-01

    In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented for FPGAs. This controller is focused on controlling DC motor speed, but uses only spikes for information representation, processing and DC motor driving. It could be applied to other motors with proper driver adaptation. This controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), implementing a bridge between robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time and implementing a massively parallel information processing system through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform designed in our labs that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of spike-based controllers, and hardware synthesis indicates low hardware requirements that allow replicating this controller in a large number of parallel controllers working together to achieve real-time robot control.
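
    For reference, a plain discrete-time PID sketch of the control law that the spike-based circuits approximate is shown below; in the actual system the error, integral and derivative contributions are represented and processed as spike streams, and the gains and the first-order motor model used here are hypothetical.

```python
# Discrete PID loop driving a toy first-order DC-motor speed model: w' = (K*u - w)/tau.

def pid_step(error, state, kp=2.0, ki=20.0, kd=0.001, dt=0.001):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

w, state, target = 0.0, (0.0, 0.0), 100.0
for _ in range(2000):
    u, state = pid_step(target - w, state)
    w += 0.001 * (5.0 * u - w) / 0.05      # motor model with K = 5.0, tau = 0.05 s
print(round(w, 1))                         # settles near the 100 rad/s set point
```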

  11. A Neuro-Inspired Spike-Based PID Motor Controller for Multi-Motor Robots with Low Cost FPGAs

    PubMed Central

    Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J.; Paz-Vicente, Rafael; Civit-Balcells, Anton

    2012-01-01

    In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented for FPGAs. This controller is focused on controlling DC motor speed, but uses only spikes for information representation, processing and DC motor driving. It could be applied to other motors with proper driver adaptation. This controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), implementing a bridge between robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time and implementing a massively parallel information processing system through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform designed in our labs that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of spike-based controllers, and hardware synthesis indicates low hardware requirements that allow replicating this controller in a large number of parallel controllers working together to achieve real-time robot control. PMID:22666004

  12. Impact of Magnetic Field on Pressures of Programmable Cerebrospinal Fluid Shunts: An Experimental Study.

    PubMed

    Altun, Idiris; Yuksel, Kasim Zafer; Mert, Tufan

    2017-01-01

    To investigate whether programmable cerebrospinal fluid (CSF) shunts are influenced by exposure to the magnetic field and to compare the effects of magnetic field in 4 different brands of programmable CSF shunts. This experimental study was performed in the laboratory using a novel design of magnetic field. Four types of programmable CSF shunts (Miethke®, Medtronic®, Sophysa® and Codman®Hakim®) were exposed to the magnetic field generated by an apparatus consisting of Helmholtz coil for 5 minutes. In every CSF shunt, initial pressures were adjusted to 110 mm H2O and pressures after exposure to magnetic field were noted. These measurements were implemented at frequencies of 5 Hz, 20 Hz, 30 Hz, 40 Hz, 60 Hz and 80 Hz. In each type, three shunts were utilized and evaluations were made twice for every shunt. At 5, 30, 40 and 60 Hz, Groups 1, 2 and 3 had significantly higher average pressures than Group 4. At 20 and 80 Hz, Groups 1 and 2 had notably different pressure values than Groups 3 and 4. Group 3 displayed the highest pressure, while Group 4 demonstrated the lowest pressure. Exposure to magnetic fields may affect the pressures of programmable CSF shunts. However, further controlled, clinical trials are warranted to elucidate the in-vivo effects of magnetic field exposure.

  13. Lessons learnt from a three-year pilot field epidemiology training programme

    PubMed Central

    Durand, A Mark; Hancock, Thane; Cash, Haley L; Hardie, Kate; Paterson, Beverley; Paulino, Yvette; White, Paul; Merritt, Tony; Fitzgibbons, Dawn; Gopalani, Sameer Vali; Flint, James; Edwin A Merilles, Onofre; Kashiwabara, Mina; Biaukula, Viema; Lepers, Christelle; Souares, Yvan; Nilles, Eric; Batikawai, Anaseini; Huseynova, Sevil; Patel, Mahomed; Saketa, Salanieta T; Durrheim, David; Henderson, Alden; Roth, Adam

    2017-01-01

    Problem The Pacific region has widely dispersed populations, limited financial and human resources and a high burden of disease. There is an urgent need to improve the availability, reliability and timeliness of useable health data. Context The purpose of this paper is to share lessons learnt from a three-year pilot field epidemiology training programme that was designed to respond to these Pacific health challenges. The pilot programme built on and further developed an existing field epidemiology training programme for Pacific health staff. Action The programme was delivered in country by epidemiologists working for Pacific Public Health Surveillance Network partners. The programme consisted of five courses: four one-week classroom-based courses and one field epidemiology project. Sessions were structured so that theoretical understanding was achieved through interaction and reinforced through practical hands-on group activities, case studies and other interactive practical learning methods. Outcome As of September 2016, 258 students had commenced the programme. Twenty-six course workshops were delivered and one cohort of students had completed the full five-course programme. The programme proved popular and gained a high level of student engagement. Discussion Face-to-face delivery, a low student-to-facilitator ratio, substantial group work and practical exercises were identified as key factors that contributed to the students developing skills and confidence. Close engagement of leaders and the need to quickly evaluate and adapt the curriculum were important lessons, and the collaboration between external partners was considered important for promoting a harmonized approach to health needs in the Pacific. PMID:29051838

  14. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient, it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for the full-scale, 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  15. Development of a 64 channel ultrasonic high frequency linear array imaging system.

    PubMed

    Hu, ChangHong; Zhang, Lequan; Cannata, Jonathan M; Yen, Jesse; Shung, K Kirk

    2011-12-01

    In order to improve the lateral resolution and extend the field of view of a previously reported 48 element 30 MHz ultrasound linear array and 16-channel digital imaging system, the development of a 256 element 30 MHz linear array and an ultrasound imaging system with increased channel count has been undertaken. This paper reports the design and testing of a 64 channel digital imaging system which consists of an analog front-end pulser/receiver, 64 channels of Time-Gain Compensation (TGC), 64 channels of high-speed digitizers, and a beamformer. A Personal Computer (PC) is used as the user interface to display real-time images. This system is designed as a platform for testing the performance of high frequency linear arrays developed in house, so conventional approaches were taken in its implementation. Flexibility and ease of use are of primary concern, whereas cost-effectiveness and novelty in design are only secondary considerations. Even so, there are many issues that arise at higher frequencies, but not at lower frequencies, that need to be solved. The system provides 64 channels of excitation pulsers while receiving simultaneously at a 20-120 MHz sampling rate with 12-bit resolution. The digitized data from all channels are first fed through Field Programmable Gate Arrays (FPGAs), and then stored in memories. These raw data are accessed by the beamforming processor to rebuild the image or to be downloaded to the PC for further processing. The beamformer that applies delays to the echoes of each channel is implemented with a strategy that combines coarse (8.3 ns) and fine (2 ns) delays. The coarse delays are integer multiples of the sampling clock period and are achieved by controlling the write enable pin of the First-In-First-Out (FIFO) memory to obtain valid beamforming data. The fine delays are accomplished with interpolation filters. This system is capable of achieving a maximum frame rate of 50 frames per second. Wire phantom images acquired with this system show a spatial resolution of 146 μm (lateral) and 54 μm (axial). Images of excised rabbit and pig eyeballs as well as a mouse embryo were also acquired to demonstrate its imaging capability. Copyright © 2011 Elsevier B.V. All rights reserved.
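
    The coarse-plus-fine delay idea can be sketched numerically as follows; the sample rate is chosen to match the 8.3 ns coarse step quoted above (1/120 MHz), while the filter length, window and echo waveform are invented for the example and do not reflect the actual interpolation filters used in the system.

```python
import numpy as np

# Coarse delay = integer samples (here via np.roll); fine delay = windowed-sinc
# fractional-delay filter for the sub-sample remainder. Illustrative only.

def apply_delay(signal, delay_samples, ntaps=8):
    coarse = int(np.floor(delay_samples))
    frac = delay_samples - coarse
    n = np.arange(ntaps) - (ntaps - 1) // 2
    h = np.sinc(n - frac) * np.hamming(ntaps)      # fractional-delay interpolator
    h /= h.sum()
    return np.roll(np.convolve(signal, h, mode="same"), coarse)

fs = 120e6                                          # 1/fs = 8.3 ns coarse step
t = np.arange(256) / fs
echo = np.cos(2 * np.pi * 30e6 * t) * np.exp(-((t - 1e-6) ** 2) / (2 * (0.1e-6) ** 2))
aligned = apply_delay(echo, delay_samples=3.24)     # 3 coarse steps + 0.24 sample (~2 ns)
print(np.argmax(np.abs(echo)), "->", np.argmax(np.abs(aligned)))  # peak shifts by ~3 samples
```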

  16. Development of a 64 channel ultrasonic high frequency linear array imaging system

    PubMed Central

    Hu, ChangHong; Zhang, Lequan; Cannata, Jonathan M.; Yen, Jesse; Shung, K. Kirk

    2011-01-01

    In order to improve the lateral resolution and extend the field of view of a previously reported 48 element 30 MHz ultrasound linear array and 16-channel digital imaging system, the development of a 256 element 30 MHz linear array and an ultrasound imaging system with increased channel count has been undertaken. This paper reports the design and testing of a 64 channel digital imaging system which consists of an analog front-end pulser/receiver, 64 channels of Time-Gain Compensation (TGC), 64 channels of high-speed digitizers, and a beamformer. A Personal Computer (PC) is used as the user interface to display real-time images. This system is designed as a platform for testing the performance of high frequency linear arrays developed in house, so conventional approaches were taken in its implementation. Flexibility and ease of use are of primary concern, whereas cost-effectiveness and novelty in design are only secondary considerations. Even so, there are many issues that arise at higher frequencies, but not at lower frequencies, that need to be solved. The system provides 64 channels of excitation pulsers while receiving simultaneously at a 20 MHz-120 MHz sampling rate with 12-bit resolution. The digitized data from all channels are first fed through Field Programmable Gate Arrays (FPGAs), and then stored in memories. These raw data are accessed by the beamforming processor to rebuild the image or to be downloaded to the PC for further processing. The beamformer that applies delays to the echoes of each channel is implemented with a strategy that combines coarse (8.3 ns) and fine (2 ns) delays. The coarse delays are integer multiples of the sampling clock period and are achieved by controlling the write enable pin of the First-In-First-Out (FIFO) memory to obtain valid beamforming data. The fine delays are accomplished with interpolation filters. This system is capable of achieving a maximum frame rate of 50 frames per second. Wire phantom images acquired with this system show a spatial resolution of 146 μm (lateral) and 54 μm (axial). Images of excised rabbit and pig eyeballs as well as a mouse embryo were also acquired to demonstrate its imaging capability. PMID:21684568

  17. International Field Experience--What Do Student Teachers Learn?

    ERIC Educational Resources Information Center

    Lee, Jackie Fung King

    2011-01-01

    This inquiry aimed to examine the benefits of having international field experience for a group of Hong Kong postgraduate student teachers who joined a six-week immersion programme in New Zealand. Through participants' reflections, interviews and programme evaluations, the present investigation found that the overseas field experience not only…

  18. The new front-end electronics for the ATLAS Tile Calorimeter Phase 2 Upgrade

    NASA Astrophysics Data System (ADS)

    Gomes, A.

    2016-02-01

    We present the plans, design, and performance results to date for the new front-end electronics being developed for the Phase 2 Upgrade of the ATLAS Tile Calorimeter. The front-end electronics will be replaced to address the increased luminosity at the HL-LHC around 2025, as well as to upgrade to faster, more modern components with higher radiation tolerance. The new electronics will operate dead-timelessly, pushing full data sets from each beam crossing to the data acquisition system that resides off-detector. The new on-detector electronics contains five main parts: the front-end boards that connect directly to the photomultiplier tubes; the Main Boards that digitize the data; the Daughter Boards that collect the data streams and contain the high speed optical communication links for writing data to the data acquisition system; a programmable high voltage control system; and a new low voltage power supply. There are different options for implementing these subcomponents, which will be described. The new system contains new features that, in the current version, include power system redundancy, data collection redundancy, data transmission redundancy with 2 QSFP optical transceivers, and Kintex-7 FPGAs with a firmware scheme enhanced for single event upset mitigation. To date, we have built a Demonstrator—a fully functional prototype of the new system. Performance results and plans are presented.

  19. Analog Module Architecture for Space-Qualified Field-Programmable Mixed-Signal Arrays

    NASA Technical Reports Server (NTRS)

    Edwards, R. Timothy; Strohbehn, Kim; Jaskulek, Steven E.; Katz, Richard

    1999-01-01

    Spacecraft require all manner of both digital and analog circuits. Onboard digital systems are constructed almost exclusively from field-programmable gate array (FPGA) circuits providing numerous advantages over discrete design including high integration density, high reliability, fast turn-around design cycle time, lower mass, volume, and power consumption, and lower parts acquisition and flight qualification costs. Analog and mixed-signal circuits perform tasks ranging from housekeeping to signal conditioning and processing. These circuits are painstakingly designed and built using discrete components due to a lack of options for field-programmability. FPAA (Field-Programmable Analog Array) and FPMA (Field-Programmable Mixed-signal Array) parts exist but not in radiation-tolerant technology and not necessarily in an architecture optimal for the design of analog circuits for spaceflight applications. This paper outlines an architecture proposed for an FPAA fabricated in an existing commercial digital CMOS process used to make radiation-tolerant antifuse-based FPGA devices. The primary concerns are the impact of the technology and the overall array architecture on the flexibility of programming, the bandwidth available for high-speed analog circuits, and the accuracy of the components for high-performance applications.

  20. Wire like link for cycle reproducible and cycle accurate hardware accelerator

    DOEpatents

    Asaad, Sameh; Kapur, Mohit; Parker, Benjamin D

    2015-04-07

    First and second field programmable gate arrays are provided which implement first and second blocks of a circuit design to be simulated. The field programmable gate arrays are operated at a first clock frequency and a wire like link is provided to send a plurality of signals between them. The wire like link includes a serializer, on the first field programmable gate array, to serialize the plurality of signals; a deserializer on the second field programmable gate array, to deserialize the plurality of signals; and a connection between the serializer and the deserializer. The serializer and the deserializer are operated at a second clock frequency, greater than the first clock frequency, and the second clock frequency is selected such that latency of transmission and reception of the plurality of signals is less than the period corresponding to the first clock frequency.
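    The timing constraint described in this record (the serializer/deserializer clock must be fast enough that serialization, transmission and deserialization all complete within one design-clock period) can be checked with a short numerical sketch. All numbers and the pipeline-overhead term below are assumptions for illustration, not values from the patent.

```python
def link_fits_in_cycle(n_signals, design_clk_hz, serdes_clk_hz, fixed_latency_cycles=4):
    """Return True if serializing n_signals, sending them, and deserializing
    them takes less than one design-clock period.
    fixed_latency_cycles is an assumed SERDES pipeline overhead."""
    design_period = 1.0 / design_clk_hz
    serdes_period = 1.0 / serdes_clk_hz
    link_latency = (n_signals + fixed_latency_cycles) * serdes_period
    return link_latency < design_period

# Example: 32 signals, 1 MHz emulation clock, 100 MHz SERDES clock (all assumed).
print(link_fits_in_cycle(32, 1e6, 100e6))   # True: 0.36 us of link latency < 1 us period
```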

  1. Field programmable chemistry: integrated chemical and electronic processing of informational molecules towards electronic chemical cells.

    PubMed

    Wagler, Patrick F; Tangen, Uwe; Maeke, Thomas; McCaskill, John S

    2012-07-01

    The topic addressed is that of combining self-constructing chemical systems with electronic computation to form unconventional embedded computation systems performing complex nano-scale chemical tasks autonomously. The hybrid route to complex programmable chemistry, and ultimately to artificial cells based on novel chemistry, requires a solution of the two-way massively parallel coupling problem between digital electronics and chemical systems. We present a chemical microprocessor technology and show how it can provide a generic programmable platform for complex molecular processing tasks in Field Programmable Chemistry, including steps towards the grand challenge of constructing the first electronic chemical cells. Field programmable chemistry employs a massively parallel field of electrodes, under the control of latched voltages, which are used to modulate chemical activity. We implement such a field programmable chemistry which links to chemistry in rather generic, two-phase microfluidic channel networks that are separated into weakly coupled domains. Electric fields, produced by the high-density array of electrodes embedded in the channel floors, are used to control the transport of chemicals across the hydrodynamic barriers separating domains. In the absence of electric fields, separate microfluidic domains are essentially independent with only slow diffusional interchange of chemicals. Electronic chemical cells, based on chemical microprocessors, exploit a spatially resolved sandwich structure in which the electronic and chemical systems are locally coupled through homogeneous fine-grained actuation and sensor networks and play symmetric and complementary roles. We describe how these systems are fabricated, experimentally test their basic functionality, simulate their potential (e.g. for feed forward digital electrophoretic (FFDE) separation) and outline the application to building electronic chemical cells. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. EHWPACK: An evolvable hardware environment using the SPICE simulator and the Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Keymeulen, D.; Klimeck, G.; Zebulum, R.; Stoica, A.; Jin, Y.; Lazaro, C.

    2000-01-01

    This paper describes the EHW development system, a tool that performs the evolutionary synthesis of electronic circuits, using the SPICE simulator and the Field Programmable Transistor Array hardware (FPTA) developed at JPL.

  3. Professional Field in the Accreditation Process: Examining Information Technology Programmes at Dutch Universities of Applied Sciences

    ERIC Educational Resources Information Center

    Frederik, Hans; Hasanefendic, Sandra; van der Sijde, Peter

    2017-01-01

    In this paper, we analyse 53 Dutch accreditation reports in the field of information technology to assess the mechanisms of the reported involvement of the professional field in the undergraduate programmes of universities of applied sciences. The results of qualitative content analysis reveal a coupling effect in reporting on mechanisms of…

  4. Promoting Field Trip Confidence: Teachers Providing Insights for Pre-Service Education

    ERIC Educational Resources Information Center

    Ateskan, Armagan; Lane, Jennie F.

    2016-01-01

    Pre-service teachers need experiences in practical matters as a part of field trip preparations programmes. For 14 years, a private, non-profit university in Turkey has involved pre-service teachers in field trip planning, implementation and evaluation. A programme assessment was conducted through a case study to examine the long-term effects of…

  5. Automated Design of Board and MCM Level Digital Systems.

    DTIC Science & Technology

    1997-10-01

    Partitioning for Multicomponent Synthesis 159 Appendix K: Resource Constrained RTL Partitioning for Synthesis of Multi- FPGA Designs 169 Appendix L...digital signal processing) ar- chitectures. These target architectures, illustrated in Figure 1, can contain application-specific ASICS, FPGAs ...synthesis tools for ASIC, FPGA and MCM synthesis (Figure 8). Multicomponent Partitioning Engine The par- titioning engine is a hierarchical partitioning

  6. Scalable System Design for Covert MIMO Communications

    DTIC Science & Technology

    2014-06-01

    Sample based resolution of the QRD and equalization processes in the MIMO receiver, for NQR = 11...55 5.1 NQR calculation parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 5.2 Resources available on Xilinx Virtex-7 FPGAs...carried out for Na ∈ [2 3 4]. Extrapolation is used to determine trends as a function of the number of QRD blocks instantiated NQR and Na. This section

  7. Measuring interdisciplinary research and education outcomes in the Vienna Doctoral Programme on Water Resource Systems

    NASA Astrophysics Data System (ADS)

    Carr, Gemma; Loucks, Daniel Pete; Blaschke, Alfred Paul; Bucher, Christian; Farnleitner, Andreas; Fürnkranz-Prskawetz, Alexia; Parajka, Juraj; Pfeifer, Norbert; Rechberger, Helmut; Wagner, Wolfgang; Zessner, Matthias; Blöschl, Günter

    2015-04-01

    The interdisciplinary postgraduate research and education programme - the Vienna Doctoral Programme on Water Resource Systems - was initiated in 2009. To date, 35 research students, three post-docs and ten faculty members have been engaged in the Programme, from ten research fields (aquatic microbiology, hydrology, hydro-climatology, hydro-geology, mathematical economics, photogrammetry, remote sensing, resource management, structural mechanics, and water quality). The Programme aims to develop research students with the capacity to work across the disciplines, to conduct cutting edge research and foster an international perspective. To do this, a variety of mechanisms are adopted that include research cluster groups, joint study sites, joint supervision, a basic study programme and a research semester abroad. The Programme offers a unique case study to explore if and how these mechanisms lead to research and education outcomes. Outcomes are grouped according to whether they are tangible (publications with co-authors from more than one research field, analysis of graduate profiles and career destinations) or non-tangible (interaction between researchers, networks and trust). A mixed methods approach that includes bibliometric analysis combined with interviews with students is applied. Bibliometric analysis shows that as the Programme has evolved the amount of multi-disciplinary work has increased (32% of the 203 full papers produced by the programme's researchers have authors from more than one research field). Network analysis to explore which research fields collaborate most frequently shows that hydrology plays a significant role and has collaborated with seven of the ten research fields. Hydrology researchers seem to interact most strongly with other research fields as they contribute understanding on water system processes. Network analysis to explore which individuals collaborate shows that much joint work takes place through the five research cluster groups (water resource management, land-surface processes, Hydrological Open Air Laboratory, water and health, modelling and risk). Student interviews highlight that trust between colleagues and supervisors, and the role of spaces for interaction (joint study sites, cluster group meetings, shared offices etc.) are important for joint work. Graduate analysis shows that students develop skills and confidence to work across disciplines through collaborating on their doctoral research. Working collaboratively during the doctorate appears to be strongly correlated with continuing to work in this way after graduation.

  8. Adaptation, Evaluation and Inclusion

    ERIC Educational Resources Information Center

    Basson, R.

    2011-01-01

    In this article I reflect on a recent development currently shaping programme evaluation as field, which makes the case for evaluators facilitating evaluation training evaluees to self-evaluate and improve the programmes they teach. Fetterman argues persuasively that the practice was incipient in the field and required formalization and acceptance…

  9. Convolutional Neural Network on Embedded Linux(trademark) System-on-Chip: A Methodology and Performance Benchmark

    DTIC Science & Technology

    2016-05-01

    A9 CPU and 15 W for the i7 CPU. A method of accelerating this computation is by using a customized hardware unit called a field- programmable gate...implementation of custom logic to accelerate com- putational workloads. This FPGA fabric, in addition to the standard programmable logic, contains 220...chip; field- programmable gate array Daniel Gebhardt U U U U 18 (619) 553-2786 INITIAL DISTRIBUTION 84300 Library (2) 85300 Archive/Stock (1

  10. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    DTIC Science & Technology

    2016-05-01

    A9 CPU and 15 W for the i7 CPU. A method of accelerating this computation is by using a customized hardware unit called a field- programmable gate...implementation of custom logic to accelerate com- putational workloads. This FPGA fabric, in addition to the standard programmable logic, contains 220...chip; field- programmable gate array Daniel Gebhardt U U U U 18 (619) 553-2786 INITIAL DISTRIBUTION 84300 Library (2) 85300 Archive/Stock (1

  11. Environmental Learning Using a Problem-Based Approach in the Field: A Case Study of a Hong Kong School

    ERIC Educational Resources Information Center

    Kwan, Tammy; So, Max

    2008-01-01

    This study investigated the environmental learning of a group of senior geography students through a problem-based learning (PBL) field programme to see if the goals of education "for" the environment could be accomplished. In the PBL field programme, the students were given a problem statement concerning a real-life scenario of an old…

  12. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    NguyenKobayashi, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable for a single design to be reusable and adaptable to instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, in which the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method are exploited. The result is a wide-kernel, multipass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored sequentially into the FIFOs, and the outputs of each butterfly are written sequentially first into the even FIFO, then the odd FIFO. Because of the order in which the outputs are written into the FIFOs, the depth of the even FIFOs, 768 each, is 1.5 times that of the odd FIFOs, 512 each. The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access time. The total memory size to store the twiddle factors is 589.9 Kbits. This FFT structure combines the benefits of high throughput from the parallel FFT kernels and low resource usage from the multi-pass FFT kernels with the desired adaptability. Space instrument missions that need onboard FFT capabilities, such as the proposed DESDynI, SWOT (Surface Water Ocean Topography), and Europa sounding radar missions, would greatly benefit from this technology with significant reductions in non-recurring cost and risk.
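    The storage figures quoted in this abstract can be reproduced with a short back-of-the-envelope check. The FIFO depths, sample width and FFT size come from the abstract; the assumption that the 128 FIFOs split evenly into 64 even and 64 odd, and that one 36-bit twiddle factor is stored per pair of points, is an inference made only to verify consistency.

```python
# Consistency check of the MPWK-FFT storage figures quoted in the abstract.
SAMPLE_BITS = 36
EVEN_FIFOS  = 64      # assumed: 32 butterflies x (real, imaginary) even-output FIFOs
ODD_FIFOS   = 64      # assumed: 32 butterflies x (real, imaginary) odd-output FIFOs
EVEN_DEPTH  = 768     # 1.5x the odd depth, as stated
ODD_DEPTH   = 512

data_bits = (EVEN_FIFOS * EVEN_DEPTH + ODD_FIFOS * ODD_DEPTH) * SAMPLE_BITS
print(data_bits / 1e6)    # ~2.95 Mbits of data storage, matching the abstract

N = 32 * 1024             # 32K-point FFT
twiddle_bits = (N // 2) * SAMPLE_BITS
print(twiddle_bits / 1e3) # ~589.8 Kbits of twiddle-factor ROM (abstract quotes 589.9)
```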

  13. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on-board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of technologies required for the design and construction of SVIP and EASI and to advance the spatial-spectral imaging and large-scale space interferometry science and engineering.
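    For reference, the FFT2 round trip that dominates the computation above can be expressed in a few lines of NumPy. This is only an algorithmic sketch of FFT2-based filtering on a generic image; the frame size, filter mask and keep fraction are illustrative assumptions and are not drawn from the SVIP or EASI designs.

```python
import numpy as np

def fft2_lowpass(image, keep_fraction=0.1):
    """Toy FFT2 round trip: forward 2-D FFT, zero out high spatial
    frequencies, inverse FFT back to the image domain."""
    spectrum = np.fft.fft2(image)
    rows, cols = image.shape
    r_keep = int(rows * keep_fraction)
    c_keep = int(cols * keep_fraction)
    mask = np.zeros_like(spectrum)
    # Low spatial frequencies live in the corners of the unshifted spectrum.
    mask[:r_keep, :c_keep] = 1
    mask[:r_keep, -c_keep:] = 1
    mask[-r_keep:, :c_keep] = 1
    mask[-r_keep:, -c_keep:] = 1
    return np.real(np.fft.ifft2(spectrum * mask))

frame = np.random.rand(256, 256)   # stand-in for a sensor frame
filtered = fft2_lowpass(frame)
```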

  14. Report of the Director-General on the Long-Term Programme in the Field of Hydrology.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France). General Conference.

    The report describes the principal orientations of the International Hydrological Programme, as well as the procedures suggested for its execution. The origin and justification of the programme are presented. The objectives of the 1975 programme are stated and the contents, which include the activities, themes, application of new techniques in…

  15. Enhancing Learning Effectiveness in Digital Design Courses through the Use of Programmable Logic Boards

    ERIC Educational Resources Information Center

    Zhu, Yi; Weng, T.; Cheng, Chung-Kuan

    2009-01-01

    Incorporating programmable logic devices (PLD) in digital design courses has become increasingly popular. The advantages of using PLDs, such as complex programmable logic devices (CPLDs) and field programmable gate arrays (FPGA), have been discussed before. However, previous studies have focused on the experiences from the point of view of the…

  16. PHANTOM: Practical Oblivious Computation in a Secure Processor

    DTIC Science & Technology

    2014-05-16

    Utilizing Multiple FPGAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 6 Implementation on the HC-2ex 50 6.1 Integration with a RISC -V...development of Phantom, Mohit also contributed to the code base, in particular with regard to the integration between the ORAM controller and the RISC -V...well. v Tremendous thanks is owed to the team that developed the RISC -V processor Phantom is using: among other contributors, this includes

  17. Event processing in X-IFU detector onboard Athena.

    NASA Astrophysics Data System (ADS)

    Ceballos, M. T.; Cobos, B.; van der Kuurs, J.; Fraga-Encinas, R.

    2015-05-01

    The X-ray Observatory ATHENA was proposed in April 2014 as the mission to implement the science theme "The Hot and Energetic Universe" selected by ESA for L2 (the second Large-class mission in ESA's Cosmic Vision science programme). One of the two X-ray detectors designed to be onboard ATHENA is X-IFU, a cryogenic microcalorimeter based on Transition Edge Sensor (TES) technology that will provide spatially resolved high-resolution spectroscopy. X-IFU will be developed by a consortium of European research institutions, currently from France (leadership), Italy, The Netherlands, Belgium, UK, Germany and Spain. From Spain, IFCA (CSIC-UC) is involved in the Digital Readout Electronics (DRE) unit of the X-IFU detector, in particular in the Event Processor Subsystem. We at IFCA are in charge of the development and implementation in the DRE unit of the Event Processing algorithms, designed to recognize, from a noisy signal, the intensity pulses generated by the absorption of the X-ray photons, and subsequently extract their main parameters (coordinates, energy, arrival time, grade, etc.). Here we will present the design and performance of the algorithms developed for event recognition (adjusted derivative) and pulse grading/qualification, as well as the progress in the algorithms designed to extract the energy content of the pulses (pulse optimal filtering). IFCA will finally have the responsibility for the on-board implementation in the (TBD) FPGAs or micro-processors of the DRE unit, where this Event Processing part will take place, to fit into the limited telemetry of the instrument.
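    A minimal sketch of derivative-based pulse recognition in the spirit described above is shown below. The threshold, minimum separation and simulated pulse shape are illustrative assumptions; this is not the X-IFU "adjusted derivative" algorithm or its parameters.

```python
import numpy as np

def find_pulses(signal, threshold, min_separation=10):
    """Trigger on samples where the first difference (a crude derivative)
    crosses a threshold; return candidate arrival indices."""
    deriv = np.diff(signal)
    candidates = np.where(deriv > threshold)[0]
    arrivals = []
    for idx in candidates:
        if not arrivals or idx - arrivals[-1] > min_separation:
            arrivals.append(int(idx))
    return arrivals

# Toy record: noise plus two exponentially decaying pulses.
np.random.seed(0)
t = np.arange(2000)
record = 0.05 * np.random.randn(2000)
for start in (400, 1300):
    record[start:] += np.exp(-(t[start:] - start) / 80.0)
print(find_pulses(record, threshold=0.3))   # expect indices near 400 and 1300
```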

  18. Digital Beamforming Scatterometer

    NASA Technical Reports Server (NTRS)

    Rincon, Rafael F.; Vega, Manuel; Kman, Luko; Buenfil, Manuel; Geist, Alessandro; Hillard, Larry; Racette, Paul

    2009-01-01

    This paper discusses scatterometer measurements collected with the multi-mode Digital Beamforming Synthetic Aperture Radar (DBSAR) during the SMAP-VEX 2008 campaign. The 2008 SMAP Validation Experiment was conducted to address a number of specific questions related to the soil moisture retrieval algorithms. SMAP-VEX 2008 consisted of a series of aircraft-based flights conducted on the Eastern Shore of Maryland and Delaware in the fall of 2008. Several other instruments participated in the campaign, including the Passive Active L-Band System (PALS), the Marshall Airborne Polarimetric Imaging Radiometer (MAPIR), and the Global Positioning System Reflectometer (GPSR). This campaign was the first SMAP Validation Experiment. DBSAR is a multimode radar system developed at NASA/Goddard Space Flight Center that combines state-of-the-art radar technologies, on-board processing, and advances in signal processing techniques in order to enable new remote sensing capabilities applicable to Earth science and planetary applications [1]. The instrument can be configured to operate in scatterometer, Synthetic Aperture Radar (SAR), or altimeter mode. The system builds upon the L-band Imaging Scatterometer (LIS) developed as part of the RadSTAR program. The radar is a phased array system designed to fly on the NASA P3 aircraft. The instrument consists of a programmable waveform generator, eight transmit/receive (T/R) channels, a microstrip antenna, and a reconfigurable data acquisition and processor system. Each transmit channel incorporates a digital attenuator and a digital phase shifter that enable amplitude and phase modulation on transmit. The attenuators, phase shifters, and calibration switches are digitally controlled by the radar control card (RCC) on a pulse-by-pulse basis. The antenna is a corporate-fed microstrip patch array centered at 1.26 GHz with a 20 MHz bandwidth. Although only one feed is used with the present configuration, provision was made for separate corporate feeds for vertical and horizontal polarization. System upgrades to dual polarization are currently under way. The DBSAR processor is a reconfigurable data acquisition and processor system capable of real-time, high-speed data processing. DBSAR uses an FPGA-based architecture to implement digital down-conversion, in-phase and quadrature (I/Q) demodulation, and subsequent radar-specific algorithms. The core of the processor board consists of an analog-to-digital (A/D) section, three Altera Stratix field programmable gate arrays (FPGAs), an ARM microcontroller, several memory devices, and an Ethernet interface. The processor also interfaces with a navigation board consisting of a GPS and a MEMS gyro. The processor has been configured to operate in scatterometer, Synthetic Aperture Radar (SAR), and altimeter modes. All the modes are based on digital beamforming, a digital process that generates the far-field beam patterns at various scan angles from voltages sampled in the antenna array. This technique allows steering the received beam and controlling its beam-width and side-lobe levels. Several beamforming techniques can be implemented, each characterized by unique strengths and weaknesses, and each applicable to different measurement scenarios. In scatterometer mode, the radar is capable of generating a wide beam or scanning a narrow beam on transmit, and of steering the received beam in processing while controlling its beamwidth and side-lobe level. Table I lists some important radar characteristics.
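    Digital beamforming of the kind described above amounts to applying per-element complex weights to the sampled antenna voltages and summing. The sketch below forms one far-field beam at a given scan angle for a uniform linear array; the element spacing, carrier frequency and phase-only weighting are assumptions for illustration, not DBSAR's actual array geometry or processing chain.

```python
import numpy as np

C = 3e8                   # speed of light (m/s)
F0 = 1.26e9               # assumed carrier, matching the L-band antenna center frequency
WAVELENGTH = C / F0

def beamform(element_samples, scan_angle_deg, spacing=WAVELENGTH / 2):
    """Steer a uniform linear array to scan_angle_deg by phase-weighting
    each element's complex baseband samples and summing.
    element_samples: array of shape (n_elements, n_samples)."""
    n_elem = element_samples.shape[0]
    k = 2 * np.pi / WAVELENGTH
    phases = k * spacing * np.arange(n_elem) * np.sin(np.radians(scan_angle_deg))
    weights = np.exp(-1j * phases)     # conjugate steering vector
    return weights @ element_samples   # one formed beam, shape (n_samples,)

# Example: 8 receive channels, steering the received beam to 20 degrees.
samples = np.random.randn(8, 1024) + 1j * np.random.randn(8, 1024)
beam = beamform(samples, scan_angle_deg=20.0)
```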

  19. Expectations of Majlis Amanah Rakyat (MARA) Stakeholders on the Ulul Albab Curriculum at a MARA Junior Science College (MRSM)

    ERIC Educational Resources Information Center

    Manaf, Umi Kalthom Abdul; Alias, Nurul Fitriah; Azman, Ady Hameme Nor; Rahman, Fadzilah Abdul; Zulkifli, Hafizah

    2014-01-01

    Ulul Albab is an educational programme of integration between the existing programmes in MARA Junior Science College (MRSM) with the religious school programme including Tahfiz Al-Quran. MRSM Ulul Albab education programme is designed to produce professional experts, entrepreneurs and technocrats that are well versed in the field of religion-based…

  20. Transition to blended learning: experiences from the first year of our blended learning Bachelor of Nursing Studies programme.

    PubMed

    Sweeney, Mary-Rose; Kirwan, Anne; Kelly, Mary; Corbally, Melissa; O Neill, Sandra; Kirwan, Mary; Hourican, Susan; Matthews, Anne; Hussey, Pamela

    2016-10-01

    The School of Nursing at Dublin City University offered a new blended learning Bachelor of Nursing Studies programme in the academic year 2011. To document the experiences of the academic team making the transition from a face-to-face classroom-delivered programme to the new blended learning format. Academics who delivered the programme were asked to describe their experiences of developing the new programme via two focus groups. Five dominant themes were identified: Staff Readiness; Student Readiness; Programme Delivery and Student Engagement; Assessment of Module Learning Outcomes and Feedback; and Reflecting on the First Year and Thinking of the Future. Face-to-face tutorials were identified as very important to both academics and students. Reservations about whether migrating the programme to an online format encouraged students to engage in additional practices of plagiarism were expressed by some. Student ability/readiness to engage with technology-enhanced learning was an important determinant of their own success academically. In the field of nursing blended learning is a relatively new and emerging field which will require huge cultural shifts for staff and students alike.

  1. Walking the Talk: Towards a More Inclusive Field of Disability Studies

    ERIC Educational Resources Information Center

    Opini, Bathseba

    2016-01-01

    This paper is a conversation about growing an inclusive field of disability studies. The paper draws on data collected through an analysis of existing disability studies programmes in selected Canadian universities. The paper makes a case for including diverse perspectives, experiences, viewpoints, and voices in these programmes. In this work, I…

  2. Development of a Low-Cost and High-speed Single Event Effects Testers based on Reconfigurable Field Programmable Gate Arrays (FPGA)

    NASA Technical Reports Server (NTRS)

    Howard, J. W.; Kim, H.; Berg, M.; LaBel, K. A.; Stansberry, S.; Friendlich, M.; Irwin, T.

    2006-01-01

    A viewgraph presentation on the development of a low-cost, high-speed single event effects tester based on reconfigurable Field Programmable Gate Arrays (FPGAs) is shown. The topics include: 1) Introduction; 2) Objectives; 3) Tester Descriptions; 4) Tester Validations and Demonstrations; 5) Future Work; and 6) Summary.

  3. Beyond Constructivism: The Progressive Research Programme into Learning Science

    ERIC Educational Resources Information Center

    Taber, Keith S.

    2006-01-01

    In this paper, it is suggested that while there are a variety of frames or perspectives that guide research into learning science, a pre-paradigmatic field need not be a "free-for-all". Lakatos suggested that academic research fields were characterised by research programmes (RP), which offered heuristic guidance to researchers, and which…

  4. A simple laser locking system based on a field-programmable gate array.

    PubMed

    Jørgensen, N B; Birkmose, D; Trelborg, K; Wacker, L; Winter, N; Hilliard, A J; Bason, M G; Arlt, J J

    2016-07-01

    Frequency stabilization of laser light is crucial in both scientific and industrial applications. Technological developments now allow analog laser stabilization systems to be replaced with digital electronics such as field-programmable gate arrays, which have recently been utilized to develop such locking systems. We have developed a frequency stabilization system based on a field-programmable gate array, with emphasis on hardware simplicity, which offers a user-friendly alternative to commercial and previous home-built solutions. Frequency modulation, lock-in detection, and a proportional-integral-derivative controller are programmed on the field-programmable gate array and only minimal additional components are required to frequency stabilize a laser. The locking system is administered from a host-computer which provides comprehensive, long-distance control through a versatile interface. Various measurements were performed to characterize the system. The linewidth of the locked laser was measured to be 0.7 ± 0.1 MHz with a settling time of 10 ms. The system can thus fully match laser systems currently in use for atom trapping and cooling applications.
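    The control core described above (a lock-in error signal fed into a proportional-integral-derivative controller) reduces to a small discrete-time loop. The sketch below shows a generic digital PID update of the kind such an FPGA would run; the gains, timestep and example error value are chosen purely for illustration and are not taken from the published system.

```python
class DigitalPID:
    """Generic discrete-time PID controller (illustrative, not the
    published FPGA implementation)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and estimate the derivative term,
        # then combine the three contributions into one correction.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a laser-frequency error signal towards zero.
pid = DigitalPID(kp=0.8, ki=50.0, kd=1e-4, dt=1e-5)
error_signal = 0.2                    # assumed offset from the lock point
feedback = pid.update(error_signal)   # correction applied to the laser actuator
```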

  5. A simple laser locking system based on a field-programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jørgensen, N. B.; Birkmose, D.; Trelborg, K.

    Frequency stabilization of laser light is crucial in both scientific and industrial applications. Technological developments now allow analog laser stabilization systems to be replaced with digital electronics such as field-programmable gate arrays, which have recently been utilized to develop such locking systems. We have developed a frequency stabilization system based on a field-programmable gate array, with emphasis on hardware simplicity, which offers a user-friendly alternative to commercial and previous home-built solutions. Frequency modulation, lock-in detection, and a proportional-integral-derivative controller are programmed on the field-programmable gate array and only minimal additional components are required to frequency stabilize a laser. The locking system is administered from a host-computer which provides comprehensive, long-distance control through a versatile interface. Various measurements were performed to characterize the system. The linewidth of the locked laser was measured to be 0.7 ± 0.1 MHz with a settling time of 10 ms. The system can thus fully match laser systems currently in use for atom trapping and cooling applications.

  6. Rethinking programme evaluation in health professions education: beyond 'did it work?'.

    PubMed

    Haji, Faizal; Morin, Marie-Paule; Parker, Kathryn

    2013-04-01

    For nearly 40 years, outcome-based models have dominated programme evaluation in health professions education. However, there is increasing recognition that these models cannot address the complexities of the health professions context, and studies employing alternative evaluation approaches are appearing in the literature. A similar paradigm shift occurred over 50 years ago in the broader discipline of programme evaluation. Understanding the development of contemporary paradigms within this field provides important insights to support the evolution of programme evaluation in the health professions. In this discussion paper, we review the historical roots of programme evaluation as a discipline, demonstrating parallels with the dominant approach to evaluation in the health professions. In tracing the evolution of contemporary paradigms within this field, we demonstrate how their aim is not only to judge a programme's merit or worth, but also to generate information for curriculum designers seeking to adapt programmes to evolving contexts, and researchers seeking to generate knowledge to inform the work of others. From this evolution, we distil seven essential elements of educational programmes that should be evaluated to achieve the stated goals. Our formulation is not a prescriptive method for conducting programme evaluation; rather, we use these elements as a guide for the development of a holistic 'programme of evaluation' that involves multiple stakeholders, uses a combination of available models and methods, and occurs throughout the life of a programme. Thus, these elements provide a roadmap for the programme evaluation process, which allows evaluators to move beyond asking whether a programme worked, to establishing how it worked, why it worked and what else happened. By engaging in this process, evaluators will generate a sound understanding of the relationships among programmes, the contexts in which they operate, and the outcomes that result from them. © Blackwell Publishing Ltd 2013.

  7. Initial Approaches for Discovery of Undocumented Functionality in FPGAs

    DTIC Science & Technology

    2017-03-01

    commercial pressures such as IP protection, support cost, and time to market , modern COTS devices contain many functions that are not exposed to the... market pressures have increased, industry increasingly uses the current generation device to do trial runs of next-generation architecture features...the product of industry operating in a highly cost competitive market , and are not inserted with malicious intent, however, this does not preclude

  8. Real-Time Digital Signal Processing Based on FPGAs for Electronic Skin Implementation †

    PubMed Central

    Ibrahim, Ali; Gastaldo, Paolo; Chible, Hussein; Valle, Maurizio

    2017-01-01

    Enabling touch-sensing capability would help appliances understand interaction behaviors with their surroundings. Many recent studies are focusing on the development of electronic skin because of its necessity in various application domains, namely autonomous artificial intelligence (e.g., robots), biomedical instrumentation, and replacement prosthetic devices. An essential task of the electronic skin system is to locally process the tactile data and send structured information either to mimic human skin or to respond to the application demands. The electronic skin must be fabricated together with an embedded electronic system which has the role of acquiring the tactile data, processing, and extracting structured information. On the other hand, processing tactile data requires efficient methods to extract meaningful information from raw sensor data. Machine learning represents an effective method for data analysis in many domains: it has recently demonstrated its effectiveness in processing tactile sensor data. In this framework, this paper presents the implementation of digital signal processing based on FPGAs for tactile data processing. It provides the implementation of a tensorial kernel function for a machine learning approach. Implementation results are assessed by highlighting the FPGA resource utilization and power consumption. Results demonstrate the feasibility of the proposed implementation when real-time classification of input touch modalities is targeted. PMID:28287448

  9. On Multiple AER Handshaking Channels Over High-Speed Bit-Serial Bidirectional LVDS Links With Flow-Control and Clock-Correction on Commercial FPGAs for Scalable Neuromorphic Systems.

    PubMed

    Yousefzadeh, Amirreza; Jablonski, Miroslaw; Iakymchuk, Taras; Linares-Barranco, Alejandro; Rosado, Alfredo; Plana, Luis A; Temple, Steve; Serrano-Gotarredona, Teresa; Furber, Steve B; Linares-Barranco, Bernabe

    2017-10-01

    Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or a system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present a fully serial implementation using bidirectional SATA connectors with a pair of low-voltage differential signaling (LVDS) wires for each direction. The proposed implementation can multiplex a number of conventional parallel AER links for each physical LVDS connection. It uses flow control, clock correction, and byte alignment techniques to transmit 32-bit address events reliably over multiplexed serial connections. The setup has been tested using commercial Spartan6 FPGAs attaining a maximum event transmission speed of 75 Meps (Mega events per second) for 32-bit events at a line rate of 3.0 Gbps. Full HDL codes (vhdl/verilog) and example demonstration codes for the SpiNNaker platform will be made available.
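    The quoted peak rate is internally consistent if one assumes a standard 8b/10b-style line-coding overhead on the serial lane. The short check below simply spells out that assumption; the encoding efficiency is not a detail taken from the paper.

```python
EVENT_BITS = 32                  # address-event width, as stated
LINE_RATE = 3.0e9                # bits per second on the LVDS lane, as stated
ENCODING_EFFICIENCY = 8 / 10     # assumed 8b/10b-style line coding

payload_rate = LINE_RATE * ENCODING_EFFICIENCY   # usable bits per second
max_event_rate = payload_rate / EVENT_BITS
print(max_event_rate / 1e6)      # 75.0 Mega events per second, matching the reported figure
```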

  10. Real-Time Digital Signal Processing Based on FPGAs for Electronic Skin Implementation.

    PubMed

    Ibrahim, Ali; Gastaldo, Paolo; Chible, Hussein; Valle, Maurizio

    2017-03-10

    Enabling touch-sensing capability would help appliances understand interaction behaviors with their surroundings. Many recent studies are focusing on the development of electronic skin because of its necessity in various application domains, namely autonomous artificial intelligence (e.g., robots), biomedical instrumentation, and replacement prosthetic devices. An essential task of the electronic skin system is to locally process the tactile data and send structured information either to mimic human skin or to respond to the application demands. The electronic skin must be fabricated together with an embedded electronic system which has the role of acquiring the tactile data, processing, and extracting structured information. On the other hand, processing tactile data requires efficient methods to extract meaningful information from raw sensor data. Machine learning represents an effective method for data analysis in many domains: it has recently demonstrated its effectiveness in processing tactile sensor data. In this framework, this paper presents the implementation of digital signal processing based on FPGAs for tactile data processing. It provides the implementation of a tensorial kernel function for a machine learning approach. Implementation results are assessed by highlighting the FPGA resource utilization and power consumption. Results demonstrate the feasibility of the proposed implementation when real-time classification of input touch modalities is targeted.

  11. Multicasting mesh AER: a scalable assembly approach for reconfigurable neuromorphic structured AER systems. Application to ConvNets.

    PubMed

    Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B

    2013-02-01

    This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10³ neurons and almost 32 million synapses.
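    As a generic picture of per-module routing on a 2D mesh, the sketch below makes a dimension-ordered (X-then-Y) next-hop decision from module coordinates. This is only an illustration of mesh routing in general; the actual destination- and source-label routers studied in the paper may use different policies.

```python
def next_hop(current, destination):
    """Dimension-ordered (X-then-Y) routing step on a 2D mesh of modules.
    current and destination are (x, y) module coordinates."""
    cx, cy = current
    dx, dy = destination
    if dx != cx:
        return "EAST" if dx > cx else "WEST"
    if dy != cy:
        return "NORTH" if dy > cy else "SOUTH"
    return "LOCAL"   # the event has reached its destination module

# Example: an address event labelled for module (2, 3) arriving at module (0, 3).
print(next_hop((0, 3), (2, 3)))   # EAST
```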

  12. An Undergraduate Course and Laboratory in Digital Signal Processing with Field Programmable Gate Arrays

    ERIC Educational Resources Information Center

    Meyer-Base, U.; Vera, A.; Meyer-Base, A.; Pattichis, M. S.; Perry, R. J.

    2010-01-01

    In this paper, an innovative educational approach to introducing undergraduates to both digital signal processing (DSP) and field programmable gate array (FPGA)-based design in a one-semester course and laboratory is described. While both DSP and FPGA-based courses are currently present in different curricula, this integrated approach reduces the…

  13. School Inclusion Programmes (SIPS)

    ERIC Educational Resources Information Center

    Drossinou-Korea, Maria; Matousi, Dimitra; Panopoulos, Nikolaos; Paraskevopoulou, Aikaterini

    2016-01-01

    The purpose of this work was to understand the school inclusion programmes (SIPs) for students with special educational needs (SEN). The methodology was conducted in the field of special education (SE) and focuses on three case studies of students who were supported by SIPs. The Targeted, Individual, Structured, Inclusion Programme for students…

  14. Seven years of the field epidemiology training programme (FETP) at Chennai, Tamil Nadu, India: an internal evaluation

    PubMed Central

    2012-01-01

    Background During 2001–2007, the National Institute of Epidemiology (NIE), Chennai, Tamil Nadu, India admitted 80 trainees in its two-year Field Epidemiology Training Programme (FETP). We evaluated the first seven years of the programme to identify strengths and weaknesses. Methods We identified core components of the programme and broke them down into input, process, output and outcome. We developed critical indicators to reflect the logic model. We reviewed documents including fieldwork reports, abstracts listed in proceedings and papers published in Medline-indexed journals. We conducted an anonymous online survey of the graduates to collect information on self-perceived competencies, learning activities, field assignments, supervision, curriculum, relevance to career goals, strengths and weaknesses. Results Of the 80 students recruited during 2001–2007, 69 (86%) acquired seven core competencies (epidemiology, surveillance, outbreaks, research, human subjects protection, communication and management) and graduated through completion of at least six field assignments. The faculty-to-student ratio ranged between 0.4 and 0.12 (expected: 0.25). The curriculum was continuously adapted with all resources available on-line. Fieldwork led to the production of 158 scientific communications presented at international meetings and to 29 manuscripts accepted in indexed, peer-reviewed journals. The online survey showed that while most graduates acquired competencies, unmet needs persisted in laboratory sciences, data analysis tools and faculty-to-student ratio. Conclusions NIE adapted the international FETP model to India. However, further efforts are required to scale up the programme and to develop career tracks for field epidemiologists in the country. PMID:23013473

  15. Those Who Can, Teach: The Academic Quality of Preservice Students in Teacher Education Programmes in Taiwan

    ERIC Educational Resources Information Center

    Wang, Hsiou-Huai; Huang, Chin-Chun

    2016-01-01

    Difficulty in recruiting high-calibre individuals into teaching is a perennial issue in the field of teacher education. In some countries, students in teacher programmes are in general found to be lower in academic standing than their counterparts in other fields, which might lead to belief in the old saying that "those who cannot,…

  16. The Broad Effectiveness of Seventy-Four Field Instances of Abstinence-Based Programming

    ERIC Educational Resources Information Center

    Birch, Paul James; White, Joseph M.; Fellows, Kaylene

    2017-01-01

    Evaluations of large federally funded sexual risk avoidance education (SRAE) efforts in the USA have not been widely reported in the wake of funding cuts. The purpose of this study is to report results from a broad set of programmes to demonstrate the breadth of field effectiveness of these programmes. Twenty-seven separate community-based SRAE…

  17. Systems and methods for detecting a failure event in a field programmable gate array

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2009-01-01

    An embodiment generally relates to a method of self-detecting an error in a field programmable gate array (FPGA). The method includes writing a signature value into a signature memory in the FPGA and determining a conclusion of a configuration refresh operation in the FPGA. The method also includes reading an outcome value from the signature memory.
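    As summarized, the self-detection scheme writes a known signature, lets the configuration refresh run, and compares what is read back. A minimal software analogue of that comparison is sketched below; the signature value and the readback interface are placeholders, not details from the patent.

```python
EXPECTED_SIGNATURE = 0xA5A5A5A5   # placeholder signature value written before the refresh

def check_after_refresh(read_signature_word):
    """Compare the signature read back after a configuration refresh with
    the value written beforehand; a mismatch flags a failure event."""
    return read_signature_word == EXPECTED_SIGNATURE

# Example: a corrupted readback indicates a failure event in the FPGA.
print(check_after_refresh(0xA5A5A5A5))   # True  -> no failure detected
print(check_after_refresh(0xA5A4A5A5))   # False -> failure event detected
```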

  18. Advanced Wireless Integrated Navy Network (AWINN)

    DTIC Science & Technology

    2005-12-31

    handle high data rates using COTS FPGAs . The effort of the Cross-Layer Optimization group is focused on cross-layer design of UWB for position location...From Transmitter Boar1 To Receiver BoardTransmittedl Receiver i i.. Switch Lowpass -20 dB FPGA -2dB Filter Gain Controlled Gain Variable Attenuator... FPGA Code * April - June 2006 "o Demonstrate Transceiver Operation "o Integrate Transceiver with Other AWINN Activities Personnel: Chris R. Anderson

  19. Skills Utilisation at Work, the Quality of the Study Programme and Fields of Study

    ERIC Educational Resources Information Center

    Støren, Liv Anne; Arnesen, Clara Åse

    2016-01-01

    This paper examines the factors that may have impact on the extent to which the knowledge and skills of master's degree graduates in Norway are utilised at work, three years after graduation. The focus is on the impact of the quality of the study programme as well as the graduates' fields of study, when also taking into account other factors…

  20. Exploring the Development of Existing Sex Education Programmes for People with Intellectual Disabilities: An Intervention Mapping Approach

    ERIC Educational Resources Information Center

    Schaafsma, Dilana; Stoffelen, Joke M. T.; Kok, Gerjo; Curfs, Leopold M. G.

    2013-01-01

    Background: People with intellectual disabilities face barriers that affect their sexual health. Sex education programmes have been developed by professionals working in the field of intellectual disabilities with the aim to overcome these barriers. The aim of this study was to explore the development of these programmes. Methods: Sex education…

  1. New pathways in the evaluation of programmes for men who perpetrate violence against their female partners.

    PubMed

    Wojnicka, Katarzyna; Scambor, Christian; Kraus, Heinrich

    2016-08-01

    Today, evaluation research in the field of intervention programmes for men who perpetrate violence against their female partners still makes a fragmentary impression. Across Europe various evaluation studies have been performed. However, the methodologies applied are too heterogeneous to allow the combination of the results in a meta-analytical way. In this paper we propose a future pathway for organising outcome evaluation studies of domestic violence perpetrator programmes in community settings, so that today's problems in this field can be overcome. In a pragmatic framework that acknowledges the limited pre-conditions for evaluation studies in the area of domestic violence perpetrator programmes as it is today, feasible approaches for outcome evaluation are outlined, with recent developments in the field taken as starting points. The framework for organising future evaluation studies of work with perpetrators of domestic violence is presented together with a strategy to promote this framework. International networks of practitioners and researchers play a central role in this strategy through upskilling the area of practical work, preparing the ground for evaluation research and improving cooperation between practitioners and researchers. This paper is based on the results of the European funded project IMPACT (under the Daphne-III-funding programme of the European Commission). Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A novel digital pulse processing architecture for nuclear instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moline, Yoann; Thevenin, Mathieu; Corre, Gwenole

    The field of nuclear instrumentation covers a wide range of applications, including counting, spectrometry, pulse shape discrimination and multi-channel coincidence. These applications are the topic of much research, and new algorithms and implementations are constantly proposed thanks to advances in digital signal processing. However, these improvements are not yet implemented in instrumentation devices. This is especially true for neutron-gamma discrimination applications, which traditionally use the charge comparison method, while the literature proposes other algorithms, based on the frequency domain or wavelet theory, which show better performance. Another example is pileups, which are generally rejected even though pileup correction algorithms exist. These processes are traditionally performed offline due to two issues. The first is the Poissonian characteristic of the signal, composed of randomly arriving pulses, which requires current architectures to work in data flow. The second is the real-time requirement, which implies losing pulses when the pulse rate is too high. Despite the possibility of treating the pulses independently from each other, current architectures paralyze the acquisition of the signal during the processing of a pulse. This loss is called dead-time. These two issues have led current architectures to use dedicated solutions based on re-configurable components like Field Programmable Gate Arrays (FPGAs) to meet the performance necessary to deal with dead-time. However, dedicated hardware algorithm implementations on re-configurable technologies are complex and time-consuming. For all these reasons, a Digital Pulse Processing (DPP) architecture programmable in a high level language such as C or C++, and able to reduce dead-time, would be worthwhile for nuclear instrumentation. This would reduce prototyping and test duration by reducing the level of hardware expertise needed to implement new algorithms. However, today's programmable solutions do not meet the performance needed to operate online and do not allow scaling with the increase in the number of measurement channels. That is why an innovative DPP architecture is proposed in this paper. This architecture is able to overcome dead-time while being programmable and is flexible with respect to the number of measurement channels. The proposed architecture is based on an innovative execution model for pulse processing applications which can be summarized as follows. The signal is not composed of pulses only; consequently, pulse processing does not have to operate on the entire signal. Therefore, the first step of our proposal is pulse extraction by the use of dedicated components named pulse extractors. The triggering step can be achieved after the analog-to-digital conversion without any signal shaping or filtering stages. Pileup detection and accurate pulse time stamping are done at this stage. Any application downstream of this step can work on adaptive, variable-sized arrays of samples, simplifying pulse processing methods. Then, once the data flow is broken, it is possible to distribute pulses to Functional Units (FUs) which perform the processing. As the date of each pulse is known, pulses can be processed individually and out-of-order to provide the results. To manage the pulse distribution, a scheduler and an interconnection network are used. Pulses are distributed to the first FU which is not busy, without congesting the interconnection network. For this reason, the process duration no longer results in dead-time if there are enough FUs.
    FUs are designed to be standalone and to comprise at least a programmable general purpose processor (ARM, Microblaze), allowing the implementation of complex algorithms without any modification of the hardware. An acquisition chain is composed of a succession of algorithms, which leads to organizing the FUs as a software macro-pipeline. A simple approach consists in assigning one algorithm per FU. Consequently, the global latency becomes the worst latency of algorithm execution on a FU. Moreover, as algorithms are executed locally - i.e. on a FU - this approach limits the shared memory requirement. To handle multiple channels, we propose FU sharing; this approach maximizes the chance of finding a non-busy FU to process an incoming pulse. This is possible since each channel receives random events independently, so the pulse extractors associated with the channels do not necessarily need simultaneous access to all computing resources to distribute their pulses. The major contribution of this paper is the proposition of an execution model and its associated programmable hardware architecture for digital pulse processing that can handle multiple acquisition channels while maintaining scalability thanks to the use of shared resources. This execution model and associated architecture are validated by simulation of a cycle-accurate SystemC model of the architecture. The proposed architecture shows promising results in terms of scalability while maintaining zero dead-time. This work also permits sizing of the hardware resources required for a predefined set of applications. Future work will focus on the interconnection network and a scheduling policy that can exploit the variable length of pulses. Then, the hardware implementation of this architecture will be performed and tested for a representative set of applications.
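    A minimal sketch of the pulse-extraction step described above: a threshold trigger carves variable-length windows out of the digitized stream so that downstream processing can work per pulse instead of on the whole data flow. The threshold, pre-trigger length and maximum window size are illustrative assumptions, not parameters of the proposed architecture.

```python
def extract_pulses(stream, threshold, pre=4, max_len=256):
    """Cut variable-sized pulse windows out of a digitized stream.
    A window opens when the signal crosses `threshold` and closes when it
    falls back below it (or when max_len samples have been collected)."""
    pulses, i, n = [], 0, len(stream)
    while i < n:
        if stream[i] > threshold:
            start = max(0, i - pre)   # keep a few pre-trigger samples
            end = i
            while end < n and stream[end] > threshold and end - start < max_len:
                end += 1
            # Each extracted pair carries a timestamp and its own samples,
            # ready to be dispatched to any free functional unit.
            pulses.append((start, list(stream[start:end])))
            i = end
        else:
            i += 1
    return pulses
```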

  3. Stability of Programmable Shunt Valve Settings with Simultaneous Use of the Optune Transducer Array: A Case Report.

    PubMed

    Chan, Andrew K; Birk, Harjus S; Winkler, Ethan A; Viner, Jennifer A; Taylor, Jennie W; McDermott, Michael W

    2016-07-07

    The Optune® transducer array (Novocure Ltd., Haifa, Israel) is an FDA-approved noninvasive regional therapy that aims to inhibit the growth of glioblastoma multiforme (GBM) cells via utilization of alternating electric fields. Some patients with GBM may develop hydrocephalus and benefit from subsequent shunt placement, but special attention must be paid to patients in whom programmable valves are utilized, given the potential effect of the magnetic fields on valve settings. We present the first case report illustrating the stability of programmable shunt valve settings in a neurosurgical patient undergoing therapy with the Optune device. In this study, shunt valve settings were stable over a period of five days despite Optune therapy. This is reassuring for patients with GBM who require simultaneous treatment with both the Optune device and a programmable shunt system.

  4. Serial data acquisition for GEM-2D detector

    NASA Astrophysics Data System (ADS)

    Kolasinski, Piotr; Pozniak, Krzysztof T.; Czarski, Tomasz; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech; Zienkiewicz, Pawel; Mazon, Didier; Malard, Philippe; Herrmann, Albrecht; Vezinet, Didier

    2014-11-01

    This article discusses a fast data acquisition and histogramming method for the X-ray GEM detector. The whole process of histogramming is performed by FPGA chips (Spartan-6 series from Xilinx). The results of the histogramming process are stored in an internal FPGA memory and then sent to a PC, where the data are merged and processed in MATLAB. The structure of the firmware functionality implemented in the FPGAs is described. Examples of test measurements and results are presented.

  5. Exploring Accelerating Science Applications with FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storaasli, Olaf O; Strenski, Dave

    2007-01-01

    FPGA hardware and tools (VHDL, Viva, MitrionC and CHiMPS) are described. FPGA performance is evaluated on two Cray XD1 systems (Virtex-II Pro 50 and Virtex-4 LX160) for human genome (DNA and protein) sequence comparisons for a computational biology code (FASTA). Scalable FPGA speedups of 50X (Virtex-II) and 100X (Virtex-4) over a 2.2 GHz Opteron were achieved. Coding and IO issues faced for human genome data are described.

  6. Coarse Grain Reconfigurable ASIC through Multiplexer Based Switches

    DTIC Science & Technology

    2015-09-15

    chip area (0.5 mm²), and from simulation their power consumption is negligible (0.002%, too small to measure in the physical system) ... performing implementation that is also flexible.

  7. Dynamic partial reconfiguration of logic controllers implemented in FPGAs

    NASA Astrophysics Data System (ADS)

    Bazydło, Grzegorz; Wiśniewski, Remigiusz

    2016-09-01

    Technological progress in recent years has resulted in digital circuits containing millions of logic gates with the capability for reprogramming and reconfiguring. On the one hand this provides unprecedented computational power, but on the other hand the modelled systems are becoming increasingly complex, hierarchical and concurrent. Therefore, abstract modelling supported by Computer Aided Design tools becomes a very important task. Even the higher consumption of basic electronic components seems acceptable, because chip manufacturing costs tend to fall over time. The paper presents a modelling approach for logic controllers with the use of the Unified Modelling Language (UML). Thanks to the Model Driven Development approach, starting with a UML state machine model and going through the construction of an intermediate Hierarchical Concurrent Finite State Machine model, a collection of Verilog files is created. The system description generated in the hardware description language can be synthesized and implemented in reconfigurable devices, such as FPGAs. Modular specification of the prototyped controller permits further dynamic partial reconfiguration of the prototyped system. The idea is based on exchanging the functionality of the already implemented controller without stopping the FPGA device. This means that a part (for example, a single module) of the logic controller is replaced by another version (called a context), while the rest of the system is still running. The method is illustrated with a practical example of a Home Area Network system.

  8. Design of FPGA ICA for hyperspectral imaging processing

    NASA Astrophysics Data System (ADS)

    Nordin, Anis; Hsu, Charles C.; Szu, Harold H.

    2001-03-01

    The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be solved using an Independent Component Analysis (ICA) algorithm. The ICA algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware or firmware in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high-resolution images and a large number of channels. Here, a pipelined firmware solution, realized using FPGAs, is drawn out and simulated in C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate the actual hardware implementation. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
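
    The blind source separation step itself can be sketched in a few lines of NumPy. The fragment below uses a textbook symmetric FastICA iteration with a tanh nonlinearity on two synthetic mixed signals; it is not the authors' pipelined C/FPGA implementation, and the signals and mixing matrix are invented purely to show the demixing idea.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        # Two synthetic sources (stand-ins for sub-pixel spectra) and a random mixing matrix
        S = np.vstack([np.sin(np.linspace(0, 40, n)),
                       np.sign(np.sin(np.linspace(0, 23, n)))])
        X = rng.normal(size=(2, 2)) @ S                    # observed mixed channels

        # Whitening
        Xc = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(Xc))
        Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

        # Symmetric FastICA with g = tanh
        W = rng.normal(size=(2, 2))
        for _ in range(200):
            G = np.tanh(W @ Z)
            W_new = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
            U, _, Vt = np.linalg.svd(W_new)
            W = U @ Vt                                     # symmetric decorrelation (W W^T)^(-1/2) W

        Y = W @ Z                                          # recovered components
        print(np.round(np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:]), 2))  # one entry near 1 per row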

  9. A software framework for pipelined arithmetic algorithms in field programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Kim, J. B.; Won, E.

    2018-03-01

    Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.

  10. Programmable valve shunts: are they really better?

    PubMed

    Kataria, Rashim; Kumar, Vimal; Mehta, Veer Singh

    2012-01-01

    Programmable valve shunts allow selection of the opening pressure of the shunt valve. In this article, a unique complication pertaining to programmable shunts is discussed. A 5-year-old boy who had a tectal plate low-grade glioma with obstructive hydrocephalus was managed with a Codman programmable ventriculoperitoneal shunt. There was a spontaneous change in the opening pressure of the shunt valve leading to shunt malfunction. Routinely used household appliances produce a magnetic field strong enough to cause a change in the shunt valve pressure setting and may lead to valve malfunction. Other causes of programmable valve malfunction are also discussed.

  11. Exploring the development of existing sex education programmes for people with intellectual disabilities: an intervention mapping approach.

    PubMed

    Schaafsma, Dilana; Stoffelen, Joke M T; Kok, Gerjo; Curfs, Leopold M G

    2013-03-01

    People with intellectual disabilities face barriers that affect their sexual health. Sex education programmes have been developed by professionals working in the field of intellectual disabilities with the aim to overcome these barriers. The aim of this study was to explore the development of these programmes. Sex education programmes geared to people with intellectual disabilities were examined in the context of the Intervention Mapping protocol. Data were obtained via interviews with the programme developers. All programmes lack specific programme outcomes, do not have a theoretical basis, did not involve members of relevant groups in the development process and lack systematic evaluation. Based on our findings and the literature, we conclude that these programmes are unlikely to be effective. Future programmes should be developed using a more systematic and theory- and evidence-based approach. © 2012 Blackwell Publishing Ltd.

  12. Automatic Digital Hardware Synthesis

    DTIC Science & Technology

    1990-09-01

    ...process of translating VHDL to PALASM, a hardware synthesis language. The PALASM description is then directly implemented into a field programmable gate array (FPGA). This allows the engineer to use VHDL to create and validate a design, and then to implement it in a gate array. The development of software to translate VHDL ...

  13. Roll Angle Estimation Using Thermopiles for a Flight Controlled Mortar

    DTIC Science & Technology

    2012-06-01

    Using Xilinx's System Generator, the entire design was implemented at a relatively high level within Matlab's Simulink. This allowed VHDL code to ... The roll angle was accurately estimated by processing the thermopile data with a Recursive Least Squares (RLS) filter implemented on a field programmable gate array (FPGA). These results demonstrate the ...
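
    The Recursive Least Squares update mentioned in this record is a standard recursion, sketched below in Python. The toy measurement model (two thermopile-like regressors with unknown gains) is invented for illustration and is not the report's actual roll-angle model or its FPGA implementation.

        import numpy as np

        def rls_step(theta, P, x, y, lam=0.99):
            """One RLS update: theta = parameter estimate, P = inverse correlation matrix,
            x = regressor vector, y = new measurement, lam = forgetting factor."""
            Px = P @ x
            k = Px / (lam + x @ Px)                  # gain vector
            theta = theta + k * (y - x @ theta)
            P = (P - np.outer(k, Px)) / lam
            return theta, P

        rng = np.random.default_rng(0)
        true_gains = np.array([1.3, -0.7])           # unknown parameters to be estimated
        theta, P = np.zeros(2), 1e3 * np.eye(2)
        for _ in range(500):
            x = rng.normal(size=2)                   # regressor (e.g. two sensor channels)
            y = true_gains @ x + 0.05 * rng.normal() # noisy measurement
            theta, P = rls_step(theta, P, x, y)
        print(np.round(theta, 3))                    # converges toward [1.3, -0.7]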

  14. Maladjustment of programmable ventricular shunt valves by inadvertent exposure to a common hospital device.

    PubMed

    Fujimura, R; Lober, R; Kamian, K; Kleiner, L

    2018-01-01

    Programmable ventricular shunt valves are commonly used to treat hydrocephalus. They can be adjusted to allow for varying amounts of cerebrospinal fluid (CSF) flow using an external magnetic programming device, and are susceptible to maladjustment from inadvertent exposure to magnetic fields. We describe the case of a 3-month-old girl treated for hydrocephalus with a programmable Strata II valve found at the incorrect setting on multiple occasions during her hospitalization despite frequent reprogramming and surveillance. We found that the Vocera badge, a common hands-free wireless communication system worn by our nursing staff, had a strong enough magnetic field to unintentionally change the shunt setting. The device is worn on the chest, bringing it into close proximity to the shunt valve when care providers hold the baby, resulting in the maladjustment. Some commonly used medical devices have a magnetic field strong enough to alter programmable shunt valve settings. Here, we report that the magnetic field from the Vocera hands-free wireless communication system, combined with the worn position, results in shunt maladjustment for the Strata II valve. Healthcare facilities using the Vocera badges need to put protocols in place and properly educate staff members to ensure the safety of patients with Strata II valves.

  15. English-Medium Programmes at Austrian Business Faculties: A Status Quo Survey on National Trends and a Case Study on Programme Design and Delivery

    ERIC Educational Resources Information Center

    Unterberger, Barbara

    2012-01-01

    Internationalisation processes have accelerated the implementation of English-medium programmes (EMPs) across European higher education institutions. The field of business and management studies has been particularly affected by this trend (Wachter & Maiworm 2008: 46) with numerous new EMPs introduced each year. This paper presents key…

  16. NEPP Update of Independent Single Event Upset Field Programmable Gate Array Testing

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Pellish, Jonathan

    2017-01-01

    This presentation provides a NASA Electronic Parts and Packaging (NEPP) Program update of independent Single Event Upset (SEU) Field Programmable Gate Array (FPGA) testing including FPGA test guidelines, Microsemi RTG4 heavy-ion results, Xilinx Kintex-UltraScale heavy-ion results, Xilinx UltraScale+ single event effect (SEE) test plans, development of a new methodology for characterizing SEU system response, and NEPP involvement with FPGA security and trust.

  17. Making the Most Out of School-Based Prevention: Lessons from the Social and Emotional Aspects of Learning (SEAL) Programme

    ERIC Educational Resources Information Center

    Humphrey, Neil; Lendrum, Ann; Wigelsworth, Michael

    2013-01-01

    This paper considers the role played by universal, school-based social and emotional learning (SEL) programmes in addressing the mental health needs of children and young people. Theory and research in the field are discussed. Particular attention is paid to the social and emotional aspects of learning (SEAL) programme in England, a flagship…

  18. Field-programmable logic devices with optical input-output.

    PubMed

    Szymanski, T H; Saint-Laurent, M; Tyan, V; Au, A; Supmonchai, B

    2000-02-10

    A field-programmable logic device (FPLD) with optical I/O is described. FPLD's with optical I/O can have their functionality specified in the field by means of downloading a control-bit stream and can be used in a wide range of applications, such as optical signal processing, optical image processing, and optical interconnects. Our device implements six state-of-the-art dynamically programmable logic arrays (PLA's) on a 2 mm × 2 mm die. The devices were fabricated through the Lucent Technologies-Advanced Research Projects Agency-Consortium for Optical and Optoelectronic Technologies in Computing (Lucent/ARPA/COOP) workshop by use of 0.5-μm complementary metal-oxide semiconductor-self-electro-optic device technology and were delivered in 1998. All devices are fully functional: The electronic data paths have been verified at 200 MHz, and optical tests are pending. The device has been programmed to implement a two-stage optical switching network with six 4 × 4 crossbar switches, which can realize more than 190 × 10^6 unique programmable input-output permutations. The same device scaled to a 2 cm × 2 cm substrate could support as many as 4000 optical I/O and 1 Tbit/s of optical I/O bandwidth and offer fully programmable digital functionality with approximately 110,000 programmable logic gates. The proposed optoelectronic FPLD is also ideally suited to realizing dense, statically reconfigurable crossbar switches. We describe an attractive application area for such devices: a rearrangeable three-stage optical switch for a wide-area-network backbone, switching 1000 traffic streams at the OC-48 data rate and supporting several terabits of traffic.
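
    The quoted figure of more than 190 × 10^6 programmable input-output permutations is consistent with six independently settable 4 × 4 crossbars, each offering 4! = 24 permutation states, as the quick check below shows (this counting assumption is ours, not spelled out in the abstract).

        from math import factorial

        # Six 4x4 crossbars, each with 4! = 24 permutation settings
        print(factorial(4) ** 6)     # 191102976, i.e. just over 190 x 10^6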

  19. Re-Form: FPGA-Powered True Codesign Flow for High-Performance Computing In The Post-Moore Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappello, Franck; Yoshii, Kazutomo; Finkel, Hal

    Multicore scaling will end soon because of practical power limits. Dark silicon is becoming a major issue even more than the end of Moore’s law. In the post-Moore era, the energy efficiency of computing will be a major concern. FPGAs could be a key to maximizing the energy efficiency. In this paper we address severe challenges in the adoption of FPGA in HPC and describe “Re-form,” an FPGA-powered codesign flow.

  20. FPGA based charge fast histogramming for GEM detector

    NASA Astrophysics Data System (ADS)

    Poźniak, Krzysztof T.; Byszuk, A.; Chernyshova, M.; Cieszewski, R.; Czarski, T.; Dominik, W.; Jakubowska, K.; Kasprowicz, G.; Rzadkiewicz, J.; Scholz, M.; Zabolotny, W.

    2013-10-01

    This article presents a fast charge histogramming method for the position-sensitive X-ray GEM detector. The energy-resolved measurements are carried out simultaneously for 256 channels of the GEM detector. The whole process of histogramming is performed in 21 FPGA chips (Spartan-6 series from Xilinx). The results of the histogramming process are stored in an external DDR3 memory. The structure of the electronic measuring equipment and the firmware functionality implemented in the FPGAs are described. Examples of test measurements are presented.
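
    In outline, per-channel charge histogramming of the kind described here reduces to incrementing one bin per extracted pulse, as in the small sketch below. The channel count matches the abstract (256), but the bin count, charge range and synthetic pulse data are assumptions made for illustration; the real system does this in FPGA fabric with DDR3-backed histograms.

        import numpy as np

        N_CHANNELS, N_BINS, Q_MAX = 256, 1024, 4096          # bin count and charge range are assumed

        hist = np.zeros((N_CHANNELS, N_BINS), dtype=np.uint32)

        def accumulate(channel, charge):
            """Add one pulse with integrated charge 'charge' (ADC units) to its channel histogram."""
            b = min(int(charge * N_BINS / Q_MAX), N_BINS - 1)
            hist[channel, b] += 1

        rng = np.random.default_rng(0)
        for ch, q in zip(rng.integers(0, N_CHANNELS, 10000), rng.normal(1800.0, 150.0, 10000)):
            accumulate(int(ch), max(float(q), 0.0))

        print(int(hist.sum()), int(hist[0].argmax()))         # total counts, peak bin of channel 0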

  1. Fast data transmission from serial data acquisition for the GEM detector system

    NASA Astrophysics Data System (ADS)

    Kolasinski, Piotr; Pozniak, Krzysztof T.; Czarski, Tomasz; Byszuk, Adrian; Chernyshova, Maryna; Kasprowicz, Grzegorz; Krawczyk, Rafal D.; Wojenski, Andrzej; Zabolotny, Wojciech

    2015-09-01

    This article proposes a new method of storing data and transferring it to a PC in the X-ray GEM detector system. The whole process is performed by FPGA chips (Spartan-6 series from Xilinx). Compared to previous methods, the new approach allows much more data to be stored in the system. A new, improved implementation of the communication algorithm significantly increases the transfer rate between the system and the PC. On the PC, the data are merged and processed in MATLAB. The structure of the firmware implemented in the FPGAs is described.

  2. Implementation of a Fault Tolerant Control Unit within an FPGA for Space Applications

    DTIC Science & Technology

    2006-12-01

  3. Satellite-Friendly Protocols and Standards

    NASA Astrophysics Data System (ADS)

    Koudelka, O.; Schmidt, M.; Ebert, J.; Schlemmer, H.; Kastner, S.; Riedler, W.

    2002-01-01

    We are currently observing a development unprecedented with other services, the enormous growth of the Internet. Video, voice and data applications can be supported via this network in high quality. Multi-media applications require high bandwidth which may not be available in many areas. When making proper use of the broadcast feature of a communications satellite, the performance of the satellite-based system can compare favourably to terrestrial solutions. Internet applications are in many cases highly asymmetric, making them very well suited to applications using small and inexpensive terminals. Data from one source may be used simultaneously by a large number of users. The Internet protocol suite has become the de-facto standard. But this protocol family in its original form has not been designed to support guaranteed quality of service, a prerequisite for real-time, high quality traffic. The Internet Protocol has to be adapted for the satellite environment, because long roundtrip delays and the error behaviour of the channel could make it inefficient over a GEO satellite. Another requirement is to utilise the satellite bandwidth as efficiently as possible. This can be achieved by adapting the access system to the nature of IP frames, which are variable in length. In the framework of ESA's ARTES project a novel satellite multimedia system was developed which utilises Multi-Frequency TDMA in a meshed network topology. The system supports Quality of Service (QoS) by reserving capacity with different QoS requirements. The system is centrally controlled by a master station with the implementation of a demand assignment (DAMA) system. A lean internal signalling system has been adopted. Network management is based on the SNMP protocol and industry-standard network management platforms, making interfaces to standard accounting and billing systems easy. Modern communication systems will have to be compliant to different standards in a very flexible manner. The developed system is based on a hardware architecture using FPGAs (Field-Programmable Gate Arrays). This provides means to configure the satellite gateway for different standards and to optimise the transmission parameters for varying user traffic, thus increasing the efficiency significantly. The paper describes the flexible system architecture and focuses particularly on the DAMA access scheme and the chosen quality-of-service implementation. Emphasis has been put on the support of IP Version 6. Different standards (e.g. RCS and possible follow-ups) and the possibility to support them are discussed.

  4. Structural and Functional Biomedical Imaging Using Polarization-Based Optical Coherence Tomography

    NASA Astrophysics Data System (ADS)

    Black, Adam J.

    Biomedical imaging has had an enormous impact in medicine and research. There are numerous imaging modalities covering a large range of spatial and temporal scales, penetration depths, and indicators for function and disease. As these imaging technologies mature, the quality of the images they produce increases to resolve finer details with greater contrast at higher speeds, which aids in a faster, more accurate diagnosis in the clinic. In this dissertation, polarization-based optical coherence tomography (OCT) systems are used and developed to image biological structure and function with greater speed, signal-to-noise ratio (SNR) and stability. OCT can image with spatial and temporal resolutions in the micro range. When imaging any sample, feedback is very important to verify the fidelity and desired location on the sample being imaged. To increase frame rates for display as well as data throughput, field-programmable gate arrays (FPGAs) were used with custom algorithms to realize real-time display and streaming output for continuous acquisition of large datasets of swept-source OCT systems. For spectral domain (SD) OCT systems, significant increases in signal-to-noise ratios were achieved from a custom balanced detection (BD) OCT system. The BD system doubled measured signals while reducing the common-mode term. For functional imaging, a real-time directed scanner was introduced to visualize the 3D image of a sample to identify regions of interest prior to recording. Elucidating the characteristics of functional OCT signals with the aid of simulations, novel processing methods were also developed to stabilize samples being imaged and identify possible origins of functional signals being measured. Polarization-sensitive OCT was used to image cardiac tissue before and after clearing to identify the regions of vascular perfusion from a coronary artery. The resulting 3D image provides a visualization of the perfusion boundaries for the tissue that would be damaged from a myocardial infarction, to possibly identify features that lead to fatal cardiac arrhythmias. 3D functional imaging was used to measure functional retinal activity from a light stimulus. In some cases, single-trial responses were possible, measured at the outer segment of the photoreceptor layer. The morphology and time-course of these signals are similar to the intrinsic optical signals reported from phototransduction. Assessing function in the retina could aid in early detection of degenerative diseases of the retina, such as glaucoma and macular degeneration.

  5. Structured Doctoral Education in Hannover - Joint Programme IMPRS-GW and geo-Q RTG

    NASA Astrophysics Data System (ADS)

    Kawazoe, Fumiko; Bruns, Sandra

    2018-02-01

    Two structured doctoral programmes that we have in Hannover, the IMPRS on Gravitational Wave Astronomy and SFB on relativistic geodesy and gravimetry with quantum sensors geo-Q, have not only become major resources for education in each field but have also started to provide substantial synergy to members of both programmes. Our strong crossdisciplinary approach to create a joint programme has received excellent feedback not only from researchers inside the programme but also from various external committee. Building on experience that we have acquired over the last decade, we propose to set up a common doctoral programme within the international gravitational wave astronomy and physics. We envisage that with a common doctoral programme we will create a strong team of young researchers who will carry on building a strong network of third generation gravitational wave detectors and observatories.

  6. Single-Event Effect (SEE) Survey of Advanced Reconfigurable Field Programmable Gate Arrays: NASA Electronic Parts and Packaging (NEPP) Program Office of Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Allen, Gregory

    2011-01-01

    The NEPP Reconfigurable Field-Programmable Gate Array (FPGA) task has been charged to evaluate reconfigurable FPGA technologies for use in space. Under this task, the Xilinx single-event-immune, reconfigurable FPGA (SIRF) XQR5VFX130 device was evaluated for SEE. Additionally, the Altera Stratix-IV and SiliconBlue iCE65 were screened for single-event latchup (SEL).

  7. Field Evaluation of Programmable Thermostats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sachs, O.; Tiefenbeck, V.; Duvier, C.

    2012-12-01

    Prior research suggests that poor programmable thermostat usability may prevent their effective use to save energy. The Fraunhofer team hypothesized that home occupants with high-usability thermostats would be more likely to use them to save energy than people with basic thermostats. In this report, the team discusses results of a project in which it monitored and compared programmable thermostats with basic thermostats in an affordable-housing apartment complex.

  8. A hospital-based child protection programme evaluation instrument: a modified Delphi study.

    PubMed

    Wilson, Denise; Koziol-McLain, Jane; Garrett, Nick; Sharma, Pritika

    2010-08-01

    Refine instrument for auditing hospital-based child abuse and neglect violence intervention programmes prior to field-testing. A modified Delphi study to identify and rate items and domains indicative of an effective and quality child abuse and neglect intervention programme. Experts participated in four Delphi rounds: two surveys, a one-day workshop and the opportunity to comment on the penultimate instrument. New Zealand. Twenty-four experts in the field of care and protection of children. Items with panel agreement ≥85% and mean importance rating ≥4.0 (scale from 1 (not important) to 5 (very important)). There was high-level consensus on items across Rounds 1 and 2 (89% and 85%, respectively). In Round 3 an additional domain (safety and security) was agreed upon and cultural issues, alert systems for children at risk, and collaboration among primary care, community, non-government and government agencies were discussed. The final instrument included nine domains ('policies and procedures', 'safety and security', 'collaboration', 'cultural environment', 'training of providers', 'intervention services', 'documentation', 'evaluation' and 'physical environment') and 64 items. The refined instrument represents the hallmarks of an ideal child abuse and neglect programme given current knowledge and experience. The instrument enables rigorous evaluations of hospital-based child abuse and neglect intervention programmes for quality improvement and benchmarking with other programmes.

  9. Field-Free Programmable Spin Logics via Chirality-Reversible Spin-Orbit Torque Switching.

    PubMed

    Wang, Xiao; Wan, Caihua; Kong, Wenjie; Zhang, Xuan; Xing, Yaowen; Fang, Chi; Tao, Bingshan; Yang, Wenlong; Huang, Li; Wu, Hao; Irfan, Muhammad; Han, Xiufeng

    2018-06-21

    Spin-orbit torque (SOT)-induced magnetization switching exhibits chirality (clockwise or counterclockwise), which offers the prospect of programmable spin-logic devices integrating nonvolatile spintronic memory cells with logic functions. Chirality is usually fixed by an applied or effective magnetic field in reported studies. Herein, utilizing an in-plane magnetic layer that is also switchable by SOT, the chirality of a perpendicular magnetic layer that is exchange-coupled with the in-plane layer can be reversed in a purely electrical way. In a single Hall bar device designed from this multilayer structure, three logic gates including AND, NAND, and NOT are reconfigured, which opens a gateway toward practical programmable spin-logic devices. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Strengthening Indonesia’s Field Epidemiology Training Programme to address International Health Regulations requirements

    PubMed Central

    Samaan, Gina; Santoso, Hari; Kushadiwijaya, Haripurnomo; Juwita, Ratna; Mohadir, Andi; Aditama, Tjandra

    2010-01-01

    Problem: According to the International Health Regulations (IHR), countries need to strengthen core capacity for disease surveillance and response systems. Many countries are establishing or enhancing their field epidemiology training programmes (FETPs) to meet human resource needs but face challenges in sustainability and training quality. Indonesia is facing these challenges, which include limited resources for field training and limited coordination in a newly decentralized health system. Approach: A national FETP workplan was developed based on an evaluation of the existing programme and projected human resource needs. A Ministry of Health Secretariat linking universities, national and international partners was established to oversee revision and implementation of the FETP. Local setting: The FETP is integrated into the curriculum of Indonesian universities and field training is conducted in district and provincial health offices under the coordination of the universities and the FETP Secretariat. Relevant changes: The FETP was included in the Ministry of Health workforce development strategy through governmental decree. Curricula have been enhanced and field placements strengthened to provide trainees with better learning experiences. To improve sustainability of the FETP, links were established with the Indonesian Epidemiologists’ Association, local governments and donors to cultivate future FETP champions and maintain funding. Courses, competitions and discussion forums were established for field supervisors and alumni. These changes have increased the geographic distribution of students, intersectoral and international participation and the quality of student performance. Lessons learnt: The main lesson learnt is that linkages with universities, ministries and international agencies such as the World Health Organization are critical for building a sustainable high-quality programme. The most critical factors were development of trusting relationships and clear definitions of the responsibilities of each stakeholder. PMID:20428389

  11. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library for various computational kernels, each of which adapts FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
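
    The idea of spreading vector elements across a configurable number of lanes, as the FPVC does, can be shown with a short functional model. The lane count and the strided partitioning below are illustrative only; they mimic how a multi-lane unit accumulates partial sums, not the actual FPVC microarchitecture or its register file.

        import numpy as np

        def lane_dot(a, b, lanes=4):
            """Dot product as a multi-lane vector unit would compute it:
            each lane accumulates a strided slice, then the partial sums are reduced."""
            return sum(np.dot(a[l::lanes], b[l::lanes]) for l in range(lanes))

        def lane_matmul(A, B, lanes=4):
            C = np.zeros((A.shape[0], B.shape[1]))
            for i in range(A.shape[0]):
                for j in range(B.shape[1]):
                    C[i, j] = lane_dot(A[i, :], B[:, j], lanes)
            return C

        A, B = np.random.rand(6, 8), np.random.rand(8, 5)
        print(np.allclose(lane_matmul(A, B), A @ B))          # True: same result as a scalar matmul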

  12. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale

    PubMed Central

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2017-01-01

    With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s 16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7 × to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049

  13. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale.

    PubMed

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2016-10-01

    With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems, like Apache Spark and Hadoop, to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.

  14. Using Programmable Calculators to Solve Electrostatics Problems.

    ERIC Educational Resources Information Center

    Yerian, Stephen C.; Denker, Dennis A.

    1985-01-01

    Provides a simple routine which allows first-year physics students to use programmable calculators to solve otherwise complex electrostatic problems. These problems involve finding the electrostatic potential and electric field on the axis of a uniformly charged ring. Modest programming skills are required of students. (DH)
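
    The calculation the students program is the closed-form on-axis result for a uniformly charged ring, V(z) = kQ/sqrt(z^2 + R^2) and E_z(z) = kQz/(z^2 + R^2)^(3/2). A present-day equivalent of the calculator routine might look like the short Python function below (the numerical example values are arbitrary).

        import math

        K = 8.9875517923e9                     # Coulomb constant k, in N*m^2/C^2

        def ring_axis(Q, R, z):
            """Potential V and axial field Ez of a uniformly charged ring, on its axis."""
            V = K * Q / math.hypot(z, R)
            Ez = K * Q * z / (z * z + R * R) ** 1.5
            return V, Ez

        V, Ez = ring_axis(Q=1e-9, R=0.05, z=0.10)
        print(f"V = {V:.1f} V, Ez = {Ez:.1f} V/m")   # about 80.4 V and 643.1 V/m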

  15. Programmable Pulse-Position-Modulation Encoder

    NASA Technical Reports Server (NTRS)

    Zhu, David; Farr, William

    2006-01-01

    A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
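
    The core of any PPM code book is the mapping from a group of data bits to the position of a single pulse within a frame of M slots. The sketch below shows that mapping for a power-of-two M; the state machine, serializer and the specific code books used by the encoder described here are not reproduced, and the frame format is illustrative only.

        def ppm_encode(bits, M=16):
            """Map groups of log2(M) bits onto M-slot PPM frames (one pulse per frame)."""
            k = M.bit_length() - 1                       # bits per symbol, assumes M is a power of two
            frames = []
            for i in range(0, len(bits), k):
                group = bits[i:i + k] + [0] * (k - len(bits[i:i + k]))   # zero-pad the last group
                symbol = int("".join(str(b) for b in group), 2)
                frame = [0] * M
                frame[symbol] = 1                        # pulse position encodes the symbol value
                frames.append(frame)
            return frames

        print(ppm_encode([1, 0, 1, 1, 0, 0, 0, 1], M=16))   # two 16-slot frames, pulses at slots 11 and 1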

  16. Integrated care.

    PubMed

    Warwick-Giles, Lynsey; Checkland, Kath

    2018-03-19

    Purpose: The purpose of this paper is to try to understand how several organisations in one area in England are working together to develop an integrated care programme. Weick's (1995) concept of sensemaking is used as a lens to examine how the organisations are working collaboratively and maintaining the programme. Design/methodology/approach: Qualitative methods included non-participant observations of meetings, interviews with key stakeholders and the collection of documents relating to the programme. These provided wider contextual information about the programme. Comprehensive field notes were taken during observations and analysed alongside interview transcriptions using NVIVO software. Findings: This paper illustrates the importance of the construction of a shared identity across all organisations involved in the programme. Furthermore, the wider policy discourse impacted on how the programme developed and influenced how organisations worked together. Originality/value: The role of leaders from all organisations involved in the programme was of significance to the overall development of the programme and the sustained momentum behind the programme. Leaders were able to generate a "narrative of success" to drive the programme forward. This is of particular relevance to evaluators, highlighting the importance of using multiple methods to allow researchers to probe beneath the surface of programmes to ensure that evidence moves beyond this public narrative.

  17. Field-programmable beam reconfiguring based on digitally-controlled coding metasurface

    NASA Astrophysics Data System (ADS)

    Wan, Xiang; Qi, Mei Qing; Chen, Tian Yi; Cui, Tie Jun

    2016-02-01

    Digital phase shifters have been applied in traditional phased array antennas to realize beam steering. However, the phase shifter deals with the phase of the induced current; hence, it has to be in the path of each element of the antenna array, making phased array antennas very expensive. Metamaterials and/or metasurfaces enable the direct modulation of electromagnetic waves by designing subwavelength structures, which opens a new way to control beam scanning. Here, we present a direct digital mechanism to control the scattered electromagnetic waves using a coding metasurface, in which each unit cell loads a PIN diode to produce the binary coding states “1” and “0”. Through data lines, instant communication is established between the coding metasurface and the internal memory of a field-programmable gate array (FPGA). Thus, we realize the digital modulation of electromagnetic waves, from which we present a field-programmable reflective antenna with good measured performance. The proposed mechanism and functional device have great application potential in new-concept radar and communication systems.

  18. Developing Online Doctoral Programmes

    ERIC Educational Resources Information Center

    Chipere, Ngoni

    2015-01-01

    The objectives of the study were to identify best practices in online doctoral programming and to synthesise these practices into a framework for developing online doctoral programmes. The field of online doctoral studies is nascent and presents challenges for conventional forms of literature review. The literature was therefore reviewed using a…

  19. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task consisting typically of the following design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows modelling a microprocessor not only from the instruction set but also from the architecture description, including pipelining behavior, which allows design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor that is typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we have used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement that a TVP has compared with traditional RISC or PDSP designs.
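
    The inner operation that dominates all three processor variants is the filter-bank multiply-accumulate of the discrete wavelet transform. The sketch below writes one analysis level as explicit MAC loops; the Haar coefficients are placeholders chosen for brevity and are not the 8/8 filters used in the paper.

        def analysis_step(x, h_lo, h_hi):
            """One DWT analysis level written as explicit multiply-accumulate loops
            (circular signal extension, downsampling by two)."""
            n, taps = len(x), len(h_lo)
            lo, hi = [], []
            for i in range(0, n, 2):
                acc_lo = acc_hi = 0.0
                for k in range(taps):                    # the MAC inner loop
                    s = x[(i + k) % n]
                    acc_lo += h_lo[k] * s
                    acc_hi += h_hi[k] * s
                lo.append(acc_lo)
                hi.append(acc_hi)
            return lo, hi

        h_lo = [0.70710678, 0.70710678]                  # Haar low-pass (placeholder coefficients)
        h_hi = [0.70710678, -0.70710678]                 # Haar high-pass
        print(analysis_step([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0], h_lo, h_hi))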

  20. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precise optics, cameras, image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition and computing capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over a RAM (Random Access Memory)-based architecture search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
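
    The order-of-magnitude argument made above rests on the difference between a content-addressed lookup (all stored patterns compared against the key at once, the matching address returned directly) and a RAM-style scan over addresses. The toy comparison below only emulates that behaviour functionally, with a Python dictionary standing in for the CAM and a linear scan standing in for the RAM search; in the actual hardware the CAM comparison happens in parallel in a single cycle.

        import random, time

        random.seed(0)
        patterns = [bytes(random.getrandbits(8) for _ in range(8)) for _ in range(100000)]

        def ram_search(key):
            """RAM-style search: walk addresses until the stored pattern matches the key."""
            for addr, p in enumerate(patterns):
                if p == key:
                    return addr
            return -1

        # CAM-style lookup: the content itself is the key, the match address comes back directly.
        cam = {p: addr for addr, p in enumerate(patterns)}

        key = patterns[-1]
        t0 = time.perf_counter(); ram_search(key); t_ram = time.perf_counter() - t0
        t0 = time.perf_counter(); _ = cam[key];    t_cam = time.perf_counter() - t0
        print(f"linear scan: {t_ram * 1e6:.0f} us, associative lookup: {t_cam * 1e6:.2f} us")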

  1. Compact Low Power DPU for Plasma Instrument LINA on the Russian Luna-Glob Lander

    NASA Astrophysics Data System (ADS)

    Schmidt, Walter; Riihelä, Pekka; Kallio, Esa

    2013-04-01

    The Swedish Institute for Space Physics in Kiruna is building a Lunar Ions and Neutrals Analyzer (LINA) for the Russian Luna-Glob lander mission and its orbiter, to be launched around 2016 [1]. The Finnish Meteorological Institute is responsible for designing and building the central data processing units (DPU) for both instruments. The design details were optimized so that the DPU also serves as a demonstrator for a similar instrument on the Jupiter mission JUICE. To accommodate the short development time originally set and to keep the design between orbiter and lander as similar as possible, the DPU is built around two re-programmable flash-based FPGAs from Actel. One FPGA contains a public-domain 32-bit processor core identical for both lander and orbiter. The other FPGA handles all interfaces to the spacecraft system and the detectors, somewhat different for the two implementations. Monitoring of analog housekeeping data is implemented as an IP core from Stellamar inside the interface FPGA, saving mass, volume and especially power while simplifying the radiation protection design. Since, especially on the lander, data retention before transfer to the orbiter cannot be guaranteed under all conditions, the DPU includes a Flash-PROM containing several software versions and data storage capability. With the memory management implemented inside the interface FPGA, one of the serial links can also be used as a test port to verify the system, to load the initial software into the Flash-PROM and to control the detector hardware directly, without support from the processor and an already developed operating system and software. Implementation and performance details will be presented. Reference: [1] http://www.russianspaceweb.com/luna_glob_lander.html.

  2. Programmable nanometer-scale electrolytic metal deposition and depletion

    DOEpatents

    Lee, James Weifu [Oak Ridge, TN; Greenbaum, Elias [Oak Ridge, TN

    2002-09-10

    A method of nanometer-scale deposition of a metal onto a nanostructure includes the steps of: providing a substrate having thereon at least two electrically conductive nanostructures spaced no more than about 50 .mu.m apart; and depositing metal on at least one of the nanostructures by electric field-directed, programmable, pulsed electrolytic metal deposition. Moreover, a method of nanometer-scale depletion of a metal from a nanostructure includes the steps of providing a substrate having thereon at least two electrically conductive nanostructures spaced no more than about 50 .mu.m apart, at least one of the nanostructures having a metal disposed thereon; and depleting at least a portion of the metal from the nanostructure by electric field-directed, programmable, pulsed electrolytic metal depletion. A bypass circuit enables ultra-finely controlled deposition.

  3. Different elution modes and field programming in gravitational field-flow fractionation. III. Field programming by flow-rate gradient generated by a programmable pump.

    PubMed

    Plocková, J; Chmelík, J

    2001-05-25

    Gravitational field-flow fractionation (GFFF) utilizes the Earth's gravitational field as an external force that causes the settlement of particles towards the channel accumulation wall. Hydrodynamic lift forces oppose this action by elevating particles away from the channel accumulation wall. These two counteracting forces enable modulation of the resulting force field acting on particles in GFFF. In this work, force-field programming based on modulating the magnitude of hydrodynamic lift forces was implemented via changes of flow-rate, which was accomplished by a programmable pump. Several flow-rate gradients (step gradients, linear gradients, parabolic, and combined gradients) were tested and evaluated as tools for optimization of the separation of a silica gel particle mixture. The influence of increasing amount of sample injected on the peak resolution under flow-rate gradient conditions was also investigated. This is the first time that flow-rate gradients have been implemented for programming of the resulting force field acting on particles in GFFF.

  4. A quasi-experimental feasibility study to determine the effect of a systematic treatment programme on the scores of the Nottingham Adjustment Scale of individuals with visual field deficits following stroke.

    PubMed

    Taylor, Lisa; Poland, Fiona; Harrison, Peter; Stephenson, Richard

    2011-01-01

    To evaluate a systematic treatment programme developed by the researcher that targeted aspects of visual functioning affected by visual field deficits following stroke. The study design was a non-equivalent control (conventional) group pretest-posttest quasi-experimental feasibility design, using multisite data collection methods at specified stages. The study was undertaken within three acute hospital settings as outpatient follow-up sessions. Individuals who had visual field deficits three months post stroke were studied. A treatment group received routine occupational therapy and an experimental group received, in addition, a systematic treatment programme. The treatment phase of both groups lasted six weeks. The Nottingham Adjustment Scale, a measure developed specifically for visual impairment, was used as the primary outcome measure. The change in Nottingham Adjustment Scale score was compared between the experimental (n = 7) and conventional (n = 8) treatment groups using the Wilcoxon signed ranks test. The result of Z = -2.028 (P = 0.043) showed that there was a statistically significant difference between the change in Nottingham Adjustment Scale score between both groups. The introduction of the systematic treatment programme resulted in a statistically significant change in the scores of the Nottingham Adjustment Scale.

  5. A modularized pulse programmer for NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Mao, Wenping; Bao, Qingjia; Yang, Liang; Chen, Yiqun; Liu, Chaoyang; Qiu, Jianqing; Ye, Chaohui

    2011-02-01

    A modularized pulse programmer for an NMR spectrometer is described. It consists of a networked PCI-104 single-board computer and a field programmable gate array (FPGA). The PCI-104 is dedicated to translating the pulse sequence elements from the host computer into 48-bit binary words and downloading these words to the FPGA, while the FPGA functions as a sequencer to execute these binary words. High-resolution NMR spectra obtained on a home-built spectrometer with four pulse programmers working concurrently demonstrate the effectiveness of the pulse programmer. Advantages of the module include (1) once designed, it can be duplicated and used to construct a scalable NMR/MRI system with multiple transmitter and receiver channels, (2) it is a totally programmable system in which all specific applications are determined by software, and (3) it provides enough reserve for possible new pulse sequences.
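
    The abstract does not give the layout of the 48-bit sequence words, so the packing below is purely hypothetical: it simply shows the kind of translation the PCI-104 board performs, here splitting each word into a 24-bit duration field and 24 output bits.

        def pack_event(duration_ticks, outputs):
            """Pack one pulse-sequence element into a 48-bit word
            (hypothetical layout: high 24 bits = duration, low 24 bits = output states)."""
            assert 0 <= duration_ticks < 2 ** 24 and 0 <= outputs < 2 ** 24
            return (duration_ticks << 24) | outputs

        def unpack_event(word):
            return word >> 24, word & 0xFFFFFF

        w = pack_event(duration_ticks=5000, outputs=0b1010_0000_0000_0000_0000_0001)
        print(hex(w), unpack_event(w))                    # 0x1388a00001 (5000, 10485761)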

  6. Application of segmented dental panoramic tomography among children: positive effect of continuing education in radiation protection

    PubMed Central

    Waltimo-Sirén, Janna; Laatikainen, Tuula; Haukka, Jari; Ekholm, Marja

    2016-01-01

    Objectives: Dental panoramic tomography is the most frequent examination among 7–12-year olds, according to the Radiation Safety and Nuclear Authority of Finland. At those ages, dental panoramic tomographs (DPTs) are mostly obtained for orthodontic reasons. Children's dose reduction by trimming the field size to the area of interest is important because of their high radiosensitivity. Yet, the majority of DPTs in this age group are still taken by using an adult programme and never by using a segmented programme. The purpose of the present study was to raise the awareness of dental staff with respect to children's radiation safety, to increase the application of segmented and child DPT programmes by further educating the whole dental team and to evaluate the outcome of the educational intervention. Methods: A five-step intervention programme, focusing on DPT field limitation possibilities, was carried out in community-based dental care as a part of mandatory continuing education in radiation protection. Application of segmented and child DPT programmes was thereafter prospectively followed up during a 1-year period and compared with our similar data from 2010 using a logistic regression analysis. Results: Application of the child programme increased by 9% and the segmented programme by 2%, reaching statistical significance (odds ratios 1.68; 95% confidence interval 1.23–2.30; p-value < 0.001). The number of repeated exposures remained at an acceptable level. The segmented DPTs were most frequently taken from the maxillary lateral incisor–canine area. Conclusions: The educational intervention resulted in improvement of radiological practice in respect to radiation safety of children during dental panoramic tomography. Segmented and child DPT programmes can be applied successfully in dental practice for children. PMID:27142159

  7. Application of segmented dental panoramic tomography among children: positive effect of continuing education in radiation protection.

    PubMed

    Pakbaznejad Esmaeili, Elmira; Waltimo-Sirén, Janna; Laatikainen, Tuula; Haukka, Jari; Ekholm, Marja

    2016-05-23

    Dental panoramic tomography is the most frequent examination among 7-12-year olds, according to the Radiation Safety and Nuclear Authority of Finland. At those ages, dental panoramic tomographs (DPTs) are mostly obtained for orthodontic reasons. Children's dose reduction by trimming the field size to the area of interest is important because of their high radiosensitivity. Yet, the majority of DPTs in this age group are still taken by using an adult programme and never by using a segmented programme. The purpose of the present study was to raise the awareness of dental staff with respect to children's radiation safety, to increase the application of segmented and child DPT programmes by further educating the whole dental team and to evaluate the outcome of the educational intervention. A five-step intervention programme, focusing on DPT field limitation possibilities, was carried out in community-based dental care as a part of mandatory continuing education in radiation protection. Application of segmented and child DPT programmes was thereafter prospectively followed up during a 1-year period and compared with our similar data from 2010 using a logistic regression analysis. Application of the child programme increased by 9% and the segmented programme by 2%, reaching statistical significance (odds ratios 1.68; 95% confidence interval 1.23-2.30; p-value < 0.001). The number of repeated exposures remained at an acceptable level. The segmented DPTs were most frequently taken from the maxillary lateral incisor-canine area. The educational intervention resulted in improvement of radiological practice in respect to radiation safety of children during dental panoramic tomography. Segmented and child DPT programmes can be applied successfully in dental practice for children.

  8. The EuroDIVERSITY Programme: Challenges of Biodiversity Science in Europe

    NASA Astrophysics Data System (ADS)

    Jonckheere, I.

    2009-04-01

    In close cooperation with its Member Organisations, the European Science Foundation (ESF) has, since late 2003, launched a series of European Collaborative Research (EUROCORES) Programmes. Their aim is to enable researchers in different European countries to develop cooperation and scientific synergy in areas where European scale and scope are required in a global context. The EUROCORES instrument represents the first large-scale attempt of national research (funding) agencies to act together against fragmentation, asynchronicity and duplication of research (funding) within Europe. Although the scheme covers all scientific fields, there are presently 13 EUROCORES Programmes dealing with cutting-edge science in the fields of Earth, Climate and Environmental Sciences. The aim of the EuroDIVERSITY Programme is to support the emergence of an integrated biodiversity science based on an understanding of the fundamental ecological and social processes that drive biodiversity changes and their impacts on ecosystem functioning and society. Ecological systems across the globe are being threatened or transformed at unprecedented rates from local to global scales due to the ever-increasing human domination of natural ecosystems. In particular, massive biodiversity changes are currently taking place, and this trend is expected to continue over the coming decades, driven by the increasing extension and globalisation of human affairs. The EuroDIVERSITY Programme meets the research need triggered by the increasing human footprint worldwide with a focus on generalisations across particular systems and on the generation and validation of theory relevant to experimental and empirical data. The EuroDIVERSITY Programme tries to bridge the gaps between the natural and social sciences, between research work on terrestrial, freshwater and marine ecosystems, and between research work on plants, animals and micro-organisms. The Programme was launched in April 2006 and includes 10 international, multidisciplinary collaborative research projects, which are expected to contribute to this goal by initiating or strengthening major collaborative research efforts. Some projects deal primarily with microbial diversity (COMIX, METHECO, MiCROSYSTEMS), others investigate the biogeochemistry of grassland and forest ecosystems (BEGIN, BioCycle) and the landscape and community ecology of biodiversity changes (ASSEMBLE, AGRIPOPES, EcoTRADE), and others focus on diversity in freshwater (BIOPOOL, MOLARCH). In 2009, the EuroDIVERSITY Programme will integrate the different European research teams involved through collaborative fieldwork campaigns across Europe, international workshops and conferences, as well as joint peer-reviewed publications. For more information about the Programme and its activities, please check the Programme website: www.esf.org/eurodiversity

  9. Next-Generation A/D Sampler ADS3000+ for VLBI2010

    NASA Technical Reports Server (NTRS)

    Takefuji, Kazuhiro; Takeuchi, Hiroshi; Tsutsumi, Masanori; Koyama, Yasuhiro

    2010-01-01

    A high-speed A/D sampler, called ADS3000+, was developed in 2008; it can sample one analog signal at up to 4 Gbps and deliver the data to a versatile Linux PC. After A/D conversion, the ADS3000+ can perform digital signal processing on the installed FPGAs, such as real-time DBBC (Digital Base Band Conversion) and FIR filtering (for example, simple CW RFI filtering). A 4 Gsps fringe test with the ADS3000+ has been successfully performed. The ADS3000+ will not be used exclusively for VLBI but will also be employed in other applications.
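
    As an informal illustration of the FIR-based CW RFI rejection mentioned above, the following NumPy sketch builds a windowed-sinc band-stop filter and applies it to a toy signal; the tap count, notch edges and test tone are placeholders and are not taken from the ADS3000+ firmware.

```python
import numpy as np

def lowpass_taps(num_taps, cutoff):
    """Windowed-sinc low-pass FIR taps; cutoff normalised to Nyquist (0..1), num_taps odd."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = cutoff * np.sinc(cutoff * n)       # ideal low-pass impulse response
    h *= np.hamming(num_taps)              # window to control ripple
    return h / h.sum()                     # normalise to unity gain at DC

def bandstop_taps(num_taps, f_lo, f_hi):
    """Band-stop FIR: low-pass below f_lo plus spectrally inverted low-pass above f_hi."""
    lp = lowpass_taps(num_taps, f_lo)
    hp = -lowpass_taps(num_taps, f_hi)
    hp[(num_taps - 1) // 2] += 1.0         # delta minus low-pass gives a high-pass
    return lp + hp

# Toy data: white noise plus a strong CW interferer at 0.3 x Nyquist.
t = np.arange(4096)
x = np.random.randn(t.size) + 3.0 * np.sin(np.pi * 0.3 * t)
y = np.convolve(x, bandstop_taps(129, 0.25, 0.35), mode="same")
print(f"std before: {x.std():.2f}, after: {y.std():.2f}")   # interferer is suppressed
```

    On an FPGA the same taps would typically be quantised to fixed point and applied in a pipelined multiply-accumulate chain; the sketch only shows the filtering idea.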

  10. Conflict or Cooperation: The Use of Backchannelling in ELF Negotiations

    ERIC Educational Resources Information Center

    Bjorge, Anne Kari

    2010-01-01

    The international business community relies heavily on English Lingua Franca (ELF) as a shared means of communication, and English business language programmes thus feature prominently within the field of English for Specific Purposes (ESP). Business ESP programmes, however, have little focus on active listening, which previous research has…

  11. Effectiveness of Alcohol Media Literacy Programmes: A Systematic Literature Review

    ERIC Educational Resources Information Center

    Hindmarsh, Chloe S.; Jones, Sandra C.; Kervin, Lisa

    2015-01-01

    Alcohol media literacy is an emerging field that aims to address the link between exposure to alcohol advertising and subsequent expectancies and behaviours for children and adolescents. The design, rigour and results of alcohol media literacy programmes vary considerably, resulting in a number of unanswered questions about effectiveness. To…

  12. In-Service Training Programmes for Inclusive Education in Serbia--Offer and Implementation

    ERIC Educational Resources Information Center

    Matovic, Nataša; Spasenovic, Vera

    2015-01-01

    The initial education and in-service training of all educators, particularly teachers, play a vital role in strengthening competences necessary for implementing inclusive educational practice. This paper analyses offered and implemented in-service training programmes for educators in the field of inclusive education or, more precisely, for working…

  13. Students' Evaluation of Their English Language Learning Experience

    ERIC Educational Resources Information Center

    Maizatulliza, M.; Kiely, R.

    2017-01-01

    In the field of English language teaching and learning, there is a long history of investigating students' performance while they are undergoing specific learning programmes. This research study, however, focused on students' evaluation of their English language learning experience after they have completed their programme. The data were gathered…

  14. Data from: Retrospective analysis of a classical biological control programme

    USDA-ARS?s Scientific Manuscript database

    This database contains the raw data for the publication entitled Naranjo, S.E. 2018. Retrospective analysis of a classical biological control programme. Journal of Applied Ecology https://doi.org/10.1111/1365-2664.13163. Specific data include field-based, partial life table data for immature stage...

  15. Towards an Ethics of "Research Programmes" in Special Education

    ERIC Educational Resources Information Center

    Hausstatter, Rune Sarromaa; Connolley, Steven

    2007-01-01

    This article presents an analysis of the different perspectives and ideologies within the evolving field of special education research. This examination has claimed that Imre Lakatos' notion of "research programmes", which allows for a plurality of directions of research, provides a valuable guide for understanding the development and current…

  16. Making Sense of Learning: Insights from an Experientially-Based Undergraduate Entrepreneurship Programme

    ERIC Educational Resources Information Center

    Blackwood, Tony; Round, Anna; Pugalis, Lee; Hatt, Lucy

    2015-01-01

    Entrepreneurial learning is complex, reflecting the distinctive dispositions of entrepreneurs (including nascent entrepreneurs at an early stage in their entrepreneurial life course). The surge in entrepreneurship education programmes over recent decades and the attendant increase in scholarship have often contributed to this convoluted field.…

  17. Collaborations, Courses, and Competitions: Developing Entrepreneurship Programmes at UCL

    ERIC Educational Resources Information Center

    Chapman, David; Skinner, Jeff

    2006-01-01

    Purpose: This paper aims to detail a range of collaborative programmes developed by University College London (UCL) and the London Business School (LBS). These schemes have been developed to exploit synergies between the two institutions with the aim of promoting entrepreneurship within the fields of science and technology.…

  18. 'Discover, Understand, Implement, and Transfer': Effectiveness of an intervention programme to motivate students for science

    NASA Astrophysics Data System (ADS)

    Schütte, Kerstin; Köller, Olaf

    2015-09-01

    Considerable research has focused on how best to satisfy modern societies' needs for skilled labour in the field of science. The present study evaluated an intervention programme designed to increase secondary school students' motivation to pursue a science career. Students from 3 schools of the highest educational track participated for up to 2 years in the intervention programme, which was implemented as an elective in the school curriculum. Our longitudinal study design for evaluating the effectiveness of the intervention programme included all students at the grade levels involved in the programme with students who did not participate serving as a control group. Mixed-model analyses of variance showed none of the intended effects of the intervention programme on science motivation; latent growth models corroborated these results. When the programme began, students who enrolled in the science elective (n = 92) were already substantially more motivated than their classmates (n = 228). Offering such an intervention programme as an elective did not further increase the participating students' science motivation. It seems worthwhile to carry out intervention programmes with talented students who show (comparatively) little interest in science at the outset rather than with highly motivated students who self-select into the programme.

  19. Protecting drinking water: water quality testing and PHAST in South Africa.

    PubMed

    Breslin, E D

    2000-01-01

    The paper presents an innovative field-based programme that uses a simple total coliform test and the approach of PHAST (Participatory Hygiene And Sanitation Transformation) to help communities explore possible water quality problems and actions that can be taken to address them. The Mvula Trust, a South African water and environmental sanitation NGO, has developed the programme. It is currently being tested throughout South Africa. The paper provides two case studies on its implementation in the field, and suggests ways in which the initiative can be improved in the future.

  20. The Catalonian Expert Patient Programme for Chagas Disease: An Approach to Comprehensive Care Involving Affected Individuals.

    PubMed

    Claveria Guiu, Isabel; Caro Mendivelso, Johanna; Ouaarab Essadek, Hakima; González Mestre, Maria Asunción; Albajar-Viñas, Pedro; Gómez I Prat, Jordi

    2017-02-01

    The Catalonian Expert Patient Programme on Chagas disease is an initiative within the Chronic Disease Programme. It aims to boost patients' responsibility for their own health and to promote self-care. The programme is based on nine sessions conducted by an expert patient. Evaluation focused on: habits and lifestyle/self-care, knowledge of the disease, perception of health, self-esteem, participant satisfaction, and compliance with medical follow-up visits. Eighteen participants initiated the programme and 15 completed it. The participants were Bolivians, and 66.7% of them had been diagnosed with Chagas disease in Spain. All (100%) mentioned that they would participate in this activity again and would recommend it to family and friends. Knowledge about the disease improved after the sessions. The method used in the programme could serve as a key strategy in the field of comprehensive care for individuals with this disease.

  1. An Intensive Programme on Education for Sustainable Development: The Participants' Experience

    ERIC Educational Resources Information Center

    Biasutti, Michele

    2015-01-01

    This paper presents the framework of an intensive programme (IP) organised by UNESCO and addressed to young graduate professionals to prepare them for a career in fields related to sustainability. The aims of the IP were to address participants' environmental awareness and to develop attitudes and skills related to environmental planning and…

  2. Backgrounder: The MAB Programme.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France). Office of Public Information.

    The Man and the Biosphere Programme (MAB) was launched in November 1971 under the auspices of Unesco. Its aim is to help to develop scientific knowledge with a view to the rational management and conservation of natural resources, to train qualified personnel in this field, and to disseminate the knowledge acquired both to the decision-makers and…

  3. Field-Based Learning: The Challenge of Practising Participatory Knowledge

    ERIC Educational Resources Information Center

    Morrissey, John; Clavin, Alma; Reilly, Kathy

    2013-01-01

    In 2009, Geography at National University of Ireland, Galway, launched a new taught master's programme, the MA in Environment, Society and Development. The vision for the programme was to engage students in the analysis and critique of the array of interventionary practices of development and securitization in our contemporary world. A range of…

  4. Marketing University Programmes in China: Innovative Experience in Executive and Professional Education

    ERIC Educational Resources Information Center

    Liu, Ning Rong; Crossley, Michael

    2010-01-01

    This article addresses the limited amount of research in the realm of programme marketing in the Chinese higher education sector. Original field research examines the emergence of marketing principles and strategies with specific reference to the experience of three higher education institutions in China. The development and promotion of executive…

  5. Establishing a Portfolio Assessment Framework for Pre-Service Teachers: A Multiple Perspectives Approach

    ERIC Educational Resources Information Center

    Denney, Maria K.; Grier, Jeanne M.; Buchanan, Merilyn

    2012-01-01

    In the field of initial teacher training, portfolios are widely used to assess pre-service teachers' performance as well as the outcomes of university-based teacher preparation programmes. However, little is known about the explicit design of portfolio assessment mechanisms in teacher preparation programmes. Issues related to the design and…

  6. Alternative Placements in Initial Teacher Education: An Evaluation

    ERIC Educational Resources Information Center

    Purdy, Noel; Gibson, Ken

    2008-01-01

    The paper evaluates a programme of short alternative placements for final-year B.Ed. students in Northern Ireland, which aims to broaden student teachers' experience and develop their transferable skills. The alternative placement programme is set first in an international context of evolving pre-service field placements and then set in a local…

  7. Entrepreneurship for Bioscience Researchers: A Case Study of an Entrepreneurship Programme

    ERIC Educational Resources Information Center

    Heinonen, Jarna; Poikkijoki, Sari-Anne; Vento-Vierikko, Irma

    2007-01-01

    Entrepreneurship is reaching new areas in which the concept of business is more or less unfamiliar and remote. This study focuses on a specific entrepreneurship education programme in the fields of chemistry, physics, information technology and bioinformatics, life sciences and medicine development. The aim is to gain a deeper understanding of the…

  8. Development of e-Career Guidance Programme for Secondary Schools in Akwa Ibom State

    ERIC Educational Resources Information Center

    John, Imitoro E.; Udofia, Nsikak-Abasi; Udoh, Nsisong A.; Anagbogu, Mercy A.

    2016-01-01

    This study developed and field tested an electronic career guidance package for secondary schools, the e-Career Guidance System. The study was an educational research and development study and thus utilised the instrumentation research design. The formative evaluation of the developed programme was carried out using the pretest-posttest…

  9. European Association of Echocardiography: Research Grant Programme.

    PubMed

    Gargani, Luna; Muraru, Denisa; Badano, Luigi P; Lancellotti, Patrizio; Sicari, Rosa

    2012-01-01

    The European Society of Cardiology (ESC) offers a variety of grants/fellowships to help young professionals in the field of cardiological training or research activities throughout Europe. The number of grants has significantly increased in recent years with contributions from the Associations, Working Groups and Councils of the ESC. The European Association of Echocardiography (EAE) is a registered branch of the ESC and actively takes part in this initiative. One of the aims of EAE is to promote excellence in research in cardiovascular ultrasound and other imaging modalities in Europe. Therefore, since 2008, the EAE has offered a Research Grant Programme to help young doctors obtain research experience in a high-standard academic centre (or similar institution oriented to clinical or pre-clinical research) in an ESC member country other than their own. This programme can be seen as promoting geographical mobility as well as cultural exchange and professional practice in the field of cardiovascular imaging. The programme has been very successful so far; therefore, in 2012 the EAE increased its offer to two grants of 25,000 euros per annum each.

  10. [Big differences in leadership and management training within health care services. Leadership and issues concerning cooperation should be more emphasized in basic medical education].

    PubMed

    Hauptig, S; Collste, L; Hammar, M; Calltorp, J; Frischer, J; Haase, H; Lindquist, I; Andersson, C

    1999-12-08

    A recent survey of medical management programmes at universities across the country showed manifest national differences to exist, both quantitative and qualitative. Using a questionnaire, the Swedish Society of Medical Management examined the programmes for physiotherapists, occupational therapists, social workers, nurses and physicians, with respect to such issues as leadership, self-awareness and communication, health economics, and administration. It was concluded that knowledge acquired differs between fields; that physiotherapy programmes tend to have a very didactic approach; that nurses are taught the importance of participation in developmental processes; that doctors are exposed to somewhat the same approach but to a large extent on a voluntary basis; and that social workers obtain good insight into the administrative skills necessary to their work. In the article it is concluded that students would benefit from orientation in the diverse approaches used in the other fields than their own, and that pooling of resources among different programmes might be a more economic alternative to current practice.

  11. ESF EUROCORES Programmes In Geosciences And Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Jonckheere, I. G.

    2007-12-01

    In close cooperation with its Member Organisations, the European Science Foundation (ESF) has, since late 2003, launched a series of European Collaborative Research (EUROCORES) Programmes. Their aim is to enable researchers in different European countries to develop cooperation and scientific synergy in areas where European scale and scope are required in a global context. The EUROCORES Scheme provides an open, flexible and transparent framework that allows national science funding and science performing agencies to join forces to support excellent European-led research, following a selection among many science-driven suggestions for new Programme themes submitted by the scientific community. The EUROCORES instrument represents the first large-scale attempt of national research (funding) agencies to act together against fragmentation, asynchronicity and duplication of research (funding) within Europe. There are presently 7 EUROCORES Programmes specifically dealing with cutting-edge science in the fields of Earth, Climate and Environmental Sciences. The EUROCORES Programmes consist of a number of international, multidisciplinary collaborative research projects running for 3-4 years, selected through independent peer review. Under the overall responsibility of the participating funding agencies, those projects are coordinated and networked together through the scientific guidance of a Scientific Committee, with the support of a Programme Coordinator, responsible at ESF for providing planning, logistics, and the integration and dissemination of science. Strong links with other major international programmes and initiatives worldwide are sought. In this framework, linkage to IYPE would be of major interest for the scientific communities involved. Each Programme mobilises 5 to 13 million Euros in direct science funding from 9 to 27 national agencies from 8 to 20 countries. Additional funding for coordination, networking and dissemination is allocated by the ESF through these distinctive research initiatives, to build on the national research efforts and contribute to capacity building, typically supporting about 15-20 post-doc positions and/or PhD studentships funded nationally within each Programme. Typical networking activities are topical workshops, open sessions in larger conferences, Programme conferences, (summer/winter) schools, and exchange visits across projects or programmes. Overall, EUROCORES Programmes are supported by more than 60 national agencies from 30 countries and by the European Science Foundation (ESF) with support from the European Commission, DG Research (Sixth Framework Programme, contract ERAS-CT-2003-980409). In the framework of AGU, a series of current EUROCORES Programmes in the field of Geosciences and Environmental Sciences are presented (e.g., EuroDIVERSITY, EuroDEEP, EUROMARGINS, EuroCLIMATE, and EuroMinScI).

  12. DMD-based programmable wide field spectrograph for Earth observation

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frédéric; Lanzoni, Patrick; Liotard, Arnaud; Viard, Thierry; Costes, Vincent; Hébert, Philippe-Jean

    2015-03-01

    In Earth Observation, Universe Observation and Planet Exploration, scientific return could be optimized in future missions using MOEMS devices. In Earth Observation, we propose an innovative reconfigurable instrument, a programmable wide-field spectrograph where both the FOV and the spectrum could be tailored thanks to a 2D micromirror array (MMA). For a linear 1D field of view (FOV), the principle is to use an MMA to select the wavelengths by acting on intensity. This component is placed in the focal plane of a first grating. On the MMA surface, the spatial dimension is along one side of the device and, for each spatial point, its spectrum is displayed along the perpendicular direction: each spatial and spectral feature of the 1D FOV is then fully adjustable dynamically and/or programmable. A second stage with an identical grating recomposes the beam after wavelength selection, leading to an output tailored 1D image. A mock-up has been designed, fabricated and tested. The micromirror array is the largest DMD, in a 2048 x 1080 mirror format, with a pitch of 13.68 μm. A synthetic linear FOV is generated and typical images have been recorded at the output focal plane of the instrument. By tailoring the DMD, we could successfully modify each pixel of the input image: for example, it is possible to remove bright objects or, for each spatial pixel, modify the spectral signature. The very promising results obtained on the mock-up of the programmable wide-field spectrograph reveal the efficiency of this new instrument concept for Earth Observation.
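
    To make the mask-programming idea concrete — one axis of the micromirror array maps to the spatial pixels of the 1D FOV and the other to wavelength — here is a small NumPy sketch that turns a desired per-pixel spectral transmission into a binary mirror pattern. The array dimensions echo the 2048 x 1080 format quoted above, but the helper names and the example selections are purely illustrative.

```python
import numpy as np

N_SPATIAL, N_SPECTRAL = 1080, 2048       # rows = spatial pixels, columns = wavelength bins

def build_mirror_mask(transmission):
    """Return a boolean DMD pattern: True = mirror tilted to pass that (pixel, wavelength)."""
    assert transmission.shape == (N_SPATIAL, N_SPECTRAL)
    return transmission > 0.5             # simple binary thresholding of the desired weights

# Example: pass the full spectrum everywhere, but blank a bright object on
# spatial rows 400-450 and carve a notch around spectral bins 900-950 for all pixels.
desired = np.ones((N_SPATIAL, N_SPECTRAL))
desired[400:451, :] = 0.0                 # remove a bright source from the FOV
desired[:, 900:951] = 0.0                 # suppress one spectral band for every pixel

mask = build_mirror_mask(desired)
print(mask.shape, mask.mean())            # fraction of mirrors left in the "on" state
```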

  13. Lessons from the evaluation of the UK's NHS R&D Implementation Methods Programme

    PubMed Central

    Soper, Bryony; Hanney, Stephen R

    2007-01-01

    Background: Concern about the effective use of research was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994, an advisory group was established to identify research priorities in research implementation. The Implementation Methods Programme (IMP) flowed from this, and its commissioning group funded 36 projects. In 2000 responsibility for the programme passed to the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D, which asked the Health Economics Research Group (HERG), Brunel University, to conduct an evaluation in 2002. By then most projects had been completed. This evaluation was intended to cover: the quality of outputs, lessons to be learnt about the communication strategy and the commissioning process, and the benefits from the projects. Methods: We adopted a wide range of quantitative and qualitative methods. They included: documentary analysis, interviews with key actors, questionnaires to the funded lead researchers, questionnaires to potential users, and desk analysis. Results: Quantitative assessment of outputs and dissemination revealed that the IMP funded useful research projects, some of which had considerable impact against the various categories in the HERG payback model, such as publications, further research, research training, impact on health policy, and clinical practice. Qualitative findings from interviews with advisory and commissioning group members indicated that when the IMP was established, implementation research was a relatively unexplored field. This was reflected in the understanding brought to their roles by members of the advisory and commissioning groups, in the way priorities for research were chosen and developed, and in how the research projects were commissioned. The ideological and methodological debates associated with these decisions have continued among those working in this field. The need for an effective communication strategy for the programme as a whole was particularly important. However, such a strategy was never developed, making it difficult to establish the general influence of the IMP as a programme. Conclusion: Our findings about the impact of the work funded, and the difficulties faced by those developing the IMP, have implications for the development of strategic programmes of research in general, as well as for the development of more effective research in this field. PMID:17309803

  14. Barriers to community case management of malaria in Saraya, Senegal: training, and supply-chains.

    PubMed

    Blanas, Demetri A; Ndiaye, Youssoupha; Nichols, Kim; Jensen, Andrew; Siddiqui, Ammar; Hennig, Nils

    2013-03-14

    Health workers in sub-Saharan Africa can now diagnose and treat malaria in the field, using rapid diagnostic tests and artemisinin-based combination therapy in areas without microscopy and with widespread resistance to previously effective drugs. This study evaluates communities' perceptions of a new community case management of malaria programme in the district of Saraya, south-eastern Senegal, the effectiveness of lay health worker trainings, and the availability of rapid diagnostic tests and artemisinin-based combination therapy in the field. The study employed qualitative and quantitative methods including focus groups with villagers, and pre- and post-training questionnaires with lay health workers. Communities approved of the community case management programme, but expressed concern about other general barriers to care, particularly transportation challenges. Most lay health workers acquired important skills, but a sizeable minority did not understand the rapid diagnostic test algorithm and were not able to correctly prescribe artemisinin-based combination therapy soon after the training. Further, few women lay health workers participated in the programme. Finally, the study identified stock-outs of rapid tests and anti-malaria medication products in over half of the programme sites two months after the start of the programme, thought to be due to a regional shortage. This study identified barriers to implementation of the community case management of malaria programme in Saraya that include lay health worker training, low numbers of women participants, and generalized stock-outs. These barriers warrant investigation into possible solutions of relevance to community case management generally.

  15. Farmer and Veterinarian Attitudes towards the Bovine Tuberculosis Eradication Programme in Spain: What Is Going on in the Field?

    PubMed

    Ciaravino, Giovanna; Ibarra, Patricia; Casal, Ester; Lopez, Sergi; Espluga, Josep; Casal, Jordi; Napp, Sebastian; Allepuz, Alberto

    2017-01-01

    The effectiveness of health interventions against bovine tuberculosis (bTB) is influenced by several "non-biological" factors that may hamper bTB detection and control. Although the engagement of stakeholders is a key factor for the eradication programme's success, social factors have often been ignored in the control programmes of animal diseases, especially in developed countries. In this study, we used a qualitative approach to investigate the perceptions, opinions, attitudes, and beliefs of farmers and veterinarians that may influence the effectiveness of the Spanish bTB eradication programme. The study was carried out in two phases. First, 13 key representatives of different groups involved in the programme were interviewed through exploratory interviews to identify the most relevant themes circulating in the population. Interviews focused on strong and weak points of the programme; reasons for failure to achieve eradication; benefits of being disease free; future perspectives; and proposed changes to the programme. Based on these results, a thematic guide was developed and detailed information was gained through face-to-face in-depth interviews conducted on a purposive sample of 39 farmers and veterinarians. Data were analysed following an ethnographic methodology. The main results suggested that the bTB programme is perceived as a law-enforcement duty, that some stakeholders lack adequate motivation, and that a general feeling of distrust has arisen. The complexity of bTB epidemiology, combined with gaps in knowledge and weak communication among stakeholders, contributed to disbelief, which in turn generated different kinds of guesses and interpretations. Low reliability of the routine skin test for bTB screening was reported, and confidence in the interpretation of test results was linked to the skills and experience of public and private veterinarians in the field. Lack of training for farmers and the pressure faced by veterinarians during field activities also emerged. Few benefits of being bTB free were perceived, and comparative grievances concerning wildlife and other domestic reservoirs, sector-specific legislation for bullfighting farms, and the absence of specific health legislation for game-hunting farms were reported. Understanding the reasons for demotivation and scepticism may help institutions to ensure stakeholders' collaboration and increase the acceptability of control measures, leading to an earlier achievement of eradication.

  16. Canaries in the coal mine: Interpersonal violence, gang violence, and violent extremism through a public health prevention lens.

    PubMed

    Eisenman, David P; Flavahan, Louise

    2017-08-01

    This paper asks what programmes and policies for preventing violent extremism (also called 'countering violent extremism', or CVE) can learn from the public health violence prevention field. The general answer is that addressing violent extremism within the wider domain of public health violence prevention connects the effort to a relevant field of research, evidence-based policy and programming, and a broader population reach. This answer is reached by examining conceptual alignments between the two fields at both the case-level and the theoretical level. To address extremist violence within the wider reach of violence prevention, having a shared model is seen as a first step. The World Health Organization uses the social-ecological framework for assessing the risk and protective factors for violence and developing effective public-health based programmes. This study illustrates how this model has been used for gang violence prevention and explores overlaps between gang violence prevention and preventing violent extremism. Finally, it provides policy and programme recommendations to align CVE with public health violence prevention.

  17. Programmable Colored Illumination Microscopy (PCIM): A practical and flexible optical staining approach for microscopic contrast enhancement

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Feng, Shijie; Hu, Yan; Chen, Qian

    2016-03-01

    Programmable colored illumination microscopy (PCIM) has been proposed as a flexible optical staining technique for microscopic contrast enhancement. In this method, we replace the condenser diaphragm of a conventional microscope with a programmable thin-film-transistor liquid crystal display (TFT-LCD). By displaying different patterns on the LCD, numerous established imaging modalities can be realized, such as bright field, dark field, phase contrast, oblique illumination, and Rheinberg illumination, which conventionally rely on intricate alterations in the respective microscope setups. Furthermore, the ease of modulating both the color and the intensity distribution at the aperture of the condenser opens the possibility to combine multiple microscopic techniques, or even to realize completely new methods for optical color contrast staining, such as iridescent dark-field and iridescent phase-contrast imaging. The versatility and effectiveness of PCIM are demonstrated by imaging several transparent colorless specimens, such as unstained lung cancer cells, diatoms, textile fibers, and a cryosection of mouse kidney. Finally, the potential of PCIM for RGB-splitting imaging of stained samples is also explored by imaging stained red blood cells and a histological section.
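
    As a toy counterpart to the aperture patterns described above, the following NumPy sketch generates dark-field and Rheinberg-style condenser patterns as RGB images of the kind that could be shown on such an LCD; the resolution, radii and colours are arbitrary placeholders rather than values from the paper.

```python
import numpy as np

H, W = 480, 640                               # LCD resolution (placeholder values)

def radial_map(h, w):
    """Distance of each LCD pixel from the optical axis, normalised to [0, 1]."""
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - w / 2, y - h / 2)
    return r / r.max()

def rheinberg_pattern(inner=0.4, outer=0.7,
                      centre_rgb=(0, 0, 255), ring_rgb=(255, 128, 0)):
    """Coloured central disc (background tint) plus a differently coloured annulus."""
    r = radial_map(H, W)
    img = np.zeros((H, W, 3), dtype=np.uint8)
    img[r <= inner] = centre_rgb              # direct light -> background colour
    img[(r > inner) & (r <= outer)] = ring_rgb  # oblique light -> specimen colour
    return img

# A white annulus on a black centre approximates a classic dark-field stop.
dark_field = rheinberg_pattern(centre_rgb=(0, 0, 0), ring_rgb=(255, 255, 255))
print(dark_field.shape)                       # ready to be displayed on the condenser LCD
```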

  18. International education is a broken field: Can ubuntu education bring solutions?

    NASA Astrophysics Data System (ADS)

    Piper, Benjamin

    2016-02-01

    Ubuntu is an African philosophy of human kindness; applying it in the Global South would fundamentally alter the design of the education sector. This essay argues, however, that the field of international educational development is not, in fact, structured to support an education influenced by ubuntu ideals. Specifically, the educational development milieu includes donors, implementers and academicians who do not sufficiently question the power dynamics which underpin education development. This creates a field where the power imbalances between donors and host governments are not interrogated, where development workers place too much faith in their own knowledge rather than that of local education experts, and where development practitioners rarely appreciate the privilege of working in countries which are not their own. An ubuntu education would alter the educational development field in myriad critical ways, a few of which are suggested in this essay. Educational development programmes in universities and intake programmes for implementers and donors should teach officers humility, appreciating existing local talent and expertise. Donor programmes should incentivise reflective practice which formally embeds appreciation for local culture and expertise, thereby supporting structures which help educational development experts to review their metacognitive processes. The field should also dramatically increase the numbers of local, minority and female educational development practitioners and provide more avenues for advancement for those groups. These are activities which are critical to supporting the education development field, but require a fundamental change of attitude by practitioners to ensure the right kind of relationships between the West and the Global South.

  19. Comparison of Octopus Semi-Automated Kinetic Perimetry and Humphrey Peripheral Static Perimetry in Neuro-Ophthalmic Cases

    PubMed Central

    Rowe, Fiona J.; Noonan, Carmel; Manuel, Melanie

    2013-01-01

    Aim. To compare semikinetic perimetry (SKP) on the Octopus 900 perimeter with a peripheral static programme on the Humphrey automated perimeter. Methods. Prospective cross-sectional study comparing the Humphrey full field (FF) 120 two-zone programme with a screening protocol for SKP on the Octopus perimeter. Results were independently graded for presence/absence of a field defect plus the type and location of the defect. Results. 64 patients (113 eyes) underwent dual perimetry assessment. Mean duration of assessment was 4.54 ± 0.18 minutes for SKP and 6.17 ± 0.12 minutes for FF120 (P = 0.0001). 80% of results were correctly matched for normal or abnormal visual fields using the I4e target versus FF120, and 73.5% were correctly matched using the I2e target versus FF120. When comparing Octopus results with combined I4e and I2e isopters to the FF120 result, a match for normal or abnormal fields was recorded in 87%. Conclusions. Humphrey perimetry test duration was generally longer than Octopus SKP. In the absence of kinetic perimetry, peripheral static suprathreshold programme options such as FF120 may be useful for detection of visual field defects. However, statokinetic dissociation may occur. Octopus SKP utilising both I4e and I2e targets provides detailed information on both the defect depth and size and may provide a more representative view of the actual visual field defect. PMID:24558605

  20. Information barriers and social stratification in higher education: evidence from a field experiment.

    PubMed

    Abbiati, Giovanni; Argentin, Gianluca; Barone, Carlo; Schizzerotto, Antonio

    2017-11-29

    Our contribution assesses the role of information barriers for patterns of participation in Higher Education (HE) and the related social inequalities. For this purpose, we developed a large-scale clustered randomised experiment involving over 9,000 high school seniors from 62 Italian schools. We designed a counseling intervention to correct student misperceptions of the profitability of HE, that is, the costs, economic returns and chances of success of investments in different tertiary programs. We employed a longitudinal survey to test whether treated students' educational trajectories evolved differently relative to a control group. We find that, overall, treated students enrolled less often in less remunerative fields of study in favour of postsecondary vocational programmes. Most importantly, this effect varied substantially by parental social class and level of education. The shift towards vocational programmes was mainly due to the offspring of low-educated parents; in contrast, children of tertiary graduates increased their participation in more rewarding university fields. Similarly, the redistribution from weak fields to vocational programmes mainly involved the children of the petty bourgeoisie and the working class, while upper class students invested in more rewarding university fields. We argue that the status-maintenance model proposed by Breen and Goldthorpe can explain these socially differentiated treatment effects. Overall, our results challenge the claim that student misperceptions contribute to horizontal inequalities in access to HE. © London School of Economics and Political Science 2017.

  1. [The SGO Health Research Promotion Program. XIII. Evaluation of the section 'Addiction Research'].

    PubMed

    van Rees-Wortelboer, M M

    1999-01-02

    As a part of the SGO Health Research Promotion Programme a research programme on addiction research was realized. Aim of the programme was to strengthen and concentrate the Dutch research into addiction. Within the Amsterdam Institute for Addiction Research (AIAR), a structural collaboration between the Jellinek Treatment Centre for Addiction, the University of Amsterdam and the Academic Hospital of the University of Amsterdam, strategic research programmes were developed on the borderland of addiction and psychiatry, notably 'Clinical epidemiology addiction' and 'Developmental disorders, addiction and psychotraumas'. The institution of a co-ordinating platform of research groups conducting socio-epidemiological addiction research improved the co-ordination of research lines in this field.

  2. Association of strategic management with vaccination in the terms of globalization.

    PubMed

    Rabrenovic, Mihajlo; Cukanovic Karavidic, Marija; Stosic, Ivana

    2018-04-01

    Globalization is having an ever-growing impact on the field of vaccine production and distribution in the world and domestically. In this article we examine the impact of taking a strategic approach to vaccination programmes by all the relevant actors: WHO, UNICEF, national immunization programmes, and vaccine manufacturers and distributors. The review of the relevant literature indicates that there are commonalities among the worldwide vaccination programmes. A comparative analysis of various vaccination strategies recommended by WHO and the immunization calendars of certain European countries is made, as well as an analysis of the Serbian vaccination programme. New and more expensive vaccines will continue to appear on the market in increasingly short periods of time.

  3. Feasibility of an experiential community garden and nutrition programme for youth living in public housing.

    PubMed

    Grier, Karissa; Hill, Jennie L; Reese, Felicia; Covington, Constance; Bennette, Franchennette; MacAuley, Lorien; Zoellner, Jamie

    2015-10-01

    Few published community garden studies have focused on low socio-economic youth living in public housing or used a community-based participatory research approach in conjunction with youth-focused community garden programmes. The objective of the present study was to evaluate the feasibility (i.e. demand, acceptability, implementation and limited-effectiveness testing) of a 10-week experiential theory-based gardening and nutrition education programme targeting youth living in public housing. In this mixed-methods feasibility study, demand and acceptability were measured using a combination of pre- and post-programme surveys and interviews. Implementation was measured via field notes and attendance. Limited-effectiveness was measured quantitatively using a pre-post design and repeated-measures ANOVA tests. Two public housing sites in the Dan River Region of south central Virginia, USA. Forty-three youth (primarily African American), twenty-five parents and two site leaders. The positive demand and acceptability findings indicate the high potential of the programme to be used and be suitable for the youth, parents and site leaders. Field notes revealed numerous implementation facilitators and barriers. Youth weekly attendance averaged 4·6 of 10 sessions. Significant improvements (P<0·05) were found for some (e.g. fruit and vegetable asking self-efficacy, overall gardening knowledge, knowledge of MyPlate recommendations), but not all limited-effectiveness measures (e.g. willingness to try fruits and vegetables, fruit and vegetable eating self-efficacy). This community-based participatory research study demonstrates numerous factors that supported and threatened the feasibility of a gardening and nutrition programme targeting youth in public housing. Lessons learned are being used to adapt and strengthen the programme for future efforts targeting fruit and vegetable behaviours.

  4. High-performance reconfigurable coincidence counting unit based on a field programmable gate array.

    PubMed

    Park, Byung Kwon; Kim, Yong-Su; Kwon, Osung; Han, Sang-Wook; Moon, Sung

    2015-05-20

    We present a high-performance reconfigurable coincidence counting unit (CCU) using a low-end field programmable gate array (FPGA) and peripheral circuits. Because of the flexibility guaranteed by the FPGA program, we can easily change system parameters, such as internal input delays, coincidence configurations, and the coincidence time window. In spite of a low-cost implementation, the proposed CCU architecture outperforms previous ones in many aspects: it has 8 logic inputs and 4 coincidence outputs that can measure up to eight-fold coincidences. The minimum coincidence time window and the maximum input frequency are 0.47 ns and 163 MHz, respectively. The CCU will be useful in various experimental research areas, including the field of quantum optics and quantum information.
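
    The coincidence logic itself is easy to state in software, even though the CCU implements it in FPGA hardware; below is a hedged Python analogue that counts two-fold coincidences between two sorted timestamp streams within a fixed window (the timestamps and the window value are invented for illustration, not taken from the paper).

```python
def count_coincidences(ts_a, ts_b, window):
    """Count pairs (a, b) with |a - b| <= window, assuming both timestamp lists are sorted."""
    count, j = 0, 0
    for a in ts_a:
        # Advance j until ts_b[j] is no earlier than the window around a.
        while j < len(ts_b) and ts_b[j] < a - window:
            j += 1
        k = j
        while k < len(ts_b) and ts_b[k] <= a + window:
            count += 1
            k += 1
    return count

# Timestamps in nanoseconds; a 0.5 ns window loosely mirrors the sub-ns windows quoted above.
det_a = [10.0, 25.3, 40.1, 77.8]
det_b = [10.2, 26.9, 40.4, 90.0]
print(count_coincidences(det_a, det_b, window=0.5))   # -> 2
```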

  5. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC. Targeting the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array, randomly outputting 256 channels of simulation signals. The signal is generated by a true random number generator whose source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random numbers are uniformly distributed in the histogram, and the whole system has high reliability.
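
    The uniformity claim ("uniform in histogram") is the kind of property one can verify offline once random bytes have been read back from the board; a minimal sketch follows, using NumPy's pseudo-random generator only as a stand-in for the captured TRNG data, and the chi-square figure is indicative rather than a formal test.

```python
import numpy as np

def uniformity_chi2(samples, bins=256):
    """Chi-square statistic of a byte stream against a flat histogram over `bins` values."""
    hist, _ = np.histogram(samples, bins=bins, range=(0, bins))
    expected = samples.size / bins
    return ((hist - expected) ** 2 / expected).sum()

# Stand-in for bytes captured from the hardware TRNG (placeholder data only).
data = np.random.default_rng(0).integers(0, 256, size=1_000_000)
chi2 = uniformity_chi2(data)
print(chi2)   # with 255 degrees of freedom, values near ~255 suggest a uniform histogram
```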

  6. Reprogrammable field programmable gate array with integrated system for mitigating effects of single event upsets

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2010-01-01

    An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.
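
    The abstract describes periodically reloading parts of the configuration memory from a stored copy (often called configuration scrubbing); the loop below is a purely schematic software-side sketch of that idea, in which `read_frame`, `write_frame`, the frame count and the golden image are hypothetical stand-ins rather than any real device API.

```python
import time

FRAME_COUNT = 1024          # hypothetical number of configuration frames
SCRUB_PERIOD_S = 0.010      # visit one frame every 10 ms (illustrative value)

def scrub_loop(golden, read_frame, write_frame):
    """Cycle through configuration frames, repairing any that differ from the golden copy."""
    frame = 0
    while True:
        expected = golden[frame]
        if read_frame(frame) != expected:   # readback check before rewriting
            write_frame(frame, expected)    # repair the frame hit by an SEU
        frame = (frame + 1) % FRAME_COUNT
        time.sleep(SCRUB_PERIOD_S)
```

    A common variant is "blind" scrubbing, which rewrites every frame unconditionally instead of reading it back first; which approach the patented system uses is not stated in the abstract.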

  7. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC. Targeting the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array, randomly outputting 256 channels of simulation signals. The signal is generated by a true random number generator whose source of randomness originates from the timing jitter in ring oscillators. The experimental results show that the random numbers are uniformly distributed in the histogram, and the whole system has high reliability.

  8. Field programmable gate array-assigned complex-valued computation and its limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com; Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien; Zwick, Wolfgang

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high-performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
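
    A comparison of the sort described — fixed-point complex arithmetic against a 64-bit floating-point reference — can be mimicked in a few lines; the Q-format, operand values and rounding scheme below are arbitrary choices for illustration, not those used in the paper.

```python
import numpy as np

FRAC_BITS = 14                              # Q2.14-style scaling (illustrative)
SCALE = 1 << FRAC_BITS

def to_fixed(z):
    """Quantise a complex number to scaled-integer real/imaginary parts."""
    return complex(round(z.real * SCALE), round(z.imag * SCALE))

def fixed_mul(a, b):
    """Complex multiply on scaled integers, rescaling the product back once."""
    re = a.real * b.real - a.imag * b.imag
    im = a.real * b.imag + a.imag * b.real
    return complex(round(re / SCALE), round(im / SCALE))

rng = np.random.default_rng(1)
z1 = rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)
z2 = rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)

ref = z1 * z2                               # 64-bit floating-point reference
fx = fixed_mul(to_fixed(z1), to_fixed(z2))
approx = complex(fx.real / SCALE, fx.imag / SCALE)
print(abs(approx - ref) / abs(ref))         # relative error of the fixed-point result
```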

  9. A control system based on field programmable gate array for papermaking sewage treatment

    NASA Astrophysics Data System (ADS)

    Zhang, Zi Sheng; Xie, Chang; Qing Xiong, Yan; Liu, Zhi Qiang; Li, Qing

    2013-03-01

    A sewage treatment control system is designed to improve the efficiency of a papermaking wastewater treatment system. The automation control system is based on a Field Programmable Gate Array (FPGA), coded in the Very High Speed Integrated Circuit Hardware Description Language (VHDL), and compiled and simulated with Quartus. In order to ensure the stability of the data used in the FPGA, the data are collected through temperature sensors, a water level sensor and an online pH measurement system. The automatic control system is more sensitive, and both the treatment efficiency and processing power are increased. This work provides a new method for sewage treatment control.
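
    As a loose, software-level illustration of what such a controller's decision logic might look like, the sketch below maps one set of sensor readings to actuator commands; the set-points, sensor names and actions are invented for illustration and do not come from the paper, whose actual logic is implemented in VHDL on the FPGA.

```python
# Hypothetical threshold-based control rules; all values and names are placeholders.
SETPOINTS = {"ph_min": 6.5, "ph_max": 8.5, "level_max_cm": 180.0, "temp_max_c": 40.0}

def control_step(ph, level_cm, temp_c):
    """Map one set of sensor readings to actuator commands."""
    cmds = {"dosing_pump": "off", "drain_valve": "closed", "aerator": "on"}
    if ph < SETPOINTS["ph_min"]:
        cmds["dosing_pump"] = "alkali"      # raise pH
    elif ph > SETPOINTS["ph_max"]:
        cmds["dosing_pump"] = "acid"        # lower pH
    if level_cm > SETPOINTS["level_max_cm"]:
        cmds["drain_valve"] = "open"        # relieve the tank
    if temp_c > SETPOINTS["temp_max_c"]:
        cmds["aerator"] = "off"             # avoid overheating the blower
    return cmds

print(control_step(ph=6.1, level_cm=185.0, temp_c=32.0))
```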

  10. Radiation Hardened Electronics for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Watson, Michael D.

    2007-01-01

    The Radiation Hardened Electronics for Space Environments (RHESE) project consists of a series of tasks designed to develop and mature a broad spectrum of radiation hardened and low temperature electronics technologies. Three approaches are being taken to address radiation hardening: improved material hardness, design techniques to improve radiation tolerance, and software methods to improve radiation tolerance. Within these approaches various technology products are being addressed including Field Programmable Gate Arrays (FPGA), Field Programmable Analog Arrays (FPAA), MEMS Serial Processors, Reconfigurable Processors, and Parallel Processors. In addition to radiation hardening, low temperature extremes are addressed with a focus on material and design approaches.

  11. Self-Adaptive System based on Field Programmable Gate Array for Extreme Temperature Electronics

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Zebulum, Ricardo; Rajeshuni, Ramesham; Stoica, Adrian; Katkoori, Srinivas; Graves, Sharon; Novak, Frank; Antill, Charles

    2006-01-01

    In this work, we report the implementation of a self-adaptive system using a field programmable gate array (FPGA) and data converters. The self-adaptive system can autonomously recover the lost functionality of a reconfigurable analog array (RAA) integrated circuit (IC) [3]. Both the RAA IC and the self-adaptive system are operating in extreme temperatures (from 120 C down to -180 C). The RAA IC consists of reconfigurable analog blocks interconnected by several switches and programmable by bias voltages. It implements filters/amplifiers with bandwidth up to 20 MHz. The self-adaptive system controls the RAA IC and is realized on Commercial-Off-The-Shelf (COTS) parts. It implements a basic compensation algorithm that corrects a RAA IC in less than a few milliseconds. Experimental results for the cold temperature environment (down to -180 C) demonstrate the feasibility of this approach.

  12. A programmable controller based on CAN field bus embedded microprocessor and FPGA

    NASA Astrophysics Data System (ADS)

    Cai, Qizhong; Guo, Yifeng; Chen, Wenhei; Wang, Mingtao

    2008-10-01

    One kind of new programmable controller (PLC) is introduced in this paper. An advanced embedded microprocessor and a Field-Programmable Gate Array (FPGA) device are applied in the PLC system. The PLC system structure is presented: it uses a 32-bit Advanced RISC Machines (ARM) embedded microprocessor as the control core, an FPGA as the control-arithmetic coprocessor, and a CAN bus as the data communication protocol connecting the host controller and its various extension modules. The circuits and working principle are described in detail, including the I/O interface circuit between the ARM and the FPGA and the interface circuit between the ARM and the FPGA coprocessor. Furthermore, the interface circuit diagrams between the various modules are given. In addition, it is described how the ladder-diagram program controls the transfer of information to the control-arithmetic part in the FPGA coprocessor. Through nearly two months of operation, the PLC has met the basic design requirements.

  13. A flexible 32-channel time-to-digital converter implemented in a Xilinx Zynq-7000 field programmable gate array

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Kuang, Jie; Liu, Chong; Cao, Qiang; Li, Deng

    2017-03-01

    A high-performance multi-channel time-to-digital converter (TDC) is implemented in a Xilinx Zynq-7000 field programmable gate array (FPGA). It can be flexibly configured as either 32 TDC channels with 9.9 ps time-interval RMS precision, 16 TDC channels with 6.9 ps RMS precision, or 8 TDC channels with 5.8 ps RMS precision. All TDCs have a 380 MSamples/s measurement throughput and a 2.63 ns measurement dead time. The performance consistency and temperature dependence of the TDC channels are also evaluated. Because the Zynq-7000 FPGA family integrates a feature-rich dual-core ARM-based processing system and 28 nm Xilinx programmable logic in a single device, the realization of high-performance TDCs on it will make the platform more widely used in time-measurement-related applications.
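
    The reported channel-count versus precision trade-off can be sanity-checked with a simple averaging model in which merging k base channels improves RMS precision roughly as 1/sqrt(k); this back-of-the-envelope sketch is not the paper's analysis, only a plausibility check against the quoted numbers.

```python
import math

# Reported configurations: (channels, RMS precision in ps) from the abstract.
reported = {32: 9.9, 16: 6.9, 8: 5.8}

# Simplified model: merging k base channels into one averages k independent
# measurements, so precision should scale roughly as 1/sqrt(k).
base_channels, base_rms = 32, reported[32]
for channels, rms in reported.items():
    k = base_channels // channels
    predicted = base_rms / math.sqrt(k)
    print(f"{channels:2d} ch: reported {rms} ps, 1/sqrt(k) model {predicted:.1f} ps")
```

    The model tracks the 16-channel figure closely and overestimates the improvement for 8 channels, which is consistent with other error sources eventually dominating.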

  14. Studies in Mindfulness: Widening the Field for All Involved in Pastoral Care

    ERIC Educational Resources Information Center

    Nixon, Graeme; McMurtry, David; Craig, Linda; Nevejan, Annick; Regan-Addis, Heather

    2016-01-01

    Since 2010, the University of Aberdeen, Scotland, UK, has offered an MSc in studies in mindfulness degree programme within its School of Education. The programme has attracted over 200 students from multiple professional contexts, providing the authors with the opportunity to gather and analyse demographic data, as well as data regarding student…

  15. Lichen elements as pollution indicators: evaluation of methods for large monitoring programmes

    Treesearch

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Lichen element content is a reliable indicator for relative air pollution load in research and monitoring programmes requiring both efficiency and representation of many sites. We tested the value of costly rigorous field and handling protocols for sample element analysis using five lichen species. No relaxation of rigour was supported; four relaxed protocols generated...

  16. The International School Effectiveness Research Programme ISERP. First Results of the Quantitative Study.

    ERIC Educational Resources Information Center

    Creemers, Bert P. M.; And Others

    The International School Effectiveness Research Programme (ISERP) is an example of the exchange of research and research results in the field of educational effectiveness. It aims to build on existing models of good practice and to avoid the variations in approach that limit the transferability of data within and between countries. A number of…

  17. Changes and Challenges in Music Education: Reflections on a Norwegian Arts-in-Education Programme

    ERIC Educational Resources Information Center

    Christophersen, Catharina

    2015-01-01

    With a recent research study on a Norwegian arts-in-education programme "The Cultural Rucksack" as its starting point, this article addresses policy changes in the fields of culture and education and possible implications these could have on music education in schools. Familiar debates on the quality of education and the political…

  18. Multiplying a Force for Good? the Impact of Security Sector Management Postgraduate Education in Ethiopia

    ERIC Educational Resources Information Center

    Macphee, Paula-Louise; Fitz-Gerald, Ann

    2014-01-01

    This paper argues for the importance, benefits and wider impact of a donor-funded, locally supported postgraduate programme in security sector management (SSM) for government officials in Ethiopia. With the exception of specialised education and training programmes within the field of peace and conflict studies, the role of education in…

  19. Can an International Field Experience Assist Health and Physical Education Pre-Service Teachers to Develop Cultural Competency?

    ERIC Educational Resources Information Center

    Winslade, Matthew

    2016-01-01

    An emerging focus of teacher education courses within countries such as Australia centres on the development of cultural competency. An international practicum experience or student mobility programme embedded within pre-service teacher education programmes is one way to provide such an opportunity. In subject areas such as Health and Physical…

  20. The Cognitive, Social and Emotional Processes of Teacher Identity Construction in a Pre-Service Teacher Education Programme

    ERIC Educational Resources Information Center

    Yuan, Rui; Lee, Icy

    2015-01-01

    This research investigates how three Government-funded Normal Students constructed and reconstructed their identities in a pre-service teacher education programme in China. Drawing upon data from interviews, field observation and the pre-service teachers' written reflections, the study explores the cognitive, social and emotional processes of…

  1. Improving Practices in Early Childhood Classrooms in Pakistan: Issues and Challenges from the Field

    ERIC Educational Resources Information Center

    Juma, Audrey

    2004-01-01

    This article focuses on an early childhood programme that has been initiated by the Institute for Educational Development at the Aga Khan University in Karachi, Pakistan. The programme is a Certificate in Education and involves training teachers so as to enable them to understand early childhood education and development, and to become effective…

  2. JTAG-based remote configuration of FPGAs over optical fibers

    DOE PAGES

    Deng, B.; Xu, H.; Liu, C.; ...

    2015-01-28

    In this study, a remote FPGA-configuration method based on JTAG extension over optical fibers is presented. The method takes advantage of commercial components and ready-to-use software such as iMPACT and does not require any hardware or software development. The method combines the advantages of the slow remote JTAG configuration and the fast local flash memory configuration. The method has been verified successfully and used in the Demonstrator of Liquid-Argon Trigger Digitization Board (LTDB) for the ATLAS liquid argon calorimeter Phase-I trigger upgrade. All components on the FPGA side are verified to meet the radiation tolerance requirements.

  3. The European space exploration programme: current status of ESA's plans for Moon and Mars exploration.

    PubMed

    Messina, Piero; Vennemann, Dietrich

    2005-01-01

    After a large consultation with the scientific and industrial communities in Europe, the Aurora Space Exploration Programme was unanimously approved at the European Space Agency (ESA) Council at ministerial level in Edinburgh in 2001. This marked the start of the programme's preparation phase, which was due to finish by the end of 2004. Aurora features technology development as well as robotic and crewed rehearsal missions aimed at preparing a human mission to Mars by 2033. Due to the evolving context, both international and European, ESA has undertaken a review of the goals and approach of its exploration programme. While maintaining the main robotic missions that had been conceived during Aurora, the European Space Exploration Programme currently being proposed to the Aurora participating states and other ESA Member States has a revised approach and will feature greater synergy with other ESA programmes. The paper will present the process that led to the revision of ESA's plans in the field of exploration and will give the current status of the programme.

  4. Programmable spectrometer using MOEMs devices for space applications

    NASA Astrophysics Data System (ADS)

    Viard, Thierry; Buisset, Christophe; Rejeaunier, Xavier; Zamkotsian, Frédéric; Venancio, Luis M.

    2017-11-01

    A new class of spectrometer can be designed using programmable components such as MOEMS, which make it possible to tune the beam in spectral width and central wavelength. It becomes possible to propose, for space applications, a spectrometer with programmable resolution and adjustable spectral bandwidth. The proposed way to tune the output beam is to use the diffraction effect with the so-called PMDG (Programmable Micro Diffraction Gratings) diffractive MEMS. In that case, small moving structures can form programmable gratings, diffracting or not diffracting the incoming light. In the proposed concept, the MOEMS is placed in the focal plane of a first diffracting stage (using a grating, for instance). With such an implementation, the MOEMS component can be used to select some wavelengths (for instance by reflecting them) and to switch off the others (for instance by diffracting them). A second diffracting stage is used to recombine the beam composed of all the selected wavelengths. It then becomes possible to change and adjust the filter in λ and Δλ. This type of implementation is very interesting for space applications (astronomy, Earth observation, planetary observation): firstly because the filtering function can be tuned quasi-instantaneously, and secondly because the focal plane can be reduced to a single detector (for applications without a field of view) or to a linear detector instead of a 2D matrix detector (for applications with a field of view), thanks to a sequential acquisition of the signal.
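
    To make the selection principle concrete, the following is a minimal software sketch of how a binary PMDG pattern could select a band from a dispersed spectrum before recombination onto a single detector. The wavelength range, mask function and numbers are illustrative assumptions, not parameters of the instrument described above.

        # Hypothetical sketch of the PMDG wavelength-selection principle:
        # a dispersed input spectrum falls across programmable micro-gratings;
        # elements set to "reflect" pass their wavelength to the recombining
        # stage, elements set to "diffract" reject it. Names and numbers here
        # are illustrative, not taken from the instrument design.
        import numpy as np

        wavelengths = np.linspace(400e-9, 800e-9, 1024)          # dispersed axis (m)
        spectrum = np.exp(-((wavelengths - 600e-9) / 50e-9)**2)  # toy input spectrum

        def make_mask(center, width):
            """Binary PMDG pattern: 1 = reflect (select), 0 = diffract (reject)."""
            return ((wavelengths > center - width / 2) &
                    (wavelengths < center + width / 2)).astype(float)

        # Programmable filter: choose central wavelength and bandwidth on the fly.
        mask = make_mask(center=600e-9, width=20e-9)
        detector_signal = np.sum(spectrum * mask)                 # single-detector readout
        print(f"Integrated signal in the selected band: {detector_signal:.3f}")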

  5. [The necessity and possibility of developing skills in daily living activities in children attending a special kindergarten for the physically handicapped--demonstrated by means of a five-year-old boy suffering from spastic hemiparesis (author's transl)].

    PubMed

    Burgheim-Raguss, B

    1980-02-01

    Within the framework of an empirical study carried out in a special kindergarten, it was attempted to answer the question whether it is necessary and possible in such an institution to develop the children's skills in daily living activities. A six-month systematic programme was set up for a five-year-old boy suffering from spastic hemiparesis, designed to develop his skills in personal hygiene and general behaviour in the kitchen area. In preparing the programme, each of the two fields was first treated separately in detail, then the common factors were taken into account. The programme's subdivision into an ultimate goal and two partial goals assisted the implementation of the eighteen training steps. A comparison of the knowledge of, and skills in, the two fields before and after the training showed that they had increased both in quantity and quality. As the boy still showed headway over his peers (comparable in their disabilities) three years after completion of the programme as far as independence was concerned, it can be said that special training in daily living activities can and must be carried out in a special kindergarten for physically handicapped children, provided the training is based on a specialized and fully structured programme.

  6. Effects of high-intensity power-frequency electric fields on implanted modern multiprogrammable cardiac pacemakers.

    PubMed

    Butrous, G S; Meldrum, S J; Barton, D G; Male, J C; Bonnell, J A; Camm, A J

    1982-05-01

    The effect on an implanted, multiprogrammable pacemaker of power-frequency (50 Hz) electric fields up to an intensity (unperturbed value measured at 1.7 m) of 20 kV/m was assessed in ten paced patients. Radiotelemetric monitoring allowed supervision of the electrocardiogram throughout exposure to the alternating electric field. Displacement body currents of up to 300 μA were achieved, depending on the position and height of the patient. None of the pacemakers was inhibited, triggered or reverted to fixed-rate operation during the exposure. The programmable functions, programmability and output characteristics were not affected. Small changes in cardiac rate and rhythm elicited the correct pacemaker responses. Unlike earlier models of pacemaker, this modern implanted pacemaker, which represents 'the state of the art', is not affected by 50 Hz electric fields likely to be encountered when standing underneath power lines.

  7. Veggie Rx: an outcome evaluation of a healthy food incentive programme.

    PubMed

    Cavanagh, Michelle; Jurkowski, Janine; Bozlak, Christine; Hastings, Julia; Klein, Amy

    2017-10-01

    One challenge to healthy nutrition, especially among low-income individuals, is access to and consumption of fresh fruits and vegetables. To address this problem, Veggie Rx, a healthy food incentive programme, was established within a community clinic to increase access to fresh produce for low-income patients diagnosed with obesity, hypertension and/or type 2 diabetes. The current research aimed to evaluate Veggie Rx programme effectiveness. A retrospective pre/post design using medical records and programme data was used to evaluate the programme. The study was approved by the University of Albany Institutional Review Board and the Patient Interest Committee of a community clinic. The study was conducted in a low-income, urban neighbourhood in upstate New York. Medical record data and Veggie Rx programme data were analysed for fifty-four eligible participants. An equal-sized control group of patients who were not programme participants were matched on age, ethnicity and co-morbidity status. A statistically significant difference in mean BMI change (P=0·02) between the intervention and the control group was calculated. The intervention group had a mean decrease in BMI of 0·74 kg/m2. Greater improvement in BMI was found among Veggie Rx programme participants. This information will guide programme changes and inform the field on the effectiveness of healthy food incentive programmes for improving health outcomes for low-income populations.

  8. Veggie Rx: an outcome evaluation of a healthy food incentive programme

    PubMed Central

    Cavanagh, Michelle; Jurkowski, Janine; Bozlak, Christine; Hastings, Julia; Klein, Amy

    2017-01-01

    Objective: One challenge to healthy nutrition, especially among low-income individuals, is access to and consumption of fresh fruits and vegetables. To address this problem, Veggie Rx, a healthy food incentive programme, was established within a community clinic to increase access to fresh produce for low-income patients diagnosed with obesity, hypertension and/or type 2 diabetes. The current research aimed to evaluate Veggie Rx programme effectiveness. Design: A retrospective pre/post design using medical records and programme data was used to evaluate the programme. The study was approved by the University of Albany Institutional Review Board and the Patient Interest Committee of a community clinic. Setting: The study was conducted in a low-income, urban neighbourhood in upstate New York. Subjects: Medical record data and Veggie Rx programme data were analysed for fifty-four eligible participants. An equal-sized control group of patients who were not programme participants were matched on age, ethnicity and co-morbidity status. Results: A statistically significant difference in mean BMI change (P = 0.02) between the intervention and the control group was calculated. The intervention group had a mean decrease in BMI of 0.74 kg/m2. Conclusions: Greater improvement in BMI was found among Veggie Rx programme participants. This information will guide programme changes and inform the field on the effectiveness of healthy food incentive programmes for improving health outcomes for low-income populations. PMID:27539192

  9. Strategies used to guide the design and implementation of a national river monitoring programme in South Africa.

    PubMed

    Roux, D J

    2001-06-01

    This article explores the strategies that were, and are being, used to facilitate the transition from scientific development to operational application of the South African River Health Programme (RHP). Theoretical models from the field of the management of technology are used to provide insight into the dynamics that influence the relationship between the creation and application of environmental programmes, and the RHP in particular. Four key components of the RHP design are analysed, namely the (a) guiding team, (b) concepts, tools and methods, (c) infrastructural innovations and (d) communication. These key components evolved over three broad life stages of the programme, which are called the design, growth and anchoring stages.

  10. The Field of Knowledge and the Policy Field in Education: PISA and the Production of Knowledge for Policy

    ERIC Educational Resources Information Center

    Mangez, Eric; Hilgers, Mathieu

    2012-01-01

    This article is about the Programme for International Student Assessment (PISA) and its actors. It analyses the development and role of PISA as a "cultural product" from the perspective of Bourdieu's field theory. The authors attempt to answer the following questions: Of which field is PISA the product? In which field and by whom is PISA…

  11. Reclaiming the Disengaged? A Bourdieuian Analysis of Work-Based Learning for Young People in England

    ERIC Educational Resources Information Center

    Thompson, Ron

    2011-01-01

    This paper uses Bourdieu's concept of field to analyse findings from an ethnographic study of Entry to Employment (E2E) programmes in England. Entry to Employment is a work-based learning programme which aims to re-engage young people with "barriers to learning" inhibiting access to further education, training or employment. The paper…

  12. Evaluation of an Innovative Programme for Training Teachers of Children with Learning and Behavioural Difficulties in New Zealand

    ERIC Educational Resources Information Center

    Pilgrim, Marcia; Hornby, Garry; Everatt, John; Macfarlane, Angus

    2017-01-01

    This article reports the views of recent graduates of a competency based, blended learning teacher education programme for specialist resource teachers of children with learning and behaviour difficulties in New Zealand. Identifying and developing the competencies needed by teachers in the field of special needs education is important in ensuring…

  13. Education in the New Era: The Dissemination of Education for Sustainable Development in the Political Science Programmes at Notre Dame University--Louaize

    ERIC Educational Resources Information Center

    Labaki, Georges

    2012-01-01

    Sustainable development is continuous process of change requiring painful choices resting on political will. This paper examines the developments needed to engage with sustainable development in the field of political science through the following: the reform in political science programmes to cope with the need for sustainable development in…

  14. "Discover, Understand, Implement, and Transfer": Effectiveness of an Intervention Programme to Motivate Students for Science

    ERIC Educational Resources Information Center

    Schütte, Kerstin; Köller, Olaf

    2015-01-01

    Considerable research has focused on how best to satisfy modern societies' needs for skilled labour in the field of science. The present study evaluated an intervention programme designed to increase secondary school students' motivation to pursue a science career. Students from 3 schools of the highest educational track participated for up to 2…

  15. The Development of Innovative Online Problem-Based Learning: A Leadership Course for Leaders in European Public Health

    ERIC Educational Resources Information Center

    de Jong, Nynke; Könings, Karen D.; Czabanowska, Katarzyna

    2014-01-01

    The shift to a knowledge information society has given rise to a need for lifelong learning programmes. Such programmes are especially relevant for public health professionals, whose dynamic field of practice is subject to changes due to rapidly developing technologies, evolving expectations of the labour market and new health threats. Lifelong…

  16. Overview of the Higher Education Systems in the Tempus Partner Countries: Central Asia. A Tempus Study. Issue 05

    ERIC Educational Resources Information Center

    Ruffio, Philippe; Heinamaki, Piia; Tchoukaline, Claire Chastang; Manthey, Anja; Reichboth, Veronika

    2011-01-01

    The main aim of the Tempus programme is to support the modernisation of higher education in Partner Countries outside the European Union. The targeted regions include Eastern Europe, Central Asia, Western Balkans and the Southern Mediterranean, with a total of 29 Partner Countries participating in the programme. In the field of cooperation in…

  17. The Creation of Multimedia Resources to Support the Gaelic Athletic Association (GAA) Coach Education Programme (CEP)

    ERIC Educational Resources Information Center

    Crotty, Yvonne; D'Arcy, Jimmy; Sweeney, David

    2016-01-01

    The Gaelic Athletic Association (GAA) is an Irish amateur sporting and cultural organisation. It represents in excess of 20,000 teams nationwide and is committed to supporting the development of players and coaches through its Coach Education Programme (CEP). A strategic goal of the CEP is to supplement the traditional field based coach education…

  18. The first Spanish space programme 1968 1974

    NASA Astrophysics Data System (ADS)

    Dorado, José M.

    2007-06-01

    This paper presents the situation of the Spanish aeronautical industry in the early 1960s, the problems suffered during the first ESRO years, the situation in 1975 as a result of the first National Space Programme (1968-1974) and the specific developments carried out within that programme: the first Spanish satellite, successfully launched in 1974 (INTASAT), and the first INTA sounding rockets, launched from its own Arenosillo range. This justifies the importance of that programme for the Spanish aeronautical industry, a programme that permitted its transition to the aerospace field. In parallel, agreements with NASA led to the installation of large space ground stations in Spain operated by INTA personnel, to support major NASA space missions, and to the operation of a very active rocket range. These actions allowed Spain to have one of the largest space sectors in Europe in those years. This paper's purpose is to find out the main reasons behind this effort.

  19. Stated Uptake of Physical Activity Rewards Programmes Among Active and Insufficiently Active Full-Time Employees.

    PubMed

    Ozdemir, Semra; Bilger, Marcel; Finkelstein, Eric A

    2017-10-01

    Employers are increasingly relying on rewards programmes in an effort to promote greater levels of activity among employees; however, if enrolment in these programmes is dominated by active employees, then they are unlikely to be a good use of resources. This study uses a stated-preference survey to better understand who participates in rewards-based physical activity programmes, and to quantify stated uptake by active and insufficiently active employees. The survey was fielded to a national sample of 950 full-time employees in Singapore between 2012 and 2013. Participants were asked to choose between hypothetical rewards programmes that varied along key dimensions and whether or not they would join their preferred programme if given the opportunity. A mixed logit model was used to analyse the data and estimate predicted uptake for specific programmes. We then simulated employer payments based on predictions for the percentage of each type of employee likely to meet the activity goal. Stated uptake ranged from 31 to 67% of employees, depending on programme features. For each programme, approximately two-thirds of those likely to enrol were insufficiently active. Results showed that insufficiently active employees, who represent the majority, are attracted to rewards-based physical activity programmes, and at approximately the same rate as active employees, even when enrolment fees are required. This suggests that a programme with generous rewards and a modest enrolment fee may have strong employee support and be within the range of what employers may be willing to spend.

  20. Using partial reconfiguration for SoC design and implementation

    NASA Astrophysics Data System (ADS)

    Krasteva, Yana E.; Portilla, Jorge; Tobajas Guerrero, Félix; de la Torre, Eduardo

    2009-05-01

    Most reconfigurable systems rely on FPGA technology. Among these, those which permit dynamic and partial reconfiguration offer added benefits in flexibility, in-field device upgrade, improved design and manufacturing time and, in some cases, power consumption reductions. However, dynamic reconfiguration is a complex task, and the real benefits of its use in real applications have often been questioned. This paper presents an overview of the partial reconfiguration technique, along with four original applications. The main goal of these applications is to test several architectures with different flexibility and to search for the partial reconfiguration "killer application", that is, the application that best demonstrates the benefits of today's reconfigurable systems based on commercial FPGAs. The presented applications are therefore a proof of concept rather than fully operative and closed systems. First, a brief introduction to the application of partially reconfigurable systems is included. After that, the created reconfigurable systems are described: first, an on-chip communications emulation framework; second, an on-chip debugging system; third, a wireless sensor network reconfigurable node; and finally, a remote reconfigurable client-server device. Each application is described in a separate section of the paper along with some tests and results. General conclusions are included at the end of the paper.

  1. PixonVision real-time Deblurring Anisoplanaticism Corrector (DAC)

    NASA Astrophysics Data System (ADS)

    Hier, R. G.; Puetter, R. C.

    2007-09-01

    DigiVision, Inc. and PixonImaging LLC have teamed to develop a real-time Deblurring Anisoplanaticism Corrector (DAC) for the Army. The DAC measures the geometric image warp caused by anisoplanaticism and removes it to rectify and stabilize (dejitter) the incoming image. Each new geometrically corrected image field is combined into a running-average reference image. The image averager employs a higher-order filter that uses temporal bandpass information to help identify true motion of objects and thereby adaptively moderate the contribution of each new pixel to the reference image. This result is then passed to a real-time PixonVision video processor (see paper 6696-04; note that the DAC also first dehazes the incoming video), where additional blur from high-order seeing effects is removed, the image is spatially denoised, and contrast is adjusted in a spatially adaptive manner. We plan to implement the entire algorithm within a few large modern FPGAs on a circuit board for video use. Obvious applications are within the DOD, surveillance and intelligence, security, and law enforcement communities. Prototype hardware is scheduled to be available in late 2008. To demonstrate the capabilities of the DAC, we present a software simulation of the algorithm applied to real atmosphere-corrupted video data collected by Sandia Labs.
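
    As an illustration of the running-average idea described above, the following is a minimal sketch of a motion-adaptive reference-image update, in which the contribution of each new pixel is reduced where the temporal difference suggests true object motion. The blending rule and constants are assumptions for illustration, not the proprietary DAC filter.

        # Illustrative sketch of a motion-adaptive running-average reference image.
        # The weight given to each new (already warp-corrected) pixel is reduced
        # where the temporal difference is large (likely true object motion), so
        # moving objects do not smear into the reference. This mimics the idea
        # described above, not the actual DAC filter.
        import numpy as np

        def update_reference(reference, new_frame, base_alpha=0.1, motion_scale=0.05):
            diff = np.abs(new_frame - reference)
            # Per-pixel blending weight: smaller where the frame differs strongly.
            alpha = base_alpha / (1.0 + diff / motion_scale)
            return (1.0 - alpha) * reference + alpha * new_frame

        # Usage with synthetic frames:
        rng = np.random.default_rng(0)
        reference = rng.random((480, 640))
        for _ in range(10):
            frame = reference + 0.01 * rng.standard_normal((480, 640))
            reference = update_reference(reference, frame)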

  2. Configurable test bed design for nanosats to qualify commercial and customized integrated circuits

    NASA Astrophysics Data System (ADS)

    Guareschi, W.; Azambuja, J.; Kastensmidt, F.; Reis, R.; Durao, O.; Schuch, N.; Dessbesel, G.

    The use of small satellites has increased substantially in recent years due to the reduced cost of their development and launch, as well as to the flexibility offered by commercial components. The test bed is a platform that allows components to be evaluated and tested in space. It is a flexible platform, which can be adjusted to a wide range of components and interfaces. This work proposes the design and implementation of a test bed suitable for the test and evaluation of commercial circuits used in nanosatellites. The development of such a platform allows developers to reduce the effort of integrating components and therefore speeds up the overall system development time. The proposed test bed is a configurable platform implemented using a Field Programmable Gate Array (FPGA) that controls the communication protocols and connections to the devices under test. The flash-based ProASIC3E FPGA from Microsemi is used as the control system. This adaptive system enables the control of new payloads and soft cores for test and validation in space, and integration can be easily performed through configuration parameters. The design is modular. Each component connected to the test bed can have a specific interface programmed using a hardware description language (HDL). The data of each component are stored in embedded memories, and each component has its own memory space. The size of the allocated memory can also be configured, the data transfer priority can be set, and packaging can be added to the logic when needed. Communication with peripheral devices and with the Onboard Computer (OBC) is done through pre-implemented protocols such as I2C (Inter-Integrated Circuit), SPI (Serial Peripheral Interface) and external memory control. Initial on-site tests demonstrated the control system's functionality. The commercial ProASIC3E FPGA family is not space-flight qualified, but tests under Total Ionizing Dose (TID) have shown its robustness up to 25 krad(Si). When considering protons and heavy ions, flash-based FPGAs provide immunity to configuration loss and low susceptibility to bit-flips in flash memory. In this first version of the test bed, two components are connected to the controller FPGA: a commercial magnetometer and a hardened test chip. The embedded FPGA implements a Single Event Effects (SEE) hardened microprocessor and a few other soft cores to be used in space. This test bed will be used in the NanoSatC-BR1, the first Brazilian Cubesat, scheduled to be launched in mid-2013.
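
    The modular, per-payload configuration described above could be captured in a simple descriptor table from which the control logic is parameterised; the sketch below illustrates the idea. The field names, protocols and sizes are hypothetical and are not taken from the NanoSatC-BR1 design.

        # Illustrative configuration model for the modular test-bed idea: each
        # device under test is described by its bus protocol, allocated memory
        # space and read-out priority. Field names and values are assumptions
        # for this sketch only.
        from dataclasses import dataclass

        @dataclass
        class PayloadInterface:
            name: str
            protocol: str          # e.g. "I2C", "SPI", or "external memory"
            memory_bytes: int      # embedded-memory space reserved for this payload
            priority: int          # data-transfer priority (lower = served first)
            packetized: bool       # whether packaging logic is added for this payload

        test_bed = [
            PayloadInterface("magnetometer", "I2C", memory_bytes=4096, priority=1, packetized=True),
            PayloadInterface("hardened_test_chip", "SPI", memory_bytes=65536, priority=0, packetized=False),
        ]

        total_memory = sum(p.memory_bytes for p in test_bed)
        print(f"Total embedded memory reserved: {total_memory} bytes")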

  3. Rapid Corner Detection Using FPGAs

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin C.; Metz, Brandon C.

    2010-01-01

    In order to perform precision landings for space missions, a control system must be accurate to within ten meters. Feature detection applied to images taken during descent, correlated against the provided base image, is computationally expensive: it requires tens of seconds of processing time for just one image, while the goal is to process multiple images per second. To solve this problem, this algorithm takes that processing load from the central processing unit (CPU) and gives it to a reconfigurable field programmable gate array (FPGA), which is able to compute data in parallel at very high clock speeds. The workload of the processor then becomes simpler: read an image from a camera, transfer it into the FPGA, and read the results back from the FPGA. The Harris Corner Detector uses the determinant and trace to find a corner score, with each step of the computation occurring on independent clock cycles. Essentially, the image is converted into x and y derivative maps. Once three lines of pixel information have been queued up, valid pixel derivatives are clocked into the product and averaging phase of the pipeline. Each x and y derivative is squared, the product of the Ix and Iy derivatives is formed, and each value is stored in a W x N size buffer, where W represents the size of the integration window and N is the width of the image. In this particular case, a window size of 5 was chosen, and the image is 640 x 480. Over the W x N window, a Gaussian weighting is applied (to bring out the stronger corners), and then each value in the entire window is summed and stored. The required components of the equation are then in place, and it is just a matter of taking the determinant and trace. It should be noted that the trace is weighted by a constant k, a value that is found empirically to be within 0.04 to 0.15 (and in this implementation is 0.05). The constant k determines the number of corners available to be compared against a threshold sigma to mark a valid corner. After a fixed delay from when the first pixel is clocked in (to fill the pipeline), a score is produced on each successive clock. This score corresponds to an (x, y) location within the image. If the score is higher than the predetermined threshold sigma, then a flag is set high and the location is recorded.
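
    A software model of the same corner-score computation can help clarify the pipeline stages named above (derivative maps, windowed products with Gaussian weighting, then determinant minus k times squared trace). The sketch below follows the 5-pixel window and k = 0.05 quoted in the text; the Gaussian width, threshold handling and surrounding code structure are assumptions.

        # Software sketch of the Harris corner score described above, using the
        # same quantities the FPGA pipeline computes: image derivatives, their
        # products averaged over a Gaussian-weighted window, then det - k * trace^2.
        import numpy as np

        def harris_score(image, k=0.05, win=5, sigma=1.0):
            iy, ix = np.gradient(image.astype(float))           # x and y derivative maps
            ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

            # Gaussian weights over the integration window (emphasises strong corners).
            ax = np.arange(win) - win // 2
            g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
            g /= g.sum()

            def window_sum(a):
                out = np.zeros_like(a)
                h, w = a.shape
                r = win // 2
                for y in range(r, h - r):
                    for x in range(r, w - r):
                        out[y, x] = np.sum(g * a[y - r:y + r + 1, x - r:x + r + 1])
                return out

            sxx, syy, sxy = window_sum(ixx), window_sum(iyy), window_sum(ixy)
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            return det - k * trace * trace                       # corner score per pixel

        # Pixels whose score exceeds a chosen threshold are flagged as corners.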

  4. Monitoring Digital Closed-Loop Feedback Systems

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2011-01-01

    A technique of monitoring digital closed-loop feedback systems has been conceived. The basic idea is to obtain information on the performance of closed-loop feedback circuits in such systems to aid in determining the functionality and integrity of the circuits and their performance margins. The need for this technique arises as follows: Some modern digital systems include feedback circuits that enable other circuits to perform with precision and are tolerant of changes in environment and the device's parameters. For example, in a precision timing circuit, it is desirable to make the circuit insensitive to variability resulting from the manufacture of circuit components and to the effects of temperature, voltage, radiation, and aging. However, such a design can also mask the indications of damaged and/or deteriorating components. The present technique incorporates test circuitry and associated engineering-telemetry circuitry into an embedded system to monitor the closed-loop feedback circuits, using spare gates that are often available in field programmable gate arrays (FPGAs). This technique enables a test engineer to determine the amount of performance margin in the system, detect out-of-family circuit performance, and determine one or more trends in the performance of the system. In one system to which the technique has been applied, an ultra-stable oscillator is used as a reference for internal adjustment of 12 time-to-digital converters (TDCs). The feedback circuit produces a pulse-width-modulated signal that is fed as a control input into an amplifier, which controls the circuit's operating voltage. If the circuit's gates are determined to be operating too slowly or too rapidly when their timing is compared with that of the reference signal, then the pulse width increases or decreases, respectively, thereby commanding the amplifier to increase or reduce its output level and "adjust" the speed of the circuits. The nominal frequency of the TDCs' pulse-width-modulated outputs is approximately 40 kHz. In this system, the technique is implemented by means of a monitoring circuit that includes a 20-MHz sampling circuit and a 24-bit accumulator with a gate time of 10 ms. The monitoring circuit measures the duty cycle of each of the 12 TDCs at a repetition rate of 28 Hz. The accumulator content is reset to all zeroes at the beginning of each measurement period and is then incremented or decremented based on the state of the pulse-width-modulated signal. Positive or negative values in the accumulator correspond to duty cycles greater or less than 50 percent, respectively.
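
    A minimal behavioural model of the duty-cycle monitor is sketched below: the accumulator is incremented or decremented on each 20 MHz sample of the roughly 40 kHz pulse-width-modulated signal over a 10 ms gate, so the final count is proportional to the deviation of the duty cycle from 50 percent. The rates and gate time follow the text; the waveform generation itself is synthetic.

        # Behavioural model of the duty-cycle monitor: a 20 MHz sampler drives a
        # signed accumulator over a 10 ms gate, +1 when the ~40 kHz PWM signal is
        # high and -1 when it is low.
        import numpy as np

        F_SAMPLE = 20e6          # sampling rate, Hz
        F_PWM = 40e3             # nominal PWM frequency, Hz
        GATE_TIME = 10e-3        # accumulator gate time, s

        def measure_duty(duty_cycle):
            n_samples = int(F_SAMPLE * GATE_TIME)
            t = np.arange(n_samples) / F_SAMPLE
            phase = (t * F_PWM) % 1.0
            pwm = phase < duty_cycle                  # synthetic PWM waveform
            # Accumulator: +1 when the PWM signal is high, -1 when it is low.
            return int(np.sum(np.where(pwm, 1, -1)))

        # Positive counts mean a duty cycle above 50 %, negative below.
        print(measure_duty(0.55))   # about +20000 for a 55 % duty cycle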

  5. Evolution of the Space Shuttle Primary Avionics Software and Avionics for Shuttle Derived Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.

    2011-01-01

    As a result of recommendations from the Augustine Panel, the direction for Human Space Flight has been altered from the original plan, referred to as Constellation. NASA's Human Exploration Framework Team (HEFT) proposes the use of a Shuttle Derived Heavy Lift Launch Vehicle (SDLV) and an Orion-derived spacecraft (salvaged from Constellation) to support a new flexible direction for space exploration. The SDLV must be developed within an environment of a constrained budget and a preferred fast development schedule. Thus, it has been proposed to utilize existing assets from the Shuttle Program to speed development at a lower cost. These existing assets should include not only structures such as external tanks or solid rockets, but also the flight software, which has traditionally been a "long pole" in new development efforts. The avionics and software for the Space Shuttle were primarily developed in the 1970s and considered state of the art for that time. One may argue that the existing avionics and flight software are too outdated to support the new SDLV effort, but this is a fallacy if they can be evolved over time into a "modern avionics" platform. The technology may be outdated, but the avionics concepts and flight software algorithms are not. The reuse of existing avionics and software also allows for the reuse of development, verification, and operations facilities. The keyword is "evolve", in that these assets can support the fast development of such a vehicle but can then be gradually evolved over time towards more modern platforms as budget and schedule permit. The "gold" of the flight software is the "control loop" algorithms of the vehicle: the Guidance, Navigation, and Control (GNC) software algorithms. This software is typically the most expensive to develop, test, and verify. Thus, the approach is to preserve the GNC flight software while first evolving the supporting software (such as Command and Data Handling, Caution and Warning, Telemetry, etc.). This can be accomplished by gradually removing the "support software" from the legacy flight software, leaving only the GNC algorithms. The "support software" could be re-developed for modern platforms, while leaving the GNC algorithms to execute on technology compatible with the legacy system. It is also possible to package the GNC algorithms into an emulated version of the original computer (via Field Programmable Gate Arrays, or FPGAs), thus becoming a "GNC on a Chip" solution that could live forever, embedded in modern avionics platforms.

  6. Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination

    PubMed Central

    2012-01-01

    Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources; substantial hardware resources are therefore needed for hardware implementations of PCA. The General Hebbian algorithm (GHA) has been proposed for calculating PCs of neuronal spikes in our previous work, which eliminates the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. Such a large memory consumes substantial hardware resources and contributes significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction in hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resource usage and power consumption in hardware implementations, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The spike-sorting performance and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memory achieved by the streaming computation. Results and discussion: Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% of logic resources, 95.1% of power consumption and 86.8% of computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved when compared with the direct implementation of GHA. Conclusion: The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
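
    For readers unfamiliar with GHA, the following is a minimal sketch of the underlying update rule (Sanger's rule), which learns principal components one spike at a time without forming a covariance matrix or performing an eigendecomposition. It shows only the core GHA step; the stream-based eigenfilter's memory-free training scheme is not reproduced here, and the spike data are synthetic stand-ins.

        # Sketch of the General Hebbian Algorithm (Sanger's rule): principal
        # components are learned incrementally, spike by spike, with no
        # covariance matrix or eigendecomposition.
        import numpy as np

        def gha_update(W, x, lr=1e-3):
            """One GHA step. W: (n_components, n_samples), x: one aligned spike."""
            y = W @ x                                    # project spike onto current PCs
            # Sanger's rule: Hebbian term minus lower-triangular decorrelation term.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        rng = np.random.default_rng(1)
        W = rng.standard_normal((3, 32)) * 0.01          # 3 PCs of 32-sample spikes
        for _ in range(10000):
            spike = rng.standard_normal(32)              # stand-in for an aligned spike
            W = gha_update(W, spike)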

  7. Compact field programmable gate array-based pulse-sequencer and radio-frequency generator for experiments with trapped atoms.

    PubMed

    Pruttivarasin, Thaned; Katori, Hidetoshi

    2015-11-01

    We present a compact field-programmable gate array (FPGA) based pulse sequencer and radio-frequency (RF) generator suitable for experiments with cold trapped ions and atoms. The unit is capable of outputting a pulse sequence with at least 32 transistor-transistor logic (TTL) channels with a timing resolution of 40 ns and contains a built-in 100 MHz frequency counter for counting electrical pulses from a photo-multiplier tube. There are 16 independent direct-digital-synthesizer (DDS) RF sources with fast (rise-time of ∼60 ns) amplitude switching and sub-mHz frequency tuning from 0 to 800 MHz.
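
    The sub-mHz tuning quoted above follows from standard direct-digital-synthesis arithmetic, sketched below. The 48-bit accumulator width and 2 GHz reference clock are assumptions chosen only to illustrate the relation between tuning word, output frequency and resolution; the abstract does not state the actual DDS parameters.

        # Illustrative DDS arithmetic only: the standard tuning-word relation
        # FTW = round(f_out / f_clk * 2**N) shows why sub-mHz frequency steps are
        # achievable. Accumulator width and clock are assumed values.
        ACC_BITS = 48
        F_CLK = 2.0e9                                    # assumed DDS reference clock, Hz

        def tuning_word(f_out):
            return round(f_out / F_CLK * 2**ACC_BITS)

        def actual_frequency(ftw):
            return ftw * F_CLK / 2**ACC_BITS

        ftw = tuning_word(80e6)                          # request an 80 MHz RF tone
        step = F_CLK / 2**ACC_BITS                       # frequency resolution, Hz
        print(f"FTW = {ftw}, resolution = {step * 1e6:.2f} uHz")   # a few uHz, i.e. sub-mHz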

  8. Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello

    2004-01-01

    This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, the design problem is subject to several goals or operating constraints; thus, designing a suitable fitness function that captures all requirements becomes an issue. Such a problem is amenable to multi-objective optimization; however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint-handling technique. Several design problems targeting an FPTA show the potential of our approach.
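
    The multi-objective comparison that such algorithms rely on can be illustrated with a small sketch: a candidate circuit is preferred if it is feasible and Pareto-dominates another on the design objectives. This is a generic illustration of Pareto dominance with a simple feasibility rule, not the specific ISPAES constraint-handling or adaptive-grid mechanism.

        # Generic Pareto-dominance check with a simple feasibility-first rule,
        # as used conceptually in constrained multi-objective circuit design.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Candidate:
            objectives: List[float]   # e.g. [gain error, power, area], all minimised
            violation: float          # total constraint violation, 0 if feasible

        def dominates(a: Candidate, b: Candidate) -> bool:
            # Feasible candidates beat infeasible ones; among infeasible
            # candidates, smaller total violation wins.
            if a.violation != b.violation:
                return a.violation < b.violation
            better_or_equal = all(x <= y for x, y in zip(a.objectives, b.objectives))
            strictly_better = any(x < y for x, y in zip(a.objectives, b.objectives))
            return better_or_equal and strictly_better

        print(dominates(Candidate([0.1, 2.0], 0.0), Candidate([0.2, 2.5], 0.0)))  # True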

  9. Compact field programmable gate array-based pulse-sequencer and radio-frequency generator for experiments with trapped atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruttivarasin, Thaned, E-mail: thaned.pruttivarasin@riken.jp; Katori, Hidetoshi; Innovative Space-Time Project, ERATO, JST, Bunkyo-ku, Tokyo 113-8656

    We present a compact field-programmable gate array (FPGA) based pulse sequencer and radio-frequency (RF) generator suitable for experiments with cold trapped ions and atoms. The unit is capable of outputting a pulse sequence with at least 32 transistor-transistor logic (TTL) channels with a timing resolution of 40 ns and contains a built-in 100 MHz frequency counter for counting electrical pulses from a photo-multiplier tube. There are 16 independent direct-digital-synthesizer (DDS) RF sources with fast (rise-time of ∼60 ns) amplitude switching and sub-mHz frequency tuning from 0 to 800 MHz.

  10. Radiation Hardening by Software Techniques on FPGAs: Flight Experiment Evaluation and Results

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Flatley, Thomas

    2017-01-01

    We present our work on implementing Radiation Hardening by Software (RHBSW) techniques on the PowerPC 440 processors of the Xilinx Virtex5 FPGAs on the SpaceCube 2.0 platform. The techniques have been matured and tested through simulation modeling, fault emulation, laser fault injection, and now in a flight experiment, as part of the Space Test Program-Houston 4-ISS SpaceCube Experiment 2.0 (STP-H4-ISE 2.0). This work leverages concepts such as heartbeat monitoring, control flow assertions, and checkpointing, commonly used in the High Performance Computing industry, and adapts them for use in remote sensing embedded systems. These techniques have extremely low overhead (typically <1.3%), enabling a 3.3x gain in processing performance compared to the equivalent traditionally radiation-hardened processor. The recently concluded STP-H4 flight experiment was an opportunity to upgrade the RHBSW techniques for the Virtex5 FPGA and demonstrate them on board the ISS to achieve TRL 7. This work details the implementation, on the Virtex5-based SpaceCube 2.0 flight platform, of the RHBSW techniques that were previously developed for the Virtex4-based SpaceCube 1.0 platform. The evaluation spans the development and integration with flight software, remotely uploading the new experiment to the ISS SpaceCube 2.0 platform, and conducting the experiment continuously for 16 days before the platform was decommissioned. The experiment was conducted on two PowerPCs embedded within the Virtex5 FPGA devices; it collected 19,400 checkpoints, processed 253,482 status messages, and incurred 0 faults. These results are highly encouraging, and future work is looking into longer-duration testing as part of the STP-H5 flight experiment.
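
    Two of the named techniques, heartbeat monitoring and checkpointing, are sketched below in a generic embedded-style control loop. The file name, timeout and recovery policy are illustrative assumptions; the flight implementation on the PowerPC 440 processors is not reproduced here.

        # Conceptual sketch of heartbeat monitoring plus checkpointing: the task
        # periodically saves known-good state and reports progress; a watchdog
        # check rolls the task back to the last checkpoint if progress stalls.
        import pickle
        import time

        CHECKPOINT_FILE = "state.ckpt"       # assumed checkpoint location
        HEARTBEAT_TIMEOUT = 1.0              # seconds of silence before recovery

        def save_checkpoint(state):
            with open(CHECKPOINT_FILE, "wb") as f:
                pickle.dump(state, f)

        def load_checkpoint():
            with open(CHECKPOINT_FILE, "rb") as f:
                return pickle.load(f)

        def run_monitored_task(iterations=100):
            state = {"iteration": 0}
            save_checkpoint(state)                        # known-good starting point
            last_heartbeat = time.monotonic()
            for _ in range(iterations):
                state["iteration"] += 1                   # the application's real work
                last_heartbeat = time.monotonic()         # heartbeat: report progress
                if state["iteration"] % 10 == 0:
                    save_checkpoint(state)                # periodic checkpoint
                # Watchdog check (in flight, a separate monitor/logic block):
                if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
                    state = load_checkpoint()             # roll back on a detected hang
            return state

        print(run_monitored_task()["iteration"])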

  11. First results of the silicon telescope using an 'artificial retina' for fast track finding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neri, N.; Abba, A.; Caponio, F.

    We present the first results of the prototype of a silicon tracker with trigger capabilities based on a novel approach for fast track finding. The working principle of the 'artificial retina' is inspired by the processing of visual images by the brain and is based on extensive parallelization of data distribution and pattern recognition. The algorithm has been implemented in commercial FPGAs in three main logic modules: a switch for the routing of the detector hits, a pool of engines for the digital processing of the hits, and a block for the calculation of the track parameters. The architecture is fully pipelined and allows the reconstruction of tracks in real time with a latency of less than 100 clock cycles, corresponding to 0.25 microseconds at a 400 MHz clock. The silicon telescope consists of 8 layers of single-sided silicon strip detectors with 512 strips each. The detector size is about 10 cm x 10 cm and the strip pitch is 183 μm. The detectors are read out by the Beetle chip, a custom ASIC developed for LHCb, which provides the measurement of the hit position and pulse height of 128 channels. The 'artificial retina' algorithm has been implemented on custom data acquisition boards based on Xilinx Kintex 7 lx160 FPGAs. The parameters of the detected tracks are finally transferred to a host PC via USB 3.0. The boards manage the read-out ASICs and the sampling of the analog channels. The read-out is performed at 40 MHz on 4 channels for each ASIC, which corresponds to decoding the telescope information at 1.1 MHz. We report on the first results of the fast tracking device and compare them with simulations. (authors)
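
    The parallel pattern-recognition idea can be illustrated in software: straight-line track hypotheses form a grid of cells, each cell accumulates a Gaussian-weighted response from every hit, and local maxima of the response map identify track candidates. The geometry, grid ranges and receptor width below are illustrative assumptions, not the telescope's actual parameters.

        # Minimal software sketch of the "artificial retina" principle for a
        # telescope of parallel planes: each (slope, intercept) cell sums a
        # Gaussian-weighted response from every hit; peaks mark tracks.
        import numpy as np

        layers_z = np.linspace(0.0, 7.0, 8)               # 8 detector layers along z
        sigma = 0.05                                      # receptor width (cell units)

        def retina_response(hits, m_grid, q_grid):
            """hits: list of (z, x) measurements. Returns the 2-D response map."""
            response = np.zeros((len(m_grid), len(q_grid)))
            for i, m in enumerate(m_grid):
                for j, q in enumerate(q_grid):
                    x_expected = m * layers_z + q         # where this track would cross
                    for z, x in hits:
                        layer = np.argmin(np.abs(layers_z - z))
                        d = x - x_expected[layer]
                        response[i, j] += np.exp(-d * d / (2 * sigma * sigma))
            return response

        # A synthetic track with m = 0.3, q = 1.0 produces a peak near that cell.
        true_hits = [(z, 0.3 * z + 1.0) for z in layers_z]
        m_grid = np.linspace(-1, 1, 41)
        q_grid = np.linspace(0, 2, 41)
        r = retina_response(true_hits, m_grid, q_grid)
        print(np.unravel_index(np.argmax(r), r.shape))    # index of the best cell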

  12. Some innovative programmes in Astronomy education

    NASA Astrophysics Data System (ADS)

    Babu, G. S. D.; Sujatha, S.

    In order to inculcate a systematic scientific awareness of the subject of Astronomy among students and to motivate them to pursue careers in Astronomy and Astrophysics, various innovative educational programmes have been designed at MPBIFR. Among them, the main programme is the "100-hour Certificate Course in Astronomy and Astrophysics", which has been designed basically for students at the undergraduate level of the B.Sc. and B.E. streams. The 100 hours of this course are partitioned as 36 hours of classroom lectures, 34 hours of practicals and field trips, and the remaining 30 hours dedicated to dissertation writing and seminar presentations by the students. In addition, after the 100-hour course, the students have the option to take up specialized advanced courses in the topics of Astrobiology, Astrochemistry, Radio Astronomy, Solar Astronomy and Cosmology as weekend classes. These courses are at the postgraduate level and are covered in a span of 18 to 20 hours spread over a period of 9 to 10 weeks. As a preparatory programme, short-term introductory courses in the same subject are conducted for high school students during the summer vacation period. Along with this, a three-week programme in basic Astronomy is also designed as an educational package for the general public. The students of these courses have the opportunity to be taken on field trips to various astronomical centers as well as the radio, solar and optical observatories as part of their curriculum. The guided trips to ISRO's Satellite Centre at Bangalore and the Satellite Launching Station at SHAR provide a high degree of motivation apart from giving thrilling experiences to the students. Further, motivated students are encouraged to involve themselves in regular research programmes in Astronomy at MPBIFR, publishing research papers in national and international journals. The teaching and mentoring faculty for all these programmes includes visiting scientists and professors from various research organizations located in and around Bangalore as well as the in-house scientific staff. It is gratifying to note that several students, after going through one or more of these courses, have indeed made commitments to pursue Astronomy as their career, some of them even obtaining admission into institutes and universities in India and abroad for further studies in this field.

  13. Programmable shunts and headphones: Are they safe together?

    PubMed

    Spader, Heather S; Ratanaprasatporn, Linda; Morrison, John F; Grossberg, Jonathan A; Cosgrove, G Rees

    2015-10-01

    Programmable shunts have a valuable role in the treatment of patients with hydrocephalus, but because a magnet is used to change valve settings, interactions with external magnets may reprogram these shunts. Previous studies have demonstrated the ability of magnetic toys and iPads to erroneously reprogram shunts. Headphones are even more ubiquitous, and they contain an electromagnet for sound projection that sits on the head very close to the shunt valve. This study is the first to look at the magnetic field emissions of headphones and their effect on reprogrammable shunt valves to ascertain whether headphones are safe for patients with these shunts to wear. In this in vitro study of the magnetic properties of headphones and their interactions with 3 different programmable shunts, the authors evaluated Apple earbuds, Beats by Dr. Dre, and Bose QuietComfort Acoustic Noise Cancelling headphones. Each headphone was tested for electromagnetic field emissions using a direct current gaussmeter. The following valves were evaluated: Codman-Hakim programmable valve, Medtronic Strata II valve, and Aesculap proGAV. Each valve was tested at distances of 0 to 50 mm (in 5-mm increments) from each headphone. The exposure time at each distance was 1 minute, and 3 trials were performed to confirm results at each valve setting and distance. All 3 headphones generated magnetic fields greater than the respective shunt manufacturer's recommended strength of exposure, but these fields did not persist beyond 5 mm. By 2 cm, the field levels were below 20 G, well below the Medtronic recommendation of 90 G and the Codman recommendation of 80 G. Because the mechanism for the proGAV is different, there is no recommended gauss level. There was no change in gauss-level emissions by the headphones with changes in frequency and amplitude. Both the Strata and Codman-Hakim valves were reprogrammed by direct contact (distance 0 mm) with the Bose headphones. When a rotation component was added, all 3 headphones reprogrammed the Strata and Codman-Hakim valves at 0 mm. At all distances above 0 mm, the headphones did not affect the shunts. The proGAV valve was not affected by headphones at any distance. Although all the headphones studied generated significant gauss fields at distances less than 5 mm, the programmable valve settings only changed at a distance of 0 mm (i.e., with direct contact). Given the subcutaneous location of the valve, the authors conclude that it is highly unlikely that commercially available or customary headphones can contribute to the reprogramming of shunts.

  14. Current status of master of public health programmes in India: a scoping review.

    PubMed

    Tiwari, Ritika; Negandhi, Himanshu; Zodpey, Sanjay

    2018-04-01

    There is a recognized need to improve training in public health in India. Currently, several Indian institutions and universities offer the Master of Public Health (MPH) programme. However, in the absence of any formal body or council for regulating public health education in the country, there is limited information available on these programmes. This scoping review was therefore undertaken to review the current status of MPH programmes in India. Information on MPH programmes was obtained using a two-step process. First, a list of all institutions offering MPH programmes in India was compiled by use of an internet and literature search. Second, detailed information on each programme was collected via an internet and literature search and through direct contact with the institutions and recognized experts in public health education. Between 1997 and 2016-2017, the number of institutions offering MPH programmes increased from 2 to 44. The eligibility criteria for the MPH programmes are variable. All programmes include some field experience. The ratio of faculty number to students enrolled ranged from 1:0.1 to 1:42. In the 2016-2017 academic year, 1190 places were being offered on MPH programmes but only 704 students were enrolled. MPH programmes being offered in India have witnessed a rapid expansion in the past two decades. This growth in supply of public health graduates is not yet matched by an increased demand. Despite the recognized need to strengthen the public health workforce in India, there is no clearly defined career pathway for MPH graduates in the national public health infrastructure. Institutions and public health bodies must collaborate to design and deliver MPH programmes to overcome the shortage of public health professionals, such that the development goals for India might be met.

  15. Research Capacity Strengthening in Low and Middle Income Countries - An Evaluation of the WHO/TDR Career Development Fellowship Programme.

    PubMed

    Käser, Michael; Maure, Christine; Halpaap, Beatrice M M; Vahedi, Mahnaz; Yamaka, Sara; Launois, Pascal; Casamitjana, Núria

    2016-05-01

    Between August 2012 and April 2013 the Career Development Fellowship programme of the Special Programme for Research and Training in Tropical Diseases (World Health Organization) underwent an external evaluation to assess its past performance and determine recommendations for future programme development and continuous performance improvement. The programme provides a year-long training experience for qualified researchers from low and middle income countries at pharmaceutical companies or product development partnerships. Independent evaluators from the Swiss Tropical and Public Health Institute and the Barcelona Institute for Global Health used a results-based methodology to review the programme. Data were gathered through document review, surveys, and interviews with a range of programme participants. The final evaluation report found the Career Development Fellowship to be relevant to organizers' and programme objectives, efficient in its operations, and effective in its training scheme, which was found to address needs and gaps for both fellows and their home institutions. Evaluators found that the programme has the potential for impact and sustainability beyond the programme period, especially with the successful reintegration of fellows into their home institutions, through which newly-developed skills can be shared at the institutional level. Recommendations included the development of a scheme to support the re-integration of fellows into their home institutions post-fellowship and to seek partnerships to facilitate the scaling-up of the programme. The impact of the Professional Membership Scheme, an online professional development tool launched through the programme, beyond the scope of the Career Development Fellowship programme itself to other applications, has been identified as a positive unintended outcome. The results of this evaluation may be of interest for other efforts in the field of research capacity strengthening in LMICs or, generally, to other professional development schemes of a similar structure.

  16. Coordination and Data Management of the International Arctic Buoy Programme (IABP)

    DTIC Science & Technology

    2002-09-30

    … for forcing, validation and assimilation into numerical climate models, and for forecasting weather and ice conditions. TRANSITIONS: Using IABP … analyzed geophysical fields. APPROACH: The IABP is a collaboration between 25 different institutions from 8 different countries, which work together…

  17. Marine Sciences in CMEA Countries: Programme and Results of Co-operation. Unesco Reports in Marine Science No. 38.

    ERIC Educational Resources Information Center

    Aksionov, A. A.

    In 1971, the 25th Session of the Council for Mutual Economic Assistance (CMEA) adopted a Programme for the Development of Socialist Economic Integration. Later, part of this program became a program of cooperation in the field of oceanography, particularly the chemical, physical, and biological processes of certain important areas of the ocean. To…

  18. Evaluative Study of M.A. Education Programmes of Teacher Education at Higher Education Level in Pakistan

    ERIC Educational Resources Information Center

    Fatima, Jabeen; Naseer Ud Din, Muhammad

    2010-01-01

    The study was aimed at evaluating the MA Education Programme of teacher education in Pakistan. Postgraduate teacher training institutes in Pakistan grant the Master of Education (MA/M.Ed.), Master of Philosophy (M.Phil) and Doctor of Philosophy (Ph.D) post-graduate degrees in the field of education to enhance the careers and accelerate the…

  19. Student Teachers of Technology and Design into Industry: A Northern Ireland Case Study

    ERIC Educational Resources Information Center

    Gibson, Ken

    2013-01-01

    This paper, based in Northern Ireland, is a case study of an innovative programme which places year 3 B.Ed. post-primary student teachers of Technology and Design into industry for a five-day period. The industrial placement programme is set in an international context of evolving pre-service field placements and in a local context defined by the…

  20. Making Fieldwork Valuable: Designing fieldwork programmes to meet the needs of young geologists

    NASA Astrophysics Data System (ADS)

    Thorne, Michael

    2016-04-01

    This work presents the culmination of many years of designing and operating field courses for students studying Geology at post-16 level in the context of the British schooling system. Provided is a toolkit, and accompanying rationale, for educators to use when building a sustainable and manageable programme of fieldwork for young geologists. Many educators, particularly under the confines of new regulations, have found the prospect of increased paperwork and accountability challenging, and consequently field courses often play a peripheral, even non-existent, role in the scheme of work for a large number of young geologists. The process of designing a suitable programme of field study must take account of the relevant stakeholders; chief among these are the views of students and staff, but also those of parents, potential destination universities, exam boards and qualification accrediting groups. An audit of the desired characteristics a programme of fieldwork would contain was completed using information gained through first-hand research with students as well as in conversation with local universities. The results of this audit highlighted several constraining factors, ranging from the potential cost implications for school and parents, to the extent to which content would support learning in class, and the feasibility of achieving all characteristics given limitations on staff and time. Student perceptions of the value of fieldwork were gauged through various means: group interviews were conducted over a number of academic years, field course evaluations were completed following excursions, and questionnaires were distributed at the close of the 2014-2015 academic year. Findings demonstrated that student perceptions of the benefits offered by fieldwork were severalfold; chiefly, students felt the inclusion of fieldwork was a very important motivator in their decision to study the subject and maintain curiosity in their studies, while the belief that fieldwork consolidates abstract ideas from class and the importance of its role in team-building exercises were also broadly held views. The strength of opinion demonstrated by students reinforces the importance of decisions made regarding fieldwork. Following the initial auditing stage, potential field sites were investigated by staff and assessed for their potential to meet the desired characteristics; where promise was shown, these localities were developed into individual courses where discrete skills could be developed. By assembling the range of learning outcomes from each individual field trip, a narrative 'learning journey' was developed with a clear end goal. Having been through this process and seen the positive effects on student progress, this work presents a toolkit to educators to provide assistance and a framework for the development of further programmes of field study through equally considered design.
