Science.gov

Sample records for level fpga trigger

  1. FPGA Trigger System to Run Klystrons

    SciTech Connect

    Gray, Darius; /Texas A&M /SLAC

    2010-08-25

    The Klystron Department is in need of a new trigger system to update the laboratory's capabilities. The objective of this research is to develop the trigger system using Field Programmable Gate Array (FPGA) technology, with a user interface that allows one to communicate with the FPGA via a Universal Serial Bus (USB). This trigger system will be used for the testing of klystrons. The key materials used consist of the Xilinx Integrated Software Environment (ISE) Foundation, a Programmable Read-Only Memory (PROM) XCF04S, a Xilinx Spartan-3E XC3S500E FPGA, a Xilinx Platform Cable USB II, a Printed Circuit Board (PCB), a 100 MHz oscillator, and an oscilloscope. Key considerations include eight triggers, two of which have variable phase-shifting capabilities. Once the project was completed, the output signals could be manipulated via a Graphical User Interface by varying the delay and width of the signal. This was as planned; however, the ability to vary the phase was not completed and remains future work. This project will give the operators in the Klystron Department more flexibility to run various tests.
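
    The delay and width control described above amounts to counter-comparator logic on the FPGA. The following Python sketch is a hypothetical model of that behavior (the names and the simulation itself are illustrative; the actual firmware is not shown in the record):

```python
# Hypothetical model of a counter-based FPGA pulse generator: after a
# programmable delay (in clock ticks) the output goes high for a
# programmable width, as a comparator on a free-running counter would do.
def pulse_train(delay, width, n_cycles):
    """Return the output level for each of n_cycles clock ticks."""
    return [1 if delay <= t < delay + width else 0 for t in range(n_cycles)]

# With the 100 MHz oscillator mentioned above (10 ns per tick),
# delay=5 and width=3 describe a 30 ns pulse starting after 50 ns.
sig = pulse_train(5, 3, 12)
```

    Varying `delay` and `width` from a GUI then maps directly onto writing two comparator registers.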

  2. A Pattern Recognition Mezzanine based on Associative Memory and FPGA technology for Level 1 Track Triggers for the HL-LHC upgrade

    NASA Astrophysics Data System (ADS)

    Magalotti, D.; Alunni, L.; Biesuz, N.; Bilei, G. M.; Citraro, S.; Crescioli, F.; Fanò, L.; Fedi, G.; Magazzù, G.; Servoli, L.; Storchi, L.; Palla, F.; Placidi, P.; Rossi, E.; Spiezia, A.

    2016-02-01

    The luminosity increase at the HL-LHC will require the introduction of tracker information in the Level-1 trigger system of the experiments in order to maintain an acceptable trigger rate for selecting interesting events despite the order-of-magnitude increase in minimum-bias interactions. In order to extract the track information within the required latency (~5-10 μs, depending on the experiment), a dedicated hardware processor needs to be used. We here propose a prototype system (Pattern Recognition Mezzanine) as the core of pattern recognition and track fitting for HL-LHC experiments, combining the power of both an Associative Memory custom ASIC and modern Field Programmable Gate Array (FPGA) devices.

  3. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    SciTech Connect

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton–nucleon interactions in a high-rate environment. The trigger system consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 hodoscope channels are digitized by each FPGA with 1-ns resolution using time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.

  4. FPGA-based Trigger System for the Fermilab SeaQuest Experiment

    DOE PAGES (Beta)

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-09-10

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton–nucleon interactions in a high-rate environment. The trigger system consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 hodoscope channels are digitized by each FPGA with 1-ns resolution using time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Finally, information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.

  5. FPGA-based trigger system for the Fermilab SeaQuest experiment

    NASA Astrophysics Data System (ADS)

    Shiu, Shiuan-Hal; Wu, Jinyuan; McClellan, Randall Evan; Chang, Ting-Hua; Chang, Wen-Chen; Chen, Yen-Chu; Gilman, Ron; Nakano, Kenichi; Peng, Jen-Chieh; Wang, Su-Yin

    2015-12-01

    The SeaQuest experiment (Fermilab E906) detects pairs of energetic μ+ and μ− produced in 120 GeV/c proton-nucleon interactions in a high-rate environment. The trigger system consists of several arrays of scintillator hodoscopes and a set of field-programmable gate array (FPGA) based VMEbus modules. Signals from up to 96 hodoscope channels are digitized by each FPGA with 1-ns resolution using time-to-digital converter (TDC) firmware. The delay of the TDC output can be adjusted channel-by-channel in 1-ns steps and then re-aligned with the beam RF clock. The hit pattern on the hodoscope planes is then examined against pre-determined trigger matrices to identify candidate muon tracks. Information on the candidate tracks is sent to the second-level FPGA-based track correlator to find candidate di-muon events. The design and implementation of the FPGA-based trigger system for the SeaQuest experiment are presented.
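
    The trigger-matrix test described above can be modeled in software. The following Python sketch is a hypothetical illustration (the bitmask encoding, road format, and plane count are assumptions, not the actual SeaQuest firmware):

```python
# Hypothetical sketch: each hodoscope plane's hits as an integer bitmask,
# and the pre-determined trigger matrix as a list of "roads" (one paddle
# index per plane). The FPGA checks all roads in parallel; software can
# only loop, but the matching condition is the same.
def matches_any_road(hits_per_plane, trigger_matrix):
    """True if some road has its paddle hit in every plane."""
    return any(
        all(hits & (1 << paddle) for hits, paddle in zip(hits_per_plane, road))
        for road in trigger_matrix
    )

roads = [(2, 3, 4, 5)]                    # one candidate muon road
hits = [1 << 2, 1 << 3, 1 << 4, 1 << 5]   # paddles 2, 3, 4, 5 fired
```

    A road with any plane missing its hit is rejected, which is what suppresses random coincidences.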

  6. FPGA Based Wavelet Trigger in Radio Detection of Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew; Szadkowska, Anna

    2014-12-01

    Experiments that observe coherent radio emission from extensive air showers induced by ultra-high-energy cosmic rays are designed for a detailed study of the development of the electromagnetic part of air showers. Radio detectors can operate with 100% uptime, as can, e.g., surface detectors based on water-Cherenkov tanks. They are being developed for ground-based experiments (e.g., the Pierre Auger Observatory) as another type of air-shower detector in addition to fluorescence detectors, which operate with only a ~10% duty cycle, on dark nights. The radio signals from air showers are caused by coherent emission from geomagnetic and charge-excess processes. The self-triggers in radio detectors currently in use often generate a dense stream of data, which is analyzed afterwards, and these huge amounts of registered data require significant manpower for off-line analysis; improving trigger efficiency is therefore a relevant factor. The wavelet trigger, which investigates on-line the power of radio signals (~V²/R), is promising; however, it requires some improvements with respect to current designs. In this work, Morlet wavelets with various scaling factors were used for an analysis of real data from the Auger Engineering Radio Array and for optimization of resource utilization in an FPGA. The wavelet analysis showed that the power of events is concentrated mostly in a limited range of the frequency spectrum (consistent with the range imposed by the input analog band-pass filter). However, we found several events with suspicious spectral characteristics, where the signal power is spread over the full bandwidth sampled by a 200 MHz digitizer, with significant contributions at very high and very low frequencies. These events may not originate from cosmic-ray showers but could be the result of human contamination. The engine of the wavelet analysis can be implemented in modern powerful FPGAs and can remove suspicious events on-line to reduce the trigger rate.

  7. FPGA Based Wavelet Trigger in Radio Detection of Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew; Szadkowska, Anna

    2014-09-01

    Experiments that observe coherent radio emission from extensive air showers induced by ultra-high-energy cosmic rays are designed for a detailed study of the development of the electromagnetic part of air showers. Radio detectors can operate with 100% uptime, as can, e.g., surface detectors based on water-Cherenkov tanks. They are being developed for ground-based experiments (e.g., the Pierre Auger Observatory) as another type of air-shower detector in addition to fluorescence detectors, which operate with only a ~10% duty cycle, on dark nights. The radio signals from air showers are caused by coherent emission from geomagnetic and charge-excess processes. The self-triggers in radio detectors currently in use often generate a dense stream of data, which is analyzed afterwards, and these huge amounts of registered data require significant manpower for off-line analysis; improving trigger efficiency is therefore a relevant factor. The wavelet trigger, which investigates on-line the power of radio signals (~V²/R), is promising; however, it requires some improvements with respect to current designs. In this work, Morlet wavelets with various scaling factors were used for an analysis of real data from the Auger Engineering Radio Array and for optimization of resource utilization in an FPGA. The wavelet analysis showed that the power of events is concentrated mostly in a limited range of the frequency spectrum (consistent with the range imposed by the input analog band-pass filter). However, we found several events with suspicious spectral characteristics, where the signal power is spread over the full bandwidth sampled by a 200 MHz digitizer, with significant contributions at very high and very low frequencies. These events may not originate from cosmic-ray showers but could be the result of human contamination. The engine of the wavelet analysis can be implemented in modern powerful FPGAs and can remove suspicious events on-line to reduce the trigger rate.
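
    The wavelet power estimate can be sketched in a few lines of NumPy. This is a hypothetical illustration: the scale, carrier frequency, and synthetic trace below are arbitrary choices, not AERA parameters.

```python
import numpy as np

# Hypothetical sketch of the wavelet power estimate: correlate the trace
# with a real Morlet-like wavelet at one scale and square the result.
def morlet_power(signal, fs, f0, scale):
    t = np.arange(-3 * scale, 3 * scale, 1.0 / fs)
    wavelet = np.exp(-t**2 / (2 * scale**2)) * np.cos(2 * np.pi * f0 * t)
    coeff = np.convolve(signal, wavelet, mode="same")
    return coeff**2          # proxy for the signal power ~V^2/R

fs = 200e6                   # the 200 MHz digitizer named above
t = np.arange(0, 2e-6, 1 / fs)
# Synthetic "event": a 60 MHz burst under a Gaussian envelope at 1 us.
trace = np.sin(2 * np.pi * 60e6 * t) * np.exp(-((t - 1e-6) / 0.2e-6) ** 2)
power = morlet_power(trace, fs, 60e6, 0.2e-6)
```

    Running this at several scales and comparing where the power lands in frequency is, in spirit, how narrow-band showers are separated from broadband human-made contamination.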

  8. Tiny Triplet Finder (TTF) - a track segment recognition scheme and its FPGA implementation developed in the BTeV level 1 trigger system

    SciTech Connect

    Wu, Jin-Yuan; Shi, Z.; Wang, M.; Garcia, H.; Gottschalk, E.; /Fermilab

    2004-11-01

    We describe a track segment recognition scheme called the Tiny Triplet Finder (TTF) that involves the grouping of three hits satisfying a constraint, for example, forming a straight line. The TTF performs this O(n³) function in O(n) time. The logic element usage in FPGA implementations of typical track segment recognition functions is O(N²), where N is the number of bins in the coordinate considered, while that for the TTF is O(N log N), which is significantly smaller for large N. The TTF is also suitable for software implementation and many other pattern recognition problems.
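
    The shift-and-coincidence idea behind the TTF can be illustrated with integer bitmasks. This is a hypothetical software model of the principle, not the BTeV implementation (which realizes the shifts in logic):

```python
# Hypothetical bitmask model of the TTF idea: a straight track crossing
# bins (b - s, b, b + s) in three layers lines up on bin b when layer 0
# is shifted by +s and layer 2 by -s, so one AND per shift value s
# replaces the naive O(n^3) triple loop over hit combinations.
def find_triplets(hits0, hits1, hits2, nbins):
    """hitsN are integer bitmasks of hit bins; returns aligned triplets."""
    found = set()
    for s in range(-(nbins - 1), nbins):
        shifted0 = hits0 << s if s >= 0 else hits0 >> -s
        shifted2 = hits2 >> s if s >= 0 else hits2 << -s
        coinc = shifted0 & hits1 & shifted2
        b = 0
        while coinc:
            if coinc & 1:
                found.add((b - s, b, b + s))
            coinc >>= 1
            b += 1
    return found
```

    Each AND tests all middle-layer bins for one slope at once, which is what makes a hardware realization cheap.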

  9. A pattern recognition mezzanine based on associative memory and FPGA technology for L1 track triggering at HL-LHC

    NASA Astrophysics Data System (ADS)

    Alunni, L.; Biesuz, N.; Bilei, G. M.; Citraro, S.; Crescioli, F.; Fanò, L.; Fedi, G.; Magalotti, D.; Magazzù, G.; Servoli, L.; Storchi, L.; Palla, F.; Placidi, P.; Papi, A.; Piadyk, Y.; Rossi, E.; Spiezia, A.

    2016-07-01

    The increase of luminosity at the HL-LHC will require the introduction of tracker information in the Level-1 trigger system of the experiments to maintain an acceptable trigger rate for selecting interesting events despite the one-order-of-magnitude increase in minimum-bias interactions. To extract the track information within the required latency, dedicated hardware has to be used. We present tests of a prototype system (Pattern Recognition Mezzanine) as the core of pattern recognition and track fitting for the HL-LHC ATLAS and CMS experiments, combining the power of both an Associative Memory custom ASIC and modern Field Programmable Gate Array (FPGA) devices.

  10. The Level 0 Trigger Processor for the NA62 experiment

    NASA Astrophysics Data System (ADS)

    Chiozzi, S.; Gamberini, E.; Gianoli, A.; Mila, G.; Neri, I.; Petrucci, F.; Soldi, D.

    2016-07-01

    In the NA62 experiment at CERN, the intense flux of particles requires a high-performance trigger for the data acquisition system. A Level 0 Trigger Processor (L0TP) was realized, performing the event selection based on trigger primitives coming from the sub-detectors and reducing the trigger rate from 10 MHz to 1 MHz. The L0TP is based on a commercial FPGA device and has been implemented in two different solutions. The performance of the two systems is highlighted and compared.

  11. The high-level trigger of ALICE

    NASA Astrophysics Data System (ADS)

    Tilsner, H.; Alt, T.; Aurbakken, K.; Grastveit, G.; Helstrup, H.; Lindenstruth, V.; Loizides, C.; Nystrand, J.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Ullaland, K.; Vestbo, A.; Vik, T.

    One of the main tracking detectors of the forthcoming ALICE experiment at the LHC is a cylindrical Time Projection Chamber (TPC) with an expected data volume of about 75 MByte per event. This data volume, in combination with the presumed maximum bandwidth of 1.2 GByte/s to the mass storage system, would limit the maximum event rate to 20 Hz. In order to achieve higher event rates, online data processing has to be applied. This implies either the detection and read-out of only those events which contain interesting physical signatures, or an efficient compression of the data by modeling techniques. In order to cope with the anticipated data rate, massive parallel computing power is required. It will be provided in the form of a clustered farm of SMP nodes, based on off-the-shelf PCs, which are connected with a high-bandwidth, low-overhead network. This High-Level Trigger (HLT) will be able to process a data rate of 25 GByte/s online. The front-end electronics of the individual sub-detectors is connected to the HLT via an optical link and a custom PCI card which is mounted in the clustered PCs. The PCI card is equipped with an FPGA necessary for the implementation of the PCI-bus protocol. Therefore, this FPGA can also be used to assist the host processor with first-level processing. The first-level processing done on the FPGA includes conventional cluster finding for low-multiplicity events and local track finding based on the Hough transformation of the raw data for high-multiplicity events.

    PACS: 07.05.-t Computers in experimental physics; 07.05.Hd Data acquisition: hardware and software; 29.85.+c Computer data analysis
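
    The Hough-transform track finding mentioned above votes each hit into a parameter grid and looks for peaks. The sketch below is a generic straight-line Hough accumulator in Python, a hypothetical stand-in for the FPGA-assisted step (grid sizes and ranges are illustrative choices, not the HLT's):

```python
import numpy as np

# Accumulate votes in a (theta, rho) grid, with rho = x*cos(theta) +
# y*sin(theta); a peak marks a straight track candidate shared by many hits.
def hough_lines(points, n_theta=64, n_rho=64, rho_max=10.0):
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    return acc

# Five collinear points (on y = x) all vote into one peak bin.
acc = hough_lines([(i, i) for i in range(5)])
```

    In hardware the accumulation parallelizes naturally, since each hit's votes are independent.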

  12. The CMS high level trigger

    NASA Astrophysics Data System (ADS)

    Gori, Valentina

    2014-05-01

    The CMS experiment has been designed with a 2-level trigger system: the Level 1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running within the available computing power, the sustainable output rate, and the selection efficiency. Here we present the performance of the main triggers used during the 2012 data taking, ranging from simpler single-object selections to more complex algorithms combining different objects, and applying analysis-level reconstruction and selection. We discuss the optimisation of the triggers and the specific techniques to cope with the increasing LHC pile-up, reducing its impact on the physics performance.

  13. The CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Trocino, Daniele

    2014-06-01

    The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented in custom-designed electronics, and the High-Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a tradeoff between the complexity of the algorithms running with the available computing power, the sustainable output rate, and the selection efficiency. We present the performance of the main triggers used during the 2012 data taking, ranging from simple single-object selections to more complex algorithms combining different objects, and applying analysis-level reconstruction and selection. We discuss the optimisation of the trigger and the specific techniques to cope with the increasing LHC pile-up, reducing its impact on the physics performance.

  14. FPGA-based trigger system for the LUX dark matter experiment

    NASA Astrophysics Data System (ADS)

    Akerib, D. S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Bradley, A.; Bramante, R.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chapman, J. J.; Chiller, A. A.; Chiller, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; de Viveiros, L.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Ihm, M.; Jacobsen, R. G.; Ji, W.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Malling, D. C.; Manalaysay, A. G.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O`Sullivan, K.; Oliver-Mallory, K. C.; Ott, R. A.; Palladino, K. J.; Pangilinan, M.; Pease, E. K.; Phelps, P.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Skulski, W.; Solovov, V. N.; Sorensen, P.; Stephenson, S.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Yin, J.; Young, S. K.; Zhang, C.

    2016-05-01

    LUX is a two-phase (liquid/gas) xenon time projection chamber designed to detect nuclear recoils resulting from interactions with dark matter particles. Signals from the detector are processed with an FPGA-based digital trigger system that analyzes the incoming data in real time, with a latency of just a few microseconds. The system enables first-pass selection of events of interest based on their pulse-shape characteristics and 3D localization of the interactions. It has been shown to be >99% efficient in triggering on S2 signals induced by only a few electrons extracted from the liquid. It has been operating continuously and reliably since its full underground deployment in early 2013. This document is an overview of the system's capabilities, its inner workings, and its performance.
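
    A first-pass pulse-shape selection of the kind described can be reduced to thresholds on simple waveform quantities. The sketch below is purely illustrative: the area/width criterion and all numbers are assumptions, not LUX's actual trigger settings.

```python
# Hypothetical first-pass pulse-shape cut: flag a digitized waveform as
# S2-like when its above-baseline area and width pass thresholds. S2
# pulses are wider and larger than single-photon S1-like pulses, so even
# this crude pair of sums is a usable FPGA-friendly discriminant.
def s2_like(samples, baseline, min_area, min_width):
    above = [s - baseline for s in samples if s > baseline]
    return sum(above) >= min_area and len(above) >= min_width

pulse = [0, 0, 5, 9, 8, 6, 0]   # a toy digitized pulse
```

    Both quantities are running sums, so they map directly onto pipelined accumulators in an FPGA.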

  15. Implementation of FPGA-based level-1 tracking at CMS for the HL-LHC

    NASA Astrophysics Data System (ADS)

    Chaves, J.

    2014-10-01

    A new approach for track reconstruction is presented to be used in the all-hardware first level of the CMS trigger. The approach is intended for the upgraded all-silicon tracker, which is to be installed for the High Luminosity era of the LHC (HL-LHC). The upgraded LHC machine is expected to deliver a luminosity on the order of 5 × 10³⁴ cm⁻²s⁻¹, which corresponds to about 125 pileup events in each bunch crossing at a frequency of 40 MHz. To keep the CMS trigger rate at a manageable level under these conditions, it is necessary to make quick decisions on the events that will be processed. The timing estimates for the algorithm are expected to be below 5 μs, well within the requirements of the L1 trigger at CMS for track identification. The algorithm is integer-based, allowing it to be implemented on an FPGA. Currently we are working on a demonstrator hardware implementation using a Xilinx Virtex 6 FPGA. Results from simulations in C++ and Verilog are presented to show the algorithm performance in terms of data throughput and parameter resolution.
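
    "Integer-based" here means the arithmetic can be done in fixed point. As a hypothetical illustration (the scale factor and the least-squares form are assumptions, not the CMS algorithm), a straight-line fit reduces to integer sums and a divide:

```python
# Hypothetical sketch of integer-only track fitting: a least-squares
# straight-line fit in fixed point, using only the adds, multiplies,
# and divides an FPGA datapath provides.
SCALE = 1024  # fixed-point scale: 10 fractional bits (an assumption)

def fit_line_fixed_point(zs, rs):
    """Slope and intercept in units of 1/SCALE, from integer coordinates."""
    n = len(zs)
    sz, sr = sum(zs), sum(rs)
    szz = sum(z * z for z in zs)
    szr = sum(z * r for z, r in zip(zs, rs))
    det = n * szz - sz * sz
    slope = (n * szr - sz * sr) * SCALE // det
    intercept = (szz * sr - sz * szr) * SCALE // det
    return slope, intercept
```

    For hits on r = 2z + 1 the fit returns slope 2 and intercept 1, each scaled by 1024.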

  16. Level Zero Trigger processor for the ultra rare kaon decay experiment—NA62

    NASA Astrophysics Data System (ADS)

    Chiozzi, S.; Gamberini, E.; Gianoli, A.; Mila, G.; Neri, I.; Petrucci, F.; Soldi, D.

    2016-02-01

    In the NA62 experiment at the CERN SPS, the communication between detectors and the Lowest Level (L0) trigger processor is performed via Ethernet packets, using the UDP protocol. The L0 Trigger Processor handles the signals from the sub-detectors that take part in the trigger generation. In order to choose the best solution for its realization, two different approaches have been implemented: the first is fully based on an FPGA device, while the second joins an off-the-shelf PC to the FPGA. The performance of the two systems will be discussed and compared.

  17. The CMS Level-1 Trigger Barrel Track Finder

    NASA Astrophysics Data System (ADS)

    Ero, J.; Evangelou, I.; Flouris, G.; Foudas, C.; Guiducci, L.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Sotiropoulos, S.; Sphicas, P.; Triossi, A.; Wulz, C.

    2016-03-01

    The design and performance of the upgraded CMS Level-1 Trigger Barrel Muon Track Finder (BMTF) is presented. Monte Carlo simulation data as well as cosmic ray data from a CMS muon detector slice test have been used to study in detail the performance of the new track finder. The design architecture is based on twelve MP7 cards each of which uses a Xilinx Virtex-7 FPGA and can receive and transmit data at 10 Gbps from 72 input and 72 output fibers. According to the CMS Trigger Upgrade TDR the BMTF receives trigger primitive data which are computed using both RPC and DT data and transmits data from a number of muon candidates to the upgraded Global Muon Trigger. Results from detailed studies of comparisons between the BMTF algorithm results and the results of a C++ emulator are also presented. The new BMTF will be commissioned for data taking in 2016.

  18. An FPGA-based trigger for the phase II of the MEG experiment

    NASA Astrophysics Data System (ADS)

    Baldini, A.; Bemporad, C.; Cei, F.; Galli, L.; Grassi, M.; Morsani, F.; Nicolò, D.; Ritt, S.; Venturini, M.

    2016-07-01

    For phase II of MEG, we are going to develop a combined trigger and DAQ system. Here we focus on the former, which performs an on-line reconstruction of detector signals and event selection within 450 μs of event occurrence. Trigger concentrator boards (TCB) are under development to gather data from different crates, each connected to a set of detector channels, and to run higher-level algorithms that issue a trigger in the case of a candidate signal event. We describe the major features of the new system, in comparison with phase I, as well as its performance in terms of selection efficiency and background rejection.

  19. CDF level 2 trigger upgrade

    SciTech Connect

    Anikeev, K.; Bogdan, M.; DeMaat, R.; Fedorko, W.; Frisch, H.; Hahn, K.; Hakala, M.; Keener, P.; Kim, Y.; Kroll, J.; Kwang, S.; Lewis, J.; Lin, C.; Liu, T.; Marjamaa, F.; Mansikkala, T.; Neu, C.; Pitkanen, S.; Reisert, B.; Rusu, V.; Sanders, H.; /Fermilab /Chicago U. /Pennsylvania U.

    2006-01-01

    We describe the new CDF Level 2 Trigger, which was commissioned during Spring 2005. The upgrade was necessitated by several factors that included increased bandwidth requirements, in view of the growing instantaneous luminosity of the Tevatron, and the need for a more robust system, since the older system was reaching the limits of maintainability. The challenges in designing the new system were interfacing with many different upstream detector subsystems, processing larger volumes of data at higher speed, and minimizing the impact on running the CDF experiment during the system commissioning phase. To meet these challenges, the new system was designed around a general purpose motherboard, the PULSAR, which is instrumented with powerful FPGAs and modern SRAMs, and which uses mezzanine cards to interface with upstream detector components and an industry standard data link (S-LINK) within the system.

  20. The ALICE High Level Trigger: status and plans

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Rohr, David; Gorbunov, Sergey; Breitner, Timo; Lehrbach, Johannes; Lindenstruth, Volker; Berzano, Dario

    2015-12-01

    The ALICE High Level Trigger (HLT) is an online reconstruction, triggering, and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it extensively uses modern coprocessor technologies such as general-purpose graphics processing units (GPGPU) and field-programmable gate arrays (FPGA) in the data flow. Real-time data compression is performed using a cluster-finder algorithm implemented on FPGA boards. These data, instead of raw clusters, are used in the subsequent processing and storage, resulting in a compression factor of around 4. Track finding is performed using a cellular automaton and a Kalman filter algorithm on GPGPU hardware, where both CUDA and OpenCL technologies can be used interchangeably. The ALICE upgrade requires further development of online concepts to include detector calibration and stronger data compression. The current HLT farm will be used as a test bed for online calibration and for both synchronous and asynchronous processing frameworks already during Run 2, before the upgrade. For opportunistic use as a Grid computing site during periods of inactivity of the experiment, a virtualisation-based setup is deployed.

  1. Wide dynamic range FPGA-based TDC for monitoring a trigger timing distribution system in linear accelerators

    NASA Astrophysics Data System (ADS)

    Suwada, T.; Miyahara, F.; Furukawa, K.; Shoji, M.; Ikeno, M.; Tanaka, M.

    2015-06-01

    A new field-programmable gate array (FPGA)-based time-to-digital converter (TDC) with a wide dynamic range greater than 20 ms has been developed to monitor the timing of various pulsed devices in the trigger timing distribution system of the KEKB injector linac for the SuperKEKB factory project. The pulsed devices are driven by feeding regular as well as irregular (or event-based) timing pulses. The timing pulses are distributed to these pulsed devices along the linac beam line over fiber-optic links, on the basis of parameters set pulse-by-pulse in the event-based timing and control system within 20 ms. To monitor the timing as precisely as possible, a 16-channel FPGA-based TDC with a resolution of 1 ns has been developed on a Xilinx Spartan-6 FPGA mounted on a VME board. The resolution was achieved by applying a multisampling technique, and the accuracies were 2.6 ns (rms) and less than 1 ns (rms) within dynamic ranges of 20 ms and 7.5 ms, respectively. Various nonlinear effects were reduced by implementing a high-precision external clock with a built-in temperature-compensated crystal oscillator.
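
    The multisampling principle can be shown with a toy model: oversampling the pulse (e.g. with several phase-shifted clocks) locates an edge to one effective sample period, while a coarse counter extends the dynamic range. This is a hypothetical simplification, not the KEK firmware:

```python
# Hypothetical sketch: locate the first rising edge in an oversampled
# 0/1 stream; the edge time is the sample index times the effective
# sample period (1 ns for the TDC described above).
def edge_time_ns(bits, sample_period_ns):
    """Return the time of the first 0->1 transition, or None."""
    for i in range(1, len(bits)):
        if bits[i - 1] == 0 and bits[i] == 1:
            return i * sample_period_ns
    return None

# An edge at sample 3 with 1 ns effective sampling is timed to 3 ns.
t_edge = edge_time_ns([0, 0, 0, 1, 1, 1], 1)
```

    In hardware the same search is a priority encoder over the parallel phase samples, evaluated once per coarse clock.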

  2. The CDF LEVEL3 trigger

    SciTech Connect

    Carroll, T.; Joshi, U.; Auchincloss, P.

    1989-04-01

    CDF is currently taking data at a luminosity of 10³⁰ cm⁻² s⁻¹ using a four-level event filtering scheme. The fourth level, LEVEL3, uses ACP (Fermilab's Advanced Computer Program) designed 32-bit VME-based parallel processors (1) capable of executing algorithms written in FORTRAN. LEVEL3 currently rejects about 50% of the events.

  3. An FPGA-based trigger system for the search of μ+→e++γ decay in the MEG experiment

    NASA Astrophysics Data System (ADS)

    Galli, L.; Cei, F.; Galeotti, S.; Magazzù, C.; Morsani, F.; Nicolò, D.; Signorelli, G.; Grassi, M.

    2013-01-01

    The MEG experiment at PSI aims at investigating the μ+ → e+ + γ decay with sensitivity on the branching ratio (BR) improved by two orders of magnitude with respect to the previous experimental limit (BR(μ+ → e+ + γ) ≈ 10⁻¹³). The use of the world's most intense continuous muon beam (≈ 10⁸ μ/s) to search for such a rare event must be accompanied by an efficient trigger system, able to suppress the huge beam-related background to sustainable rates while preserving a signal efficiency close to unity. In order to accomplish both objectives, a digital approach was exploited by means of Field Programmable Gate Arrays (FPGA), working as real-time processors of detector signals to perform an accurate event reconstruction within a 450 ns latency. This approach eventually turned out to be flexible enough to allow us to record calibration events in parallel with the main data acquisition and to monitor the detector behavior throughout the data taking. We describe here the hardware implementation of the trigger and its main features: signal digitization, online waveform processing, and reconstruction algorithms. A detailed description is given of the system architecture, the features of the boards, and their use. The trigger algorithms will be described in detail in a dedicated article to be published afterwards.

  4. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to handle SA beamforming operations stage-wise, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design-space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10⁸ pixels per second for a frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets. PMID:25965680
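
    The core operation such kernels implement is receive delay-and-sum: pick, for each channel, the sample at the echo's time of flight and add them coherently. The sketch below is a hypothetical single-pixel NumPy version (real beamformers also apply transmit delays, interpolation, and apodization):

```python
import numpy as np

# Hypothetical single-pixel receive delay-and-sum for SA channel data.
def delay_and_sum(rf, element_x, pixel_x, pixel_z, fs, c=1540.0):
    """rf: (n_channels, n_samples); element_x: channel x-positions (m)."""
    d = np.sqrt((element_x - pixel_x) ** 2 + pixel_z**2)   # pixel->element
    idx = np.round(d / c * fs).astype(int)                 # delay in samples
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return float(rf[np.arange(rf.shape[0]), idx].sum())

# Toy check: two channels whose echo lands exactly at the computed delay
# index should sum coherently (40 MHz sampling, c = 1540 m/s).
rf = np.zeros((2, 1000))
rf[:, 400] = 1.0               # echo at sample 400 on both channels
val = delay_and_sum(rf, np.array([0.0, 0.0]), 0.0, 0.0154, 40e6)
```

    Looping this over a 256 × 256 pixel grid and all transmit events gives the per-frame workload quoted in the abstract.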

  5. BTeV level 1 vertex trigger

    SciTech Connect

    Michael H.L.S. Wang

    2001-11-05

    BTeV is a B-physics experiment that expects to begin collecting data at the C0 interaction region of the Fermilab Tevatron in the year 2006. Its primary goal is to achieve unprecedented levels of sensitivity in the study of CP violation, mixing, and rare decays in b and c quark systems. In order to realize this, it will employ a state-of-the-art first-level vertex trigger (Level 1) that will look at every beam crossing to identify detached secondary vertices that provide evidence for heavy quark decays. This talk will briefly describe the BTeV detector and trigger, focus on the software and hardware aspects of the Level 1 vertex trigger, and describe work currently being done in these areas.

  6. The Zeus calorimeter first level trigger

    SciTech Connect

    Smith, W.J.

    1989-04-01

    The design of the Zeus detector calorimeter first level trigger is presented. The Zeus detector is being built for operation at HERA, a new storage ring that will provide collisions between 820 GeV protons and 30 GeV electrons in 1990. The calorimeter is made of depleted uranium plates and plastic scintillator read out by wavelength shifter bars into 12,864 photomultiplier tubes. These signals are combined into 974 trigger towers with separate electromagnetic and hadronic sums. The calorimeter first level trigger is pipelined, with a decision provided 5 μs after each beam crossing, which occurs every 96 ns. The trigger determines the total energy, the total transverse energy, the missing energy, and the energy and number of isolated electrons and muons. It also provides information on the number and energy of clusters. The trigger rate must be held to 1 kHz against a rate of proton-beam gas interactions of approximately 500 kHz. The summed trigger tower pulse heights are digitized by flash ADCs. The digital values are linearized, stored, and used for sums and pattern tests.
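
The global sums described above (total, transverse, and missing energy from the tower EM/hadronic sums) can be sketched behaviorally. The hardware computes them with lookup tables and pipelined adder trees, so this Python function and its assumed tower tuple format are only an illustrative model:

```python
import math

def trigger_sums(towers):
    """Global sums over calorimeter trigger towers (illustrative sketch).

    towers: iterable of (eta, phi, e_em, e_had) tuples, energies in GeV.
    Returns (total energy, total transverse energy, missing transverse energy).
    """
    e_tot = et_tot = ex = ey = 0.0
    for eta, phi, e_em, e_had in towers:
        e = e_em + e_had                  # EM + hadronic tower sums
        et = e / math.cosh(eta)           # E_T = E * sin(theta)
        e_tot += e
        et_tot += et
        ex += et * math.cos(phi)          # transverse components for the
        ey += et * math.sin(phi)          # missing-energy vector sum
    return e_tot, et_tot, math.hypot(ex, ey)
```

Two back-to-back towers of equal transverse energy give near-zero missing energy, while a single unbalanced tower contributes its full E_T to the missing-energy sum.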

  7. The CMS High-Level Trigger

    SciTech Connect

    Covarelli, R.

    2009-12-17

    At the startup of the LHC, the CMS data acquisition is expected to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the 'High-Level Trigger' (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report, HLT performance is shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported, as well as the relaxation criteria foreseen for an LHC startup instantaneous luminosity.

  8. Performance of the CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Perrotta, Andrea

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved tracking and vertexing algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing energy reconstruction.

  9. THE STAR LEVEL-3 TRIGGER SYSTEM.

    SciTech Connect

    LANGE, J.S.; ADLER, C.; BERGER, J.; DEMELLO, M.; FLIERL, D.; ET AL

    1999-11-15

    The STAR level-3 trigger is a MYRINET-interconnected ALPHA processor farm performing online tracking of N_track ≥ 8000 particles (N_point ≤ 45 per track) with a design input rate of R = 100 Hz. A large-scale prototype system was tested in December 1999 with laser and cosmic particle events.

  10. Level-2 Calorimeter Trigger Upgrade at CDF

    SciTech Connect

    Flanagan, G.U.; /Purdue U.

    2007-04-01

    The CDF Run II Level-2 calorimeter trigger is implemented in hardware and is based on an algorithm used in Run I. This system ensured good performance at the low luminosities obtained early in Tevatron Run II. However, as the Tevatron instantaneous luminosity increases, the algorithmic limitations of the current system have become clear. In this paper, we present an upgrade of the Level-2 calorimeter trigger system at CDF. The upgrade is based on the Pulsar board, a general-purpose VME board developed at CDF and used for upgrading both the Level-2 tracking and the Level-2 global decision crate. This paper describes the design, the hardware and software implementation, and the advantages of this approach over the existing system.

  11. CMS High Level Trigger Timing Measurements

    NASA Astrophysics Data System (ADS)

    Richardson, Clint

    2015-12-01

    The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint on the amount of time available for the HLT to process events. Exceeding this limit impacts the experiment's ability to collect data efficiently. Hence, there is a critical need to characterize the performance of the HLT farm as well as the algorithms run prior to start up in order to ensure optimal data taking. Additional complications arise from the fact that the HLT farm consists of multiple generations of hardware and there can be subtleties in machine performance. We present our methods of measuring the timing performance of the CMS HLT, including the challenges of making such measurements. Results for the performance of various Intel Xeon architectures from 2009-2014 and different data taking scenarios are also presented.

  12. The ALICE high-level trigger read-out upgrade for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Engel, H.; Alt, T.; Breitner, T.; Gomez Ramirez, A.; Kollegger, T.; Krzewicki, M.; Lehrbach, J.; Rohr, D.; Kebschull, U.

    2016-01-01

    The ALICE experiment uses an optical read-out protocol called Detector Data Link (DDL) to connect the detectors with the computing clusters of Data Acquisition (DAQ) and High-Level Trigger (HLT). The interfaces of the clusters to these optical links are realized with FPGA-based PCI-Express boards. The High-Level Trigger is a computing cluster dedicated to the online reconstruction and compression of experimental data. It uses a combination of CPU, GPU and FPGA processing. For Run 2, the HLT has replaced all of its previous interface boards with the Common Read-Out Receiver Card (C-RORC) to enable read-out of detectors at high link rates and to extend the pre-processing capabilities of the cluster. The new hardware also comes with an increased link density that reduces the number of boards required. A modular firmware approach allows different processing and transport tasks to be built from the same source tree. A hardware pre-processing core performs cluster finding directly in the C-RORC firmware. State-of-the-art interfaces and memory allocation schemes enable a transparent integration of the C-RORC into the existing HLT software infrastructure. Common cluster management and monitoring frameworks are used to handle C-RORC metrics as well. The C-RORC has been in use in the clusters of ALICE DAQ and HLT since the start of LHC Run 2.

  13. NaNet-10: a 10GbE network interface card for the GPU-based low-level trigger of the NA62 RICH detector.

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Fiorini, M.; Frezza, O.; Lonardo, A.; Lamanna, G.; Lo Cicero, F.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Tosoratto, L.; Vicini, P.

    2016-03-01

    A GPU-based low-level (L0) trigger is currently integrated in the experimental setup of the RICH detector of the NA62 experiment to assess the feasibility of building more refined physics-related trigger primitives and thus improve the trigger discriminating power. To ensure the real-time operation of the system, a dedicated data transport mechanism has been implemented: an FPGA-based Network Interface Card (NaNet-10) receives data from the detectors and forwards them with low, predictable latency to the memory of the GPU performing the trigger algorithms. Results of the reconstruction of ring-shaped hit patterns are reported and discussed.

  14. Global Trigger Upgrade firmware architecture for the level-1 Trigger of the CMS experiment

    NASA Astrophysics Data System (ADS)

    Rahbaran, B.; Arnold, B.; Bergauer, H.; Wittmann, J.; Matsushita, T.

    2015-02-01

    The Global Trigger (GT) is the final step of the CMS Level-1 Trigger and implements the "menu" of triggers, a set of selection requirements applied to the final list of objects (such as muons, electrons or jets), to trigger the readout of the detector and serve as the basis for further calculations by the High Level Trigger. Operational experience in developing trigger menus during the first LHC run has shown that the requirements grew as the luminosity and pile-up increased. The new GT (μGT) is based on Xilinx Virtex-7 FPGAs, which combine flexibility and scalability with high robustness. Furthermore, a custom board which receives signals from legacy electronics and basic binary inputs from less complex trigger sources is presented. Additionally, this paper describes the architecture of a distributed testing framework and the Trigger Menu Editor.

  15. Upgrade of the ATLAS Level-1 Trigger with event topology information

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Artz, S.; Bauß, B.; Büscher, V.; Jakobi, K.; Kaluza, A.; Kahra, C.; Palka, M.; Reiß, A.; Schäffer, J.; Schäfer, U.; Schulte, A.; Simon, M.; Tapprogge, S.; Vogel, A.; Zinser, M.

    2015-12-01

    The Large Hadron Collider (LHC) in 2015 will collide proton beams with luminosity increased from 10^34 up to 3 × 10^34 cm^-2 s^-1. ATLAS is an LHC experiment designed to measure the decay properties of highly energetic particles produced in the proton collisions. The higher luminosity places stringent operational and physics requirements on the ATLAS trigger in order to reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz while, at the same time, selecting those events with valuable physics meaning. The Level-1 Trigger is the first rate-reducing step in the ATLAS trigger, with an output rate of 100 kHz and a decision latency of less than 2.5 μs. It is composed of the Calorimeter Trigger (L1Calo), the Muon Trigger (L1Muon) and the Central Trigger Processor (CTP). By 2015, there will be a new electronics element in the chain: the Topological Processor System (L1Topo). The L1Topo system consists of a single AdvancedTCA shelf equipped with three L1Topo processor blades. It will make it possible to use detailed information from L1Calo and L1Muon, processed in individual state-of-the-art FPGA processors. This allows the determination of angles between jets and/or leptons and the calculation of kinematic variables based on lists of selected/sorted objects. The system is designed to receive and process up to 6 Tb/s of real-time data. The paper reports the relevant upgrades of the Level-1 trigger with a focus on the topological processor design and commissioning.
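
The topological calculations mentioned (angles between jets and/or leptons computed from sorted object lists) can be sketched behaviorally. The object tuple format and the 2.7 rad threshold below are illustrative assumptions, not the actual L1Topo firmware menu:

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [0, pi]."""
    d = abs(phi1 - phi2) % (2.0 * math.pi)
    return d if d <= math.pi else 2.0 * math.pi - d

def jet_muon_back_to_back(jets, muons, min_dphi=2.7):
    """Example topological decision: leading jet vs. leading muon.

    jets, muons: lists of (et, eta, phi) tuples, assumed already sorted
    by descending ET, as the sort trees in the L1Topo FPGAs would deliver.
    The 2.7 rad cut is an arbitrary illustrative threshold.
    """
    if not jets or not muons:
        return False
    return delta_phi(jets[0][2], muons[0][2]) >= min_dphi
```

In the FPGA, the angle comparison is done with fixed-point coordinates and lookup tables within the Level-1 latency budget; the floating-point version here only models the decision logic.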

  16. 76 FR 31295 - WTO Agricultural Safeguard Trigger Levels

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... Secretary of Agriculture in Presidential Proclamation No. 6763, dated December 23, 1994, 60 FR 1005 (Jan. 4... Trigger Levels, published in the Federal Register at 60 FR 427 (Jan. 4, 1995). Notice: As provided in... Round Agricultural Safeguard Trigger Levels published in the Federal Register, at 60 FR 427 (Jan....

  17. High Level Trigger Configuration and Handling of Trigger Tables in the CMS Filter Farm

    SciTech Connect

    Bauer, G; Behrens, U; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Ciganek, M; Cittolin, S; O'dell, V; Erhan, S; Gigi, D; Glege, F; Gomez-Reino, R; Gulmini, M; Gutleber, J; Hollar, J; Lange, D; Kim, J C; Klute, M; Lipeles, E; Perez, J L; Maron, G; Meijers, F; Meschi, E; Moser, R; Mlot, E G; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Racz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2009-11-22

    The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms to analyze the complete event information and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.

  18. The upgrade of the ATLAS first-level calorimeter trigger

    NASA Astrophysics Data System (ADS)

    Yamamoto, Shimpei

    2016-07-01

    The first-level calorimeter trigger (L1Calo) had operated successfully through the first data taking phase of the ATLAS experiment at the CERN Large Hadron Collider. Towards forthcoming LHC runs, a series of upgrades is planned for L1Calo to face new challenges posed by the upcoming increases of the beam energy and the luminosity. This paper reviews the ATLAS L1Calo trigger upgrade project that introduces new architectures for the liquid-argon calorimeter trigger readout and the L1Calo trigger processing system.

  19. CROC FPGA Firmware

    Energy Science and Technology Software Center (ESTSC)

    2009-12-01

    The CROC FPGA firmware code controls the operation of the CROC hardware, primarily determining the location of neutron events and discriminating against false triggers by examining the output of multiple analog comparators. A number of statistical algorithms are encoded within the firmware to achieve reliable operation. Other communication and control functions are also part of the firmware.

  20. HERA-B higher-level triggers: architecture and software

    NASA Astrophysics Data System (ADS)

    Gellrich, Andreas; Medinnis, Mike

    1998-02-01

    HERA-B will be studying CP-violation in the B-system in a high-rate hadronic environment. To accomplish this goal, HERA-B needs a sophisticated data acquisition and trigger system. Except for the first level, all trigger levels are implemented as PC-farms, running the Unix-like operating system, Linux, thus blurring the sharp border between online and offline application software. The hardware architecture and software environments are discussed.

  1. A fast, first level, Rφ, hardware trigger for the D0 central fiber tracker using field programmable gate arrays

    SciTech Connect

    Abbott, B.; Angstadt, B.; Borcherding, F.

    1996-12-31

    We have developed an Rφ trigger using the eight doublet layers of axial fibers in the new Central Fiber Tracker for the D0 Upgrade Detector at Fermilab. This trigger must be formed in less than 500 ns and distributed to other parts of the detector for a level 1 trigger decision. The high speed is achieved by using massively parallel AND/OR logic instantiated in state-of-the-art field programmable gate arrays (FPGAs). The programmability of the FPGAs allows corrections to the track roads for the as-built detector and allows the transverse momentum threshold to be changed dynamically. To reduce the number of fake tracks at high luminosity, the narrowest possible roads must be used, which pushes the total number of roads into the thousands. Monte Carlo simulations of the track trigger have been run to develop the trigger algorithms, and a vendor-supplied simulator has been used to develop and test the FPGA programming.

  2. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    NASA Astrophysics Data System (ADS)

    Glatzer, Julian

    2015-12-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity by a factor of two with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions due to the factor of two increase in the number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally, it allows concurrent running of up to three different subdetector combinations, which is particularly useful for commissioning, calibration and test runs. An overview of the operational software framework of the L1CT system is given, with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition system. Trigger and dead-time rates are monitored coherently at all stages of processing and are logged by the online computing system for physics analysis, data quality assurance and operational debugging. In addition, the synchronisation of trigger inputs is monitored using bunch-by-bunch trigger information. Several software tools allow for efficient display of the relevant information in the control room in a way useful for shifters and experts. The design of the framework aims at reliability, flexibility, and robustness of the system and takes into account the operational experience gained during Run 1. The Level-1 Central Trigger was successfully operated with high efficiency during the cosmic-ray, beam-splash and first Run 2 data-taking with the full ATLAS detector.

  3. The Level-0 calorimetric trigger of the NA62 experiment

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Barbanera, M.; Bizzarri, M.; Bonaiuto, V.; Ceccucci, A.; Checcucci, B.; De Simone, N.; Fantechi, R.; Federici, L.; Fucci, A.; Lupi, M.; Paoluzzi, G.; Papi, A.; Piccini, M.; Ryjov, V.; Salamon, A.; Salina, G.; Sargeni, F.; Venditti, S.

    2016-02-01

    The NA62 experiment at the CERN SPS aims at measuring the branching ratio of the very rare kaon decay K⁺ → π⁺ ν ν̄ (expected to be about 10^-10) with 10% background. Since a high-intensity kaon beam is required to collect enough statistics, the Level-0 trigger plays a fundamental role in both background rejection and particle identification. The calorimetric trigger collects data from various calorimeters and is able to identify clusters of energy deposit and determine their position, fine time and energy. This paper describes the complete hardware commissioning and the setup of the trigger for the 2015 physics data taking.

  4. Analyzing trigger levels for future test ban treaties

    SciTech Connect

    Miller, A.C. )

    1989-11-01

    Future test ban treaties may include the idea of triggers: test yields above which the country conducting a test must give the other side prior notice. The treaty would then allow the other side to inspect the test site or install additional yield measurement devices (e.g., CORRTEX). Triggers should help both sides verify treaty compliance when they conduct tests near the treaty threshold, a yield above which both sides are prohibited from testing. By using more accurate measurement systems at yields near the threshold, the country verifying compliance can decrease its uncertainty about the test yield. We can model the effect of triggers at various levels, similar to the levels of analysis considered in the compliance evaluation report ("Decision Framework for Evaluating Compliance with the Threshold Test Ban Treaty" by B.R. Judd, L.W. Younker, W.J. Hannon, R.S. Strait, P.C. Meagher, and A. Sicherman, UCRL-53830, August 1988). At lower levels, the analysis is simple but does not include all the issues relevant to choosing a trigger level. Higher levels of the analysis incorporate these issues, at the cost of additional complexity and required data. This memorandum considers several models of trigger levels to show what we can learn by incorporating additional issues into the analysis. 60 figs.

  5. The ATLAS Data Acquisition and High Level Trigger system

    NASA Astrophysics Data System (ADS)

    The ATLAS TDAQ Collaboration

    2016-06-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  6. The second level TOF trigger for the Obelix experiment

    SciTech Connect

    Lai, A.; Musa, L.; Serci, S. )

    1994-08-01

    The authors describe a CAMAC module, named TRIGGERK, performing the function of second level trigger in the TOF apparatus of the PS201 experiment at CERN. The module is realized on a single board and is implemented using three Xilinx FPGAs. The internal architecture of the module, as well as its programming capabilities and mode of usage, is described.

  7. A massively parallel track-finding system for the LEVEL 2 trigger in the CLAS detector at CEBAF

    SciTech Connect

    Doughty, D.C. Jr.; Collins, P.; Lemon, S. ); Bonneau, P. )

    1994-02-01

    The track segment finding subsystem of the LEVEL 2 trigger in the CLAS detector has been designed and prototyped. Track segments will be found in the 35,076 wires of the drift chambers using a massively parallel array of 768 Xilinx XC-4005 FPGAs. These FPGAs are located on daughter cards attached to the front-end boards distributed around the detector. Each chip is responsible for finding tracks passing through a 4 × 6 slice of an axial superlayer, and reports two segment-found bits, one for each pair of cells. The algorithm finds segments even when one or two layers or cells along the track are missing (this number is programmable), while being highly resistant to false segments arising from noise hits. Adjacent chips share data to find tracks crossing cell and board boundaries. For maximum speed, fully combinatorial logic is used inside each chip, with the result that all segments in the detector are found within 150 ns. Segment collection boards gather track segments from each axial superlayer and pass them via a high-speed link to the segment linking subsystem in an additional 400 ns for typical events. The Xilinx chips are RAM-based and therefore reprogrammable, allowing for future upgrades and algorithm enhancements.
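
The per-road majority logic described (segments accepted even with a programmable number of missing layers or cells) can be modeled as follows. The bitmask layout and function names are illustrative assumptions; the real chips evaluate all roads simultaneously in combinatorial logic rather than in a loop:

```python
def segment_found(road_hits, max_missing=2):
    """Majority test for one road: segment if few enough layers are empty.

    road_hits   : per-layer hit flags along a candidate road
    max_missing : programmable number of layers allowed to lack a hit,
                  mirroring the programmable tolerance in the abstract.
    """
    hits = sum(1 for h in road_hits if h)
    return hits >= len(road_hits) - max_missing

def find_segments(layer_masks, roads, max_missing=2):
    """Check every road against per-layer wire-hit bitmasks.

    layer_masks : per-layer ints, bit w set if wire w in that layer fired
    roads       : each road lists one wire index per layer
    """
    results = []
    for road in roads:
        flags = [(layer_masks[layer] >> wire) & 1
                 for layer, wire in enumerate(road)]
        results.append(segment_found(flags, max_missing))
    return results
```

In the FPGA, each road's test is a small AND/OR (majority) network fed directly by the wire hit lines, so all roads in a chip resolve within one combinatorial delay.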

  8. FPGA Design Practices for I&C in Nuclear Power Plants

    SciTech Connect

    Bobrek, Miljko; Wood, Richard Thomas; Bouldin, Donald; Waterman, Michael E

    2009-01-01

    Safe FPGA design practices can be classified into three major groups, covering board-level and FPGA logic-level design practices, FPGA design entry methods, and FPGA design methodology. This paper presents the most common hardware and software design practices that are acceptable in safety-critical FPGA systems. It also proposes an FPGA-specific design life cycle including design entry, FPGA synthesis, place and route, and validation and verification.

  9. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.

  10. Implementation And Performance of the ATLAS Second Level Jet Trigger

    SciTech Connect

    Conde Muino, Patricia; Aracena, I.; Brelier, B.; Cranmer, K.; Delsart, P.A.; Dufour, M.A.; Eckweiler, S.; Ferland, J.; Idarraga, J.; Johns, K.; LeCompte, T.; Potter, C.; Robertson, S.; Santamarina Rios, C.; Segura, E.; Silverstein, D.; Vachon, B.; /McGill U.

    2011-11-09

    ATLAS is one of the four major LHC experiments, designed to cover a wide range of physics topics. In order to cope with a rate of 40 MHz and 25 interactions per bunch crossing, the ATLAS trigger system is divided into three levels. The jet selection starts at the first level with dedicated processors that search for high-E_T hadronic energy depositions. At LVL2, the jet signatures are verified with the execution of a dedicated, fast jet reconstruction algorithm, followed by a calibration algorithm. Three possible granularities have been proposed and are being evaluated: cell-based (standard), energy sums calculated at each Front-End Board, and the use of the LVL1 Trigger Towers. In this presentation, the design and implementation of the ATLAS jet trigger is discussed in detail, emphasizing the major difficulties of each selection step. The performance of the jet algorithm, including timing, efficiencies and rates, is also shown, with detailed comparisons of the different unpacking modes.

  11. The LHCb Data Acquisition and High Level Trigger Processing Architecture

    NASA Astrophysics Data System (ADS)

    Frank, M.; Gaspar, C.; Jost, B.; Neufeld, N.

    2015-12-01

    The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1750 physical nodes, each equipped with up to 4 TB of local storage space. This work describes the LHCb online system with an emphasis on the developments implemented during the current long shutdown (LS1). We elaborate on the architecture that trebles the available CPU power of the HLT farm and on the technicalities of determining and verifying the precise calibration and alignment constants which are fed to the HLT event selection procedure. We describe how the constants are fed into a two-stage HLT event selection facility that makes extensive use of the local disk buffering capabilities on the worker nodes. With the installed disk buffers, the CPU resources can be used during periods of up to ten days without beams. Such periods accounted for more than 70% of the total time in the past.

  12. Using the CMS High Level Trigger as a Cloud Resource

    NASA Astrophysics Data System (ADS)

    Colling, David; Huffman, Adam; McCrae, Alison; Lahiff, Andrew; Grandi, Claudio; Cinquilli, Mattia; Gowdy, Stephen; Coarasa, Jose Antonio; Tiradani, Anthony; Ozga, Wojciech; Chaze, Olivier; Sgaravatto, Massimo; Bauer, Daniela

    2014-06-01

    The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

  13. Run 2 upgrades to the CMS Level-1 calorimeter trigger

    NASA Astrophysics Data System (ADS)

    Kreis, B.; Berryhill, J.; Cavanaugh, R.; Mishra, K.; Rivera, R.; Uplegger, L.; Apanasevich, L.; Zhang, J.; Marrouche, J.; Wardle, N.; Aggleton, R.; Ball, F.; Brooke, J.; Newbold, D.; Paramesvaran, S.; Smith, D.; Baber, M.; Bundock, A.; Citron, M.; Elwood, A.; Hall, G.; Iles, G.; Laner, C.; Penning, B.; Rose, A.; Tapper, A.; Foudas, C.; Beaudette, F.; Cadamuro, L.; Mastrolorenzo, L.; Romanteau, T.; Sauvan, J. B.; Strebler, T.; Zabi, A.; Barbieri, R.; Cali, I. A.; Innocenti, G. M.; Lee, Y.-J.; Roland, C.; Wyslouch, B.; Guilbaud, M.; Li, W.; Northup, M.; Tran, B.; Durkin, T.; Harder, K.; Harper, S.; Shepherd-Themistocleous, C.; Thea, A.; Williams, T.; Cepeda, M.; Dasu, S.; Dodd, L.; Forbes, R.; Gorski, T.; Klabbers, P.; Levine, A.; Ojalvo, I.; Ruggles, T.; Smith, N.; Smith, W.; Svetek, A.; Tikalsky, J.; Vicente, M.

    2016-01-01

    The CMS Level-1 calorimeter trigger is being upgraded in two stages to maintain performance as the LHC increases pile-up and instantaneous luminosity in its second run. In the first stage, improved algorithms including event-by-event pile-up corrections are used. New algorithms for heavy ion running have also been developed. In the second stage, higher granularity inputs and a time-multiplexed approach allow for improved position and energy resolution. Data processing in both stages of the upgrade is performed with new, Xilinx Virtex-7 based AMC cards.

  14. Can increased atmospheric CO2 levels trigger a runaway greenhouse?

    PubMed

    Ramirez, Ramses M; Kopparapu, Ravi Kumar; Lindner, Valerie; Kasting, James F

    2014-08-01

    Recent one-dimensional (globally averaged) climate model calculations by Goldblatt et al. (2013) suggest that increased atmospheric CO2 could conceivably trigger a runaway greenhouse on present Earth if CO2 concentrations were approximately 100 times higher than they are today. The new prediction runs contrary to previous calculations by Kasting and Ackerman (1986), which indicated that CO2 increases could not trigger a runaway, even at Venus-like CO2 concentrations. Goldblatt et al. argued that this different behavior is a consequence of updated absorption coefficients for H2O that make a runaway more likely. Here, we use a 1-D climate model with similar, up-to-date absorption coefficients, but employ a different methodology, to show that the older result is probably still valid, although our model nearly runs away at ∼12 times the preindustrial atmospheric level of CO2 when we use the most alarmist assumptions possible. However, we argue that Earth's real climate is probably stable given more realistic assumptions, although 3-D climate models will be required to verify this result. Potential CO2 increases from fossil fuel burning are somewhat smaller than this, 10-fold or less, but such increases could still cause sufficient warming to make much of the planet uninhabitable by humans. PMID:25061956

  15. Method for modifying trigger level for adsorber regeneration

    DOEpatents

    Ruth, Michael J.; Cunningham, Michael J.

    2010-05-25

    A method for modifying a NOx adsorber regeneration triggering variable. Engine operating conditions are monitored until the regeneration triggering variable is met. The adsorber is regenerated and the adsorption efficiency of the adsorber is subsequently determined. The regeneration triggering variable is modified to correspond with the decline in adsorber efficiency. The adsorber efficiency may be determined using an empirically predetermined set of values or by using a pair of oxygen sensors to determine the oxygen response delay across the sensors.
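
    The feedback loop described in the abstract can be sketched as follows; the linear update rule, gain, and target efficiency are hypothetical illustrations, not values from the patent:

```python
def update_trigger_threshold(threshold, efficiency, target_eff=0.9, gain=0.5):
    """Hypothetical update rule: as measured adsorber efficiency falls below
    target, lower the accumulated-NOx threshold that triggers regeneration,
    so regeneration happens sooner for a degraded adsorber."""
    return threshold * (1.0 - gain * max(0.0, target_eff - efficiency))

def regeneration_cycle(threshold, nox_accumulated, efficiency):
    """One monitoring step: regenerate when the trigger variable is met,
    then modify the trigger variable from the measured efficiency."""
    if nox_accumulated >= threshold:
        nox_accumulated = 0.0  # adsorber regenerated
        threshold = update_trigger_threshold(threshold, efficiency)
    return threshold, nox_accumulated
```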

  16. Electrophysiological characteristics according to activity level of myofascial trigger points

    PubMed Central

    Yu, Seong Hun; Kim, Hyun Jin

    2015-01-01

    [Purpose] This study compared the differences in electrophysiological characteristics of normal muscles versus muscles with latent or active myofascial trigger points, and identified the neuromuscular physiological characteristics of muscles with active myofascial trigger points, thereby providing a quantitative evaluation of myofascial pain syndrome and clinical foundational data for its diagnosis. [Subjects] Ninety adults in their 20s participated in this study. Subjects were equally divided into three groups: the active myofascial trigger point group, the latent myofascial trigger point group, and the control group. [Methods] Maximum voluntary isometric contraction (MVIC), endurance, median frequency (MDF), and muscle fatigue index were measured in all subjects. [Results] No significant differences in MVIC or endurance were revealed among the three groups. However, the active trigger point group had significantly different MDF and muscle fatigue index compared with the control group. [Conclusion] Given that muscles with active myofascial trigger points had an increased MDF and suffered muscle fatigue more easily, increased recruitment of motor unit action potential of type II fibers was evident. Therefore, electrophysiological analysis of these myofascial trigger points can be applied to evaluate the effect of physical therapy and provide a quantitative diagnosis of myofascial pain syndrome. PMID:26504306
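
    The median frequency (MDF) used as an endpoint above is the frequency that splits the EMG power spectrum into two halves of equal power; a minimal sketch on a sampled spectrum (not the study's analysis code):

```python
def median_frequency(freqs, power):
    """MDF: lowest frequency below which at least half of the total
    spectral power lies. freqs and power are paired samples of an
    already estimated EMG power spectrum."""
    total = sum(power)
    cum = 0.0
    for f, p in zip(freqs, power):
        cum += p
        if cum >= total / 2.0:
            return f
    return freqs[-1]

# Flat spectrum between 10 and 100 Hz: the MDF sits near the middle.
mdf = median_frequency(list(range(10, 101)), [1.0] * 91)
```

    A shift of spectral power toward higher frequencies raises the MDF, which is the direction of the effect reported for the active trigger point group.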

  18. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammals, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
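
    As an illustration of the low-level primitives involved, here is a raw image moment in its textbook form; the paper's orthogonal variant moments are a richer family, so treat this only as a sketch of the idea:

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq = sum_x sum_y x^p * y^q * I(x, y) for a
    grayscale image given as a list of rows."""
    return sum((x ** p) * (y ** q) * val
               for y, row in enumerate(img)
               for x, val in enumerate(row))

# m00 is total intensity; m10/m00 and m01/m00 give the centroid, a
# mid-level cue that can be tracked across frames alongside optical flow.
img = [[1.0, 1.0], [1.0, 1.0]]
centroid = (raw_moment(img, 1, 0) / raw_moment(img, 0, 0),
            raw_moment(img, 0, 1) / raw_moment(img, 0, 0))
```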

  19. The upgrade of the CMS Global Trigger

    NASA Astrophysics Data System (ADS)

    Wittmann, J.; Arnold, B.; Bergauer, H.; Jeitler, M.; Matsushita, T.; Rabady, D.; Rahbaran, B.; Wulz, C.-E.

    2016-02-01

    The Global Trigger is the final step of the CMS Level-1 Trigger. Previously implemented in VME, it has been redesigned and completely rebuilt in MicroTCA technology, using the Virtex-7 FPGA chip family. It will allow the implementation of trigger algorithms close to the final physics selection. The new system is presented, together with performance tests undertaken in parallel operation with the legacy system during the initial months of Run II of the LHC at a beam energy of 13 TeV.

  20. Tracking for the ATLAS Level 1 Trigger for the High Luminosity LHC

    NASA Astrophysics Data System (ADS)

    Sutton, M. R.

    2014-06-01

    At the HL-LHC, the increased luminosity will result in up to 200 pile-up interactions per bunch crossing. One of the greatest challenges for ATLAS will be to keep the Level 1 Trigger pT thresholds low enough to maintain high trigger efficiency for all interesting physics. The proposed two-stage design of the ATLAS Level 1 Trigger, and the incorporation of a Level-1 track trigger is described. The requirements and implications for the tracker readout architecture, and estimates of readout latency based on a detailed discrete event simulation of the data flow in the tracker front-end electronics are also presented.

  1. The ATLAS Level-1 Muon Topological Trigger Information for Run 2 of the LHC

    NASA Astrophysics Data System (ADS)

    Artz, S.; Bauss, B.; Boterenbrood, H.; Buescher, V.; Cerqueira, A. S.; Degele, R.; Dhaliwal, S.; Ellis, N.; Farthouat, P.; Galster, G.; Ghibaudi, M.; Glatzer, J.; Haas, S.; Igonkina, O.; Jakobi, K.; Jansweijer, P.; Kahra, C.; Kaluza, A.; Kaneda, M.; Marzin, A.; Ohm, C.; Silva Oliveira, M. V.; Pauly, T.; Poettgen, R.; Reiss, A.; Schaefer, U.; Schaeffer, J.; Schipper, J. D.; Schmieden, K.; Schreuder, F.; Simioni, E.; Simon, M.; Spiwoks, R.; Stelzer, J.; Tapprogge, S.; Vermeulen, J.; Vogel, A.; Zinser, M.

    2015-02-01

    For the next run of the LHC, the ATLAS Level-1 trigger system will include topological information on trigger objects from the calorimeters and muon detectors. In order to supply coarse-grained muon topological information, the existing MUCTPI (Muon-to-Central-Trigger-Processor Interface) system has been upgraded. The MIOCT (Muon Octant) module firmware has then been modified to extract, encode and send topological information through the existing MUCTPI electrical trigger outputs. The topological information from the muon detectors will be sent to the Level-1 Topological Trigger Processor (L1Topo) through the MUCTPI-to-Level-1-Topological-Processor (MuCTPiToTopo) interface. Examples of physics searches involving muons are: searches for lepton flavour violation, Bs physics, Beyond the Standard Model (BSM) physics and others. This paper describes the modifications to the MUCTPI and its integration with the full trigger chain.

  2. Operational experience with the ALICE High Level Trigger

    NASA Astrophysics Data System (ADS)

    Szostak, Artur

    2012-12-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  3. Design exploration and verification platform, based on high-level modeling and FPGA prototyping, for fast and flexible digital communication in physics experiments

    NASA Astrophysics Data System (ADS)

    Magazzù, G.; Borgese, G.; Costantino, N.; Fanucci, L.; Incandela, J.; Saponara, S.

    2013-02-01

    In many research fields such as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The possibility to have a smart and tested top-down design flow for the design of a new protocol for control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied, as an example case study, to a new communication protocol called FF-LYNX. After the description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the set-up of figures of merit to drive the design-space exploration; the use of the ISE for early analysis of the achievable performance when adopting the new communication protocol and its interfaces for a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on an FPGA-based emulator for functional verification; and finally the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results are shown to underline the usefulness of this design and verification approach, which can be applied to any new digital protocol development for smart detectors in physics experiments.

  4. A pattern recognition scheme for large curvature circular tracks and an FPGA implementation using hash sorter

    SciTech Connect

    Wu, Jin-Yuan; Shi, Z.; /Fermilab

    2004-12-01

    The strong magnetic fields in today's collider detectors make track recognition more difficult due to large track curvatures. In this document, we present a global track-recognition scheme based on track angle measurements for circular tracks passing through the collision point. It uses no approximations in the track equation and is therefore suitable for both large- and small-curvature tracks. The scheme can be implemented either in hardware for a lower-level trigger or in software for a higher-level trigger or offline analysis code. We discuss an example of an FPGA implementation using a ''hash sorter''.
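
    The exact geometry behind such a scheme: for a circle of radius R passing through the collision point, a hit at transverse radius r lies at azimuth phi0 + arcsin(r/(2R)), with no small-curvature expansion. A sketch (our own illustration, not the paper's firmware):

```python
import math

def hit_azimuth(r, phi0, R):
    """Exact azimuth of a hit at transverse radius r (r <= 2R) on a circular
    track of radius R through the collision point; phi0 is the track
    direction at the origin (one bending sign assumed)."""
    return phi0 + math.asin(r / (2.0 * R))

def track_angle_from_hit(r, phi_hit, R):
    """Invert the exact relation to recover phi0 -- still valid at large
    curvature, where a small-angle approximation would fail."""
    return phi_hit - math.asin(r / (2.0 * R))

# Large-curvature demo: outer hit radius close to the track diameter.
phi0, R = 0.3, 60.0
recovered = [track_angle_from_hit(r, hit_azimuth(r, phi0, R), R)
             for r in (20.0, 60.0, 110.0)]
```

    A hardware pattern finder would histogram these recovered angles across hits and look for coincidences, which is the kind of binning task a hash sorter accelerates.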

  5. High level triggers for explosive mafic volcanism: Albano Maar, Italy

    NASA Astrophysics Data System (ADS)

    Cross, J. K.; Tomlinson, E. L.; Giordano, G.; Smith, V. C.; De Benedetti, A. A.; Roberge, J.; Manning, C. J.; Wulf, S.; Menzies, M. A.

    2014-03-01

    Colli Albani is a quiescent caldera complex located within the Roman Magmatic Province (RMP), Italy. The recent Via dei Laghi phreatomagmatic eruptions led to the formation of nested maars. Albano Maar is the largest and has erupted seven times between ca 69-33 ka. The highly explosive nature of the Albano Maar eruptions is at odds with the predominant, relatively mafic (SiO2 = 48-52 wt.%) foiditic (K2O = 9 wt.%) composition of the magma. The deposits have been previously interpreted as phreatomagmatic; however, they contain large amounts (up to 30 vol.%) of deep-seated xenoliths, skarns and all pre-volcanic subsurface units. All of the xenoliths have been excavated from depths of up to 6 km, rather than being limited to the depth at which magma and water interaction is likely to have occurred, suggesting an alternative trigger for eruption. High-precision geochemical glass and mineral data from fresh juvenile (magmatic) clasts in the small-volume explosive deposits indicate that the magmas have evolved along one of two evolutionary paths, towards foidite or phonolite. The foiditic melts record ca. 50% mixing between the most primitive magma and a Ca-rich melt at a late stage prior to eruption. A major result of our study is the finding that the generation of Ca-rich melts via assimilation of limestone may provide storage for significant amounts of CO2 that can be released during a mixing event with silicate magma. Differences in melt evolution are inferred to have been controlled by variations in storage conditions: residence time and magma volume.

  6. The Calorimeter Trigger Processor Card: the next generation of high speed algorithmic data processing at CMS

    NASA Astrophysics Data System (ADS)

    Svetek, A.; Blake, M.; Cepeda Hermida, M.; Dasu, S.; Dodd, L.; Fobes, R.; Gomber, B.; Gorski, T.; Guo, Z.; Klabbers, P.; Levine, A.; Ojalvo, I.; Ruggles, T.; Smith, N.; Smith, W. H.; Tikalsky, J.; Vicente, M.; Woods, N.

    2016-02-01

    The CMS Level-1 upgraded calorimeter trigger requires a powerful, flexible and compact processing card. The Calorimeter Trigger Processor Card (CTP7) uses the Virtex-7 FPGA as its primary data processor and is the first FPGA based processing card in CMS to employ the ZYNQ System-on-Chip (SoC) running embedded Linux to provide TCP/IP communication and board support functions. The CTP7 was built from the ground up to support AXI infrastructure to provide flexible and modular designs with minimal time from project conception to final implementation.

  7. Measuring Science Teachers' Stress Level Triggered by Multiple Stressful Conditions

    ERIC Educational Resources Information Center

    Halim, Lilia; Samsudin, Mohd Ali; Meerah, T. Subahan M.; Osman, Kamisah

    2006-01-01

    The complexity of science teaching requires science teachers to encounter a range of tasks. Some tasks are perceived as stressful while others are not. This study aims to investigate the extent to which different teaching situations lead to different stress levels. It also aims to identify the easiest and most difficult conditions to be regarded…

  8. A 250 MHz Level 1 Trigger and Distribution System for the GlueX experiment

    SciTech Connect

    Abbott, David J.; Cuevas, R. Christopher; Doughty, David Charles; Jastrzembski, Edward A.; Barbosa, Fernando J.; Raydo, Benjamin J.; Dong, Hai T.; Wilson, Jeffrey S.; Gupta, Abishek; Taylor, Mark; Somov, S.

    2009-11-01

    The GlueX detector now under construction at Jefferson Lab will search for exotic mesons through photoproduction (10^8 tagged photons per second) on a liquid hydrogen target. A Level 1 hardware trigger design is being developed to reduce total electromagnetic (>200 MHz) and hadronic (>350 kHz) rates to less than 200 kHz. This trigger is deadtime-free and operates on a global synchronized 250 MHz clock. The core of the trigger design is based on a custom pipelined flash ADC board that uses a VXS backplane to collect samples from all ADCs in a VME crate. A custom switch-slot board called a Crate Trigger Processor (CTP) processes this data and passes the crate-level data via a multi-lane fiber-optic link to the Global Trigger Processing Crate (also VXS). Within this crate, detector sub-system processor (SSP) boards accept all individual crate links. The subsystem data are processed and finally passed to global trigger boards (GTP) where the final L1 decision is made. We present details of the trigger design and report some performance results on current prototype systems.
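
    A deadtime-free pipelined trigger produces one decision per clock tick; the global energy-sum stage can be sketched as follows (a toy model with hypothetical sample values, not the GlueX firmware):

```python
def l1_energy_trigger(samples_per_channel, threshold):
    """At every 250 MHz clock tick, sum the flash-ADC samples of all
    channels and compare with a threshold. A decision is emitted for
    every tick, so the trigger never incurs dead time."""
    n_ticks = len(samples_per_channel[0])
    return [sum(ch[t] for ch in samples_per_channel) >= threshold
            for t in range(n_ticks)]

# Two channels, three ticks: only the middle tick exceeds threshold.
decisions = l1_energy_trigger([[1, 5, 2], [2, 6, 1]], 8)
```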

  9. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is Verification Acceleration Possible? - Increasing the visibility of the internal nodes of the FPGA results in much faster debug time. - Forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? - No, this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  10. Asynchronous FPGA risks

    NASA Technical Reports Server (NTRS)

    Erickson, K.

    2000-01-01

    Worst-case timing analysis of a synchronous design implemented with a field-programmable gate array (FPGA) is easy to perform using available FPGA design tools. However, it may be difficult or impossible to verify that worst-case timing requirements are met for a complex asynchronous logic design.

  11. Hierarchical trigger of the ALICE calorimeters

    NASA Astrophysics Data System (ADS)

    Muller, Hans; Awes, Terry C.; Novitzky, Norbert; Kral, Jiri; Rak, Jan; Schambach, Jo; Wang, Yaping; Wang, Dong; Zhou, Daicui

    2010-05-01

    The trigger of the ALICE electromagnetic calorimeters is implemented in 2 hierarchically connected layers of electronics. In the lower layer, level-0 algorithms search for shower energy above threshold in locally confined Trigger Region Units (TRU). The top layer is implemented as a single, global trigger unit that receives the trigger data from all TRUs as input to the level-1 algorithm. This architecture was first developed for the PHOS high-pT photon trigger before it was adopted by EMCal also for the jet trigger. TRU units digitize up to 112 analogue input signals from the Front End Electronics (FEE) and concentrate their digital stream in a single FPGA. A charge- and time-summing algorithm is combined with a peak finder that suppresses spurious noise and is precise to single LHC bunches. With a peak-to-peak noise level of 150 MeV, the linear dynamic range above threshold spans from MIP energies at 215 MeV up to 50 GeV. Local level-0 decisions take less than 600 ns after LHC collisions, upon which all TRUs transfer their level-0 trigger data to the upstream global trigger module, which searches within the remaining level-1 latency for high-pT gamma showers (PHOS) and/or jet cone areas (EMCal).
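
    The level-0 combination of a running charge sum with a peak finder can be sketched as below; the window length and threshold are illustrative, not the TRU's actual parameters:

```python
def sliding_sums(samples, window=4):
    """Running charge sum over a sliding time window of ADC samples."""
    return [sum(samples[i:i + window])
            for i in range(len(samples) - window + 1)]

def level0_peaks(samples, threshold, window=4):
    """Fire only where the windowed sum exceeds threshold AND is a local
    maximum; the peak condition pins the decision to a single bunch
    crossing and suppresses spurious noise triggers."""
    s = sliding_sums(samples, window)
    return [i for i in range(1, len(s) - 1)
            if s[i] >= threshold and s[i] >= s[i - 1] and s[i] > s[i + 1]]

# A single pulse on a zero baseline fires exactly once, at its peak.
fired = level0_peaks([0, 0, 1, 3, 5, 3, 1, 0, 0, 0], threshold=10)
```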

  12. L1Track: A fast Level 1 track trigger for the ATLAS high luminosity upgrade

    NASA Astrophysics Data System (ADS)

    Cerri, Alessandro

    2016-07-01

    With the planned high-luminosity upgrade of the LHC (HL-LHC), the ATLAS detector will see its collision rate increase by approximately a factor of 5 with respect to the current LHC operation. The earliest hardware-based ATLAS trigger stage ("Level 1") will have to provide a higher rejection factor in a more difficult environment: a new improved Level 1 trigger architecture is under study, which includes the possibility of extracting with low latency and high accuracy tracking information in time for the decision taking process. In this context, the feasibility of potential approaches aimed at providing low-latency high-quality tracking at Level 1 is discussed.

  13. Level 3 trigger algorithm and Hardware Platform for the HADES experiment

    NASA Astrophysics Data System (ADS)

    Kirschner, Daniel Georg; Agakishiev, Geydar; Liu, Ming; Perez, Tiago; Kühn, Wolfgang; Pechenov, Vladimir; Spataro, Stefano

    2009-01-01

    A next-generation real-time trigger method to improve the enrichment of lepton events in the High Acceptance DiElectron Spectrometer (HADES) trigger system has been developed. In addition, a flexible Hardware Platform (Gigabit Ethernet-Multi-Node, GE-MN) was developed to implement and test the trigger method. The trigger method correlates the ring information of the HADES Ring Imaging Cherenkov (RICH) detector with the fired wires (drift cells) of the HADES Mini Drift Chamber (MDC) detector. It is demonstrated that this Level 3 trigger method can enhance the number of events which contain leptons by a factor of up to 50 at efficiencies above 80%. The performance of the correlation method in terms of the events analyzed per second has been studied with the GE-MN prototype in a lab test setup by streaming previously recorded experiment data to the module. This paper is a compilation from Kirschner [Level 3 trigger algorithm and Hardware Platform for the HADES experiment, Ph.D. Thesis, II. Physikalisches Institut der Justus-Liebig-Universität Gießen, urn:nbn:de:hebis:26-opus-50784, October 2007].

  14. Data flow analysis of a highly parallel processor for a level 1 pixel trigger

    SciTech Connect

    Cancelo, G.; Gottschalk, Erik Edward; Pavlicek, V.; Wang, M.; Wu, J.

    2003-01-01

    The present work describes the architecture and data flow analysis of a highly parallel processor for the Level 1 Pixel Trigger for the BTeV experiment at Fermilab. First the Level 1 Trigger system is described. Then the major components are analyzed by resorting to mathematical modeling. Also, behavioral simulations are used to confirm the models. Results from modeling and simulations are fed back into the system in order to improve the architecture, eliminate bottlenecks, allocate sufficient buffering between processes and obtain other important design parameters. An interesting feature of the current analysis is that the models can be extended to a large class of architectures and parallel systems.
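
    The buffer-sizing question addressed by the modeling can be illustrated with a toy discrete-time queue; the arrival probability and service time here are arbitrary, not BTeV parameters:

```python
import random

def simulate_buffer(n_ticks, arrival_prob, service_time, seed=1):
    """Toy model of one pipeline stage: events arrive at random, the
    processor takes a fixed number of ticks per event, and we record the
    peak queue depth, i.e. the buffering the stage would need upstream."""
    random.seed(seed)
    queue, busy, peak = 0, 0, 0
    for _ in range(n_ticks):
        if random.random() < arrival_prob:
            queue += 1                       # new event enters the buffer
        if busy == 0 and queue > 0:
            queue -= 1                       # processor picks up an event
            busy = service_time
        if busy > 0:
            busy -= 1                        # one tick of processing done
        peak = max(peak, queue)
    return peak
```

    With one arrival per tick but two ticks of service, the stage is overloaded and the required buffer grows without bound, which is exactly the kind of bottleneck the data-flow analysis is meant to expose before hardware is built.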

  15. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    NASA Astrophysics Data System (ADS)

    Di Mattia, A.; ATLAS Collaboration

    2010-04-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a "stress test" of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic rays.
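
    The seeded, step-wise strategy can be sketched as an early-exit chain of increasingly expensive steps; the step functions below are hypothetical stand-ins for real reconstruction stages:

```python
def hlt_stepwise(event, steps):
    """Seeded, step-wise selection: each step reconstructs only what it
    needs and may reject immediately, so most background exits the chain
    after the cheapest early steps."""
    for step in steps:
        if not step(event):
            return False  # earliest possible rejection
    return True

# Hypothetical three-step chain: calorimeter seed, then tracking, then
# track-seed matching (each later step is costlier than the previous).
steps = [lambda e: e["seed_et"] > 20.0,
         lambda e: e["n_tracks"] > 0,
         lambda e: e["track_matches_seed"]]
accept = hlt_stepwise({"seed_et": 35.0, "n_tracks": 2,
                       "track_matches_seed": True}, steps)
```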

  16. Design and Implementation of the New D0 Level-1 Calorimeter Trigger

    SciTech Connect

    Abolins, M.; Adams, M.; Adams, T.; Aguilo, E.; Anderson, J.; Bagby, L.; Ban, J.; Barberis, E.; Beale, S.; Benitez, J.; Biehl, J.; /Columbia U. /DAPNIA, Saclay /Delhi U. /Fermilab /Florida State U. /Indiana U. /Michigan State U. /Northeastern U. /Rice U. /Southern Methodist U. /University Coll., Dublin

    2007-09-01

    Increasing luminosity at the Fermilab Tevatron collider has led the D0 collaboration to make improvements to its detector beyond those already in place for Run IIa, which began in March 2001. One of the cornerstones of this Run IIb upgrade is a completely redesigned level-1 calorimeter trigger system. The new system employs novel architecture and algorithms to retain high efficiency for interesting events while substantially increasing rejection of background. We describe the design and implementation of the new level-1 calorimeter trigger hardware and discuss its performance during Run IIb data taking. In addition to strengthening the physics capabilities of D0, this trigger system will provide valuable insight into the operation of analogous devices to be used at LHC experiments.

  17. A FPGA Implementation of JPEG Baseline Encoder for Wearable Devices

    PubMed Central

    Li, Yuecheng; Jia, Wenyan; Luan, Bo; Mao, Zhi-hong; Zhang, Hong; Sun, Mingui

    2015-01-01

    In this paper, an efficient field-programmable gate array (FPGA) implementation of the JPEG baseline image compression encoder is presented for wearable devices in health and wellness applications. In order to gain flexibility in developing FPGA-specific software and to balance real-time performance against resource utilization, a high-level synthesis (HLS) tool is utilized in our system design. An optimized dataflow configuration with a padding scheme simplifies the timing control for data transfer. Our experiments with a system-on-chip multi-sensor system have verified our FPGA implementation with respect to real-time performance, computational efficiency, and FPGA resource utilization. PMID:26190911
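
    The computational core that an HLS tool would map to hardware is the 8x8 forward DCT of the JPEG baseline standard; a direct, unoptimized reference version for orientation (real implementations use factored fast-DCT forms):

```python
import math

def dct_2d_8x8(block):
    """Reference 8x8 forward DCT-II as defined for JPEG baseline; block is
    an 8x8 list of lists of sample values."""
    N = 8
    def c(k):                       # normalization factor
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A flat block concentrates all of its energy in the DC coefficient.
coeffs = dct_2d_8x8([[8.0] * 8 for _ in range(8)])
```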

  18. The BaBar Level 1 Drift-Chamber Trigger Upgrade With 3D Tracking

    SciTech Connect

    Chai, X.D.; /Iowa U.

    2005-11-29

    At BABAR, the Level 1 Drift Chamber trigger is being upgraded to reduce increasing background rates while the PEP-II luminosity keeps improving. This upgrade uses the drift time information and stereo wires in the drift chamber to perform a 3D track reconstruction that effectively rejects background events spread out along the beam line.

  19. 78 FR 37516 - WTO Agricultural Quantity-Based Safeguard Trigger Levels

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... of Agriculture in Presidential Proclamation No. 6763, dated December 23, 1994, 60 FR 1005 (Jan. 4... Safeguard Trigger Levels, published in the Federal Register at 60 FR 427 (Jan. 4, 1995). Notice: As provided... Safeguard Action published in the Federal Register, at 60 FR 427 (Jan. 4, 1995). Issued at Washington,...

  20. WATER LEVEL DRAWDOWN TRIGGERS SYSTEM-WIDE BUBBLE RELEASE FROM RESERVOIR SEDIMENTS

    EPA Science Inventory

    Reservoirs are an important anthropogenic source of methane and ebullition is a key pathway by which methane stored in reservoir sediments can be released to the atmosphere. Changes in hydrostatic pressure during periods of falling water levels can trigger bubbling events, sugge...

  1. The First Level Trigger of JEM-EUSO: Concept and tests

    NASA Astrophysics Data System (ADS)

    Bertaina, M.; Caruso, R.; Catalano, O.; Contino, G.; Fenu, F.; Mignone, M.; Mulas, R.

    2016-07-01

    The trigger system of JEM-EUSO is designed to meet specific challenging requirements. These include managing a large number of pixels (~3·10^5) and using very fast, low-power, radiation-hard electronics. It must achieve high signal-to-noise performance and flexibility and cope with the limited down-link transmission rate from the International Space Station (ISS) to Earth. A general overview of the First Level Trigger for cosmic ray detection is given; tests that validate its performance are discussed.

  2. Radiation Tolerant Antifuse FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian; McCollum, John; Parker, Wanida; Katz, Rich; Kleyner, Igor; Day, John H. (Technical Monitor)

    2002-01-01

    The total dose performance of the antifuse FPGA for space applications is summarized. Optimization of the radiation tolerance in the fabless model is the main theme. Mechanisms to explain the variation in different products are discussed.

  3. Public Key FPGA Software

    Energy Science and Technology Software Center (ESTSC)

    2013-07-25

    The Public Key (PK) FPGA software performs asymmetric authentication using the 163-bit Elliptic Curve Digital Signature Algorithm (ECDSA) on an embedded FPGA platform. A digital signature is created on user-supplied data, and communication with a host system is performed via a Serial Peripheral Interface (SPI) bus. Software includes all components necessary for signing, including custom random number generator for key creation and SHA-256 for data hashing.

  4. The architecture of the CMS Level-1 Trigger Control and Monitoring System using UML

    NASA Astrophysics Data System (ADS)

    Magrans de Abril, Marc; Da Rocha Melo, Jose L.; Ghabrous Larrea, Carlos; Hammer, Josef; Hartl, Christian; Lazaridis, Christos

    2011-12-01

    The architecture of the Compact Muon Solenoid (CMS) Level-1 Trigger Control and Monitoring software system is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore its design has foreseen the provision of the software infrastructure for detailed testing of the trigger system during beam down time. This is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. The architecture of this system is described using the industry-standard Universal Modeling Language (UML). This way the relationships between the different subcomponents of the system become clear and all software upgrades and modifications are simplified. The described architecture has allowed for frequent upgrades that were necessary during the commissioning phase of CMS when the trigger system evolved constantly. As a secondary objective, the paper provides a UML usage example and tries to encourage the standardization of the software documentation of large projects across the LHC and High Energy Physics community.

  5. Online measurement of LHC beam parameters with the ATLAS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Strauss, E.

    2012-06-01

    We present an online measurement of the LHC beamspot parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beamspot values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual bunch crossings have allowed for studies of single-bunch distributions as well as the behavior of bunch trains. This talk will cover the constraints imposed by the online environment and describe how these measurements are accomplished with the given resources. The algorithm tasks must be completed within the time constraints of the Level 2 trigger, with limited CPU and bandwidth allocations. This places an emphasis on efficient algorithm design and the minimization of data requests.
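
    The resolution correction mentioned above is a subtraction in quadrature; a one-dimensional sketch of the idea (the actual measurement fits the aggregated vertex distribution rather than taking raw moments):

```python
import math

def beamspot_width(vertex_xs, resolution):
    """Beamspot position and size along one axis: take the mean and spread
    of reconstructed vertex positions, then remove the vertexing resolution
    in quadrature: sigma_beam^2 = sigma_meas^2 - sigma_res^2. (In situ, the
    resolution would come from split-track vertex pairs, as described above.)"""
    n = len(vertex_xs)
    mean = sum(vertex_xs) / n
    var = sum((x - mean) ** 2 for x in vertex_xs) / n
    return mean, math.sqrt(max(0.0, var - resolution ** 2))

# Toy numbers: measured spread 1.0, resolution 0.6 -> true width 0.8.
pos, width = beamspot_width([0.0, 2.0], 0.6)
```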

  6. ADC and TDC implemented using FPGA

    SciTech Connect

    Wu, Jinyuan; Hansen, Sten; Shi, Zonghan; /Fermilab

    2007-11-01

    Several tests of FPGA devices programmed as analog waveform digitizers are discussed. The ADC uses a ramping-comparing scheme, so a multi-channel ADC can be implemented with only a few resistors and capacitors as external components. Periodic logic levels are shaped by a passive RC network to generate exponential ramps. The FPGA differential input buffers are used as comparators to compare the ramps with the input signals. The times at which these ramps cross the input signals are digitized by time-to-digital converters (TDCs) implemented within the FPGA. The TDC portion of the logic alone potentially has a broad range of applications in HEP and nuclear science. A 96-channel TDC card using FPGAs as TDCs, being designed for the Fermilab MIPP electronics upgrade project, is discussed. A deserializer circuit based on the multisampling scheme used in the TDC, the 'Digital Phase Follower' (DPF), is also documented.
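The ramp-compare principle can be illustrated numerically: the TDC timestamp of the comparator flip is mapped back to a voltage through the known shape of the exponential RC ramp. The supply level, time constant, and TDC bin size below are assumptions for the sketch, not values from the paper:

```python
import math

VDD, RC = 3.3, 1.0e-6  # assumed drive level (V) and RC time constant (s)

def crossing_time(v_in):
    """Time at which the exponential ramp V(t) = VDD*(1 - exp(-t/RC))
    crosses the input level -- what the comparator + TDC measure."""
    return -RC * math.log(1.0 - v_in / VDD)

def decode(t, lsb=25e-12):
    """Convert a TDC timestamp (quantized to one assumed LSB) back to
    volts by evaluating the ramp at the quantized time."""
    t_q = round(t / lsb) * lsb
    return VDD * (1.0 - math.exp(-t_q / RC))

v = 1.2
print(round(decode(crossing_time(v)), 4))  # ~1.2 after quantization
```

Because the ramp is exponential rather than linear, the time-to-voltage mapping must invert the exponential, as `decode` does; a lookup table would do the same job in firmware.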

  7. Deferred High Level Trigger in LHCb: A Boost to CPU Resource Utilization

    NASA Astrophysics Data System (ADS)

    Frank, M.; Gaspar, C.; Herwijnen, E. v.; Jost, B.; Neufeld, N.

    2014-06-01

    The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1600 physical nodes, each equipped with at least 1 TB of local storage space. This work describes an architecture that trebles the available CPU power of the HLT farm, given that the LHC collider in previous years delivered stable physics beams only about 30% of the time. The gain is achieved by splitting the event selection process in two: a first stage reduces the data taken during stable beams and buffers the preselected particle collisions locally; a second stage, running constantly at lower priority, then finalizes the event filtering process and benefits fully from the time when the LHC does not deliver stable beams, e.g. while preparing a new physics fill or during periods used for machine development.
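The two-stage split can be sketched with a toy model: a fast first stage buffers preselected events to local disk while beams are stable, and a low-priority second stage drains the buffer during interfill periods. The selection fractions and buffer handling here are illustrative, not LHCb's actual trigger logic:

```python
from collections import deque

class DeferredHLT:
    """Toy model of a deferred two-stage trigger (selection fractions
    and the integer 'events' are invented for illustration)."""
    def __init__(self):
        self.buffer = deque()   # stands in for the 1 TB local disk
        self.accepted = []

    def stage1(self, event):
        """Fast inclusive preselection run during stable beams."""
        if event % 10 == 0:
            self.buffer.append(event)

    def stage2_drain(self, budget):
        """Full, slower filtering run at low priority; processes at
        most `budget` buffered events per call."""
        done = 0
        while self.buffer and done < budget:
            ev = self.buffer.popleft()
            if ev % 20 == 0:
                self.accepted.append(ev)
            done += 1
        return done

hlt = DeferredHLT()
for ev in range(1000):        # stable-beam period: preselect and buffer
    hlt.stage1(ev)
while hlt.buffer:             # interfill period: finish the filtering
    hlt.stage2_drain(budget=16)
print(len(hlt.accepted))  # 50
```

The key property is that stage 2 never needs to keep up with the live event rate; it only needs enough average throughput to drain the disk buffer before the next fill.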

  8. Drought-trigger ground-water levels and analysis of historical water-level trends in Chester County, Pennsylvania

    USGS Publications Warehouse

    Schreffler, Curtis L.

    1996-01-01

    The Chester County observation-well network was established in 1973 through a cooperative agreement between the Chester County Water Resources Authority (CCWRA) and the U.S. Geological Survey. The network was established to monitor local ground-water levels, to determine drought conditions, and to monitor ground-water-level trends. Drought-warning and drought-emergency water-level triggers were determined for 20 of the 23 wells in the Chester County observation-well network. A statistical test for rising or declining water-level trends was performed on data for all wells in the network. Water-level data from two of the wells showed a rising trend; a decrease in ground-water pumping in the area near these wells was probably the reason for the rise in water levels.

  9. Integrated upstream parasitic event building architecture for BTeV level 1 pixel trigger system

    SciTech Connect

    Wu, Jin-Yuan; Wang, M.; Gottschalk, E.; Christian, D.; Li, X.; Shi, Z.; Pavlicek, V.; Cancelo, G.; /Fermilab

    2006-03-01

    Contemporary event building approaches use data switches, either homemade or commercial off-the-shelf, to merge data from different channels and distribute them among processor nodes. However, in many trigger and DAQ systems, the merging and distributing functions can often be performed in pre-processing stages. By carefully integrating these functions into the upstream pre-processing stages, events can be built without dedicated switches. In addition to reducing cost, extra benefits are gained when the event is built early upstream. In this document, an example of the integrated upstream parasitic event building architecture that has been studied for the BTeV level 1 pixel trigger system is described. Several design considerations that experimentalists on other projects might find of interest are also discussed.

  10. Electrons and photons at High Level Trigger in CMS for Run II

    NASA Astrophysics Data System (ADS)

    Anuar, Afiq A.

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover physics analyses. The strategy mainly relies on porting online the ingredients that have been successfully applied in the offline reconstruction, thus moving the HLT selection closer to offline cuts. Improvements in HLT electron and photon definitions will be presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new particle-flow-based isolation approach and pileup mitigation techniques, and the electron-dedicated track fitting algorithm based on a Gaussian Sum Filter.

  11. Can Increased CO2 Levels Trigger a Runaway Greenhouse on the Earth?

    NASA Astrophysics Data System (ADS)

    Ramirez, R.

    2014-04-01

    Recent one-dimensional (globally averaged) climate model calculations suggest that increased atmospheric CO2 could conceivably trigger a runaway greenhouse if CO2 concentrations were approximately 100 times higher than today. The new prediction runs contrary to previous calculations, which indicated that CO2 increases could not trigger a runaway, even at Venus-like CO2 concentrations. Goldblatt et al. argue that this different behavior is a consequence of updated absorption coefficients for H2O that make a runaway more likely. Here, we use a 1-D cloud-free climate model with similar, up-to-date absorption coefficients, but with a self-consistent methodology, to demonstrate that CO2 increases cannot induce a runaway greenhouse on the modern Earth. However, these initial calculations do not include cloud feedback, which may be positive at higher temperatures, destabilizing Earth's climate. We then show new calculations demonstrating that cirrus clouds cannot trigger a runaway, even in the complete absence of low clouds. Thus, the habitability of an Earth-like planet at Earth's distance appears to be ensured, irrespective of the sign of cloud feedback. Our results are of importance to Earth-like planets that receive similar insolation levels as does the Earth and to the ongoing question about cloud response at higher temperatures.

  12. Upgrade of the trigger system of CMS

    NASA Astrophysics Data System (ADS)

    Jeitler, Manfred; CMS Collaboration

    2013-08-01

    Various parts of the CMS trigger and in particular the Level-1 hardware trigger will be upgraded to cope with increasing luminosity, using more selective trigger conditions at Level 1 and improving the reliability of the system. Many trigger subsystems use FPGAs (Field Programmable Gate Arrays) in the electronics and will benefit from developments in this technology, allowing us to place much more logic into a single FPGA chip, thus reducing the number of chips, electronic boards and interconnections and in this way improving reliability. A number of subsystems plan to switch from the old VME bus to the new microTCA crate standard. Using similar approaches, identical modules and common software wherever possible will reduce costs and manpower requirements and improve the serviceability of the whole trigger system. The computer-farm based High-Level Trigger will not only be extended by using increasing numbers of more powerful PCs but there are also concepts for making it more robust and the software easier to maintain, which will result in better efficiency of the whole system.

  13. Drought-Trigger Ground-Water Levels in Chester County, Pennsylvania, for the Period of Record Ending May 2006

    USGS Publications Warehouse

    Cinotto, Peter J.

    2007-01-01

    This report presents the results of a study by the U.S. Geological Survey (USGS), in cooperation with the Chester County Water Resources Authority (CCWRA), to update the drought-trigger water levels for the Chester County observation-well network. The Chester County observation-well network was established in 1973 through a cooperative agreement between the CCWRA and the USGS to monitor local ground-water levels and trends and to determine drought conditions. In 1990 and again in 1997, drought-warning and drought-emergency water-level triggers were determined for the majority of wells in the existing Chester County observation-well network of 23 wells. Since 1997, the Chester County observation-well network expanded to 29 wells, some of the original wells were destroyed, and additional monthly water-level observations were made to allow for better statistical relations. Because of these changes, new statistics for water-level triggers were required. For this study, 19 of the 29 wells in the observation-well network were used to compute drought-trigger water levels. An additional 'drought-watch water-level trigger' category was developed to make the Chester County drought-trigger water-level categories consistent with those implemented by the Pennsylvania Department of Environmental Protection (PaDEP). The three drought-trigger water-level categories, as defined by PaDEP are 1) 'drought watch' when at the 75th-percentile level; 2) 'drought warning' when at the 90th-percentile level; and 3) 'drought emergency' when at the 95th-percentile level. A revised methodology, resulting from longer periods of record representing ground-water and climatic conditions and changes in local water use, has resulted in some observed differences in drought-trigger water levels. A comparison of current drought-trigger water levels to those calculated in 1997 shows the largest mean annual change in percentile values was in northeastern Chester County. In this northeastern region, the
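The percentile-based trigger scheme described above can be sketched in a few lines, assuming depth-to-water observations in which larger values mean drier conditions. This is illustrative only; the actual USGS computation is done per well and accounts for seasonality and period of record:

```python
import statistics

def drought_triggers(depths_ft):
    """Given monthly depth-to-water observations (feet below land
    surface; larger = drier), return PaDEP-style watch/warning/
    emergency triggers at the 75th/90th/95th percentiles."""
    pcts = statistics.quantiles(depths_ft, n=100, method="inclusive")
    # pcts[k-1] is the k-th percentile cut point.
    return {"watch": pcts[74], "warning": pcts[89], "emergency": pcts[94]}

# Synthetic record: 101 evenly spaced observations from 10 to 35 ft.
obs = [10 + 0.25 * i for i in range(101)]
print(drought_triggers(obs))
```

With the evenly spaced synthetic record the triggers fall exactly on the 75th, 90th, and 95th observations (28.75, 32.5, and 33.75 ft).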

  14. Estimating the extent of stress influence by using earthquake triggering groundwater level variations in Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Shih-Jung; Hsu, Kuo-Chin; Lai, Wen-Chi; Wang, Chein-Lee

    2015-11-01

    Groundwater level variations associated with earthquake events may reveal useful information. This study estimates the extent of stress influence, defined as the distance over which an earthquake can induce a step change of the groundwater level, using earthquake-triggered groundwater level variations in Taiwan. Groundwater variations were first characterized based on the dynamics of groundwater level changes dominantly triggered by earthquakes. The step-change data in co-seismic groundwater level variations were used to analyze the extent of stress influence for earthquakes. From the data analysis, the maximum extent of stress influence is 250 km around Taiwan. A two-dimensional approach was adopted to develop two models for estimating the maximum extent of stress influence for earthquakes. In the developed models, the extent of stress influence is proportional to the earthquake magnitude and inversely proportional to the groundwater level change. The model equations can be used to calculate the influence radius of stress from an earthquake by using the observed change of groundwater level and the earthquake magnitude. The models were applied to estimate the area of anomalous stress, defined as the possible area where strain energy is accumulated, using the cross-areas method. The results show that the estimated area of anomalous stress is not always close to the epicenter; complex geological structures, material heterogeneity, and anisotropy may explain this disagreement. More data collection and model refinement can improve the proposed model. This study shows the potential of using groundwater level variations for capturing seismic information. The proposed concept of extent of stress influence can be used to estimate earthquake effects in hydraulic engineering, mining engineering, carbon dioxide sequestration, and related fields. This study provides a concept for estimating the possible areas of anomalous stress for a forthcoming earthquake.
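The stated proportionalities (radius grows with magnitude, shrinks with the observed water-level step) can be captured in a toy function. The linear form and the constant `k` are assumptions made for illustration; the paper's fitted equations differ:

```python
def influence_radius_km(magnitude, dh_m, k=10.0):
    """Toy form of the relation described in the abstract: influence
    radius proportional to earthquake magnitude and inversely
    proportional to the co-seismic water-level step dh (m).
    The constant k and the linear dependence are assumptions."""
    return k * magnitude / dh_m

# A larger step change at a well implies the source was closer:
print(influence_radius_km(6.0, 2.0) < influence_radius_km(6.0, 0.5))  # True
```

Even as a toy, this captures the inversion the authors perform: given a magnitude from seismic networks and an observed step at a well, solve for the radius within which that step is plausible.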

  15. A binary link tracker for the BaBar level 1 trigger system

    SciTech Connect

    Berenyi, A.; Chen, H.K.; Dao, K.

    1999-08-01

    The BaBar detector at PEP-II will operate in a high-luminosity e+e- collider environment near the Upsilon(4S) resonance with the primary goal of studying CP violation in the B meson system. In this environment, typical physics events of interest involve multiple charged particles. These events are identified by reconstructing and counting charged-particle tracks in real time in a fast first-level (Level 1) trigger system. For this purpose, a Binary Link Tracker Module (BLTM) was designed and fabricated for the BaBar Level 1 Drift Chamber trigger system. The BLTM is responsible for linking track segments, constructed by the Track Segment Finder Modules (TSFM), into complete tracks. A single BLTM module processes a 360 MByte/s stream of segment hit data, corresponding to information from the entire Drift Chamber, and implements a fast and robust algorithm that tolerates high hit occupancies as well as local inefficiencies of the Drift Chamber. The algorithms and the necessary control logic of the BLTM were implemented in Field Programmable Gate Arrays (FPGAs), using the VHDL hardware description language. The finished 9U x 400 mm Euro-format board contains roughly 75,000 gates of programmable logic, or about 10,000 lines of VHDL code, synthesized into five FPGAs.
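The segment-linking idea maps naturally onto bit-parallel logic. A rough Python rendering of such a linker follows; the cell layout and the ±1-cell neighbour window are invented for the sketch (the actual BLTM algorithm is VHDL in FPGAs):

```python
def link_tracks(layers, width=16):
    """Bit-parallel segment linking in the spirit of a binary link
    tracker: each superlayer is a bit mask of hit segments, and a
    segment survives only if it links to a hit in the adjacent layer
    within +/-1 cell.  Propagating from the outermost layer inward
    leaves set bits only where complete chains exist."""
    mask = (1 << width) - 1
    live = layers[-1]
    for hits in reversed(layers[:-1]):
        neighbours = (live | (live << 1) | (live >> 1)) & mask
        live = hits & neighbours          # keep linkable segments only
        if not live:
            break
    return bin(live).count("1")           # number of complete chains

# Two chains across four layers; one stray outer segment links nowhere.
layers = [0b0010010, 0b0010010, 0b0100010, 0b0100110]
print(link_tracks(layers))  # 2
```

Because every layer is processed with a handful of shifts and ANDs, the same structure runs in a single clock per layer in an FPGA, which is what makes the approach tolerant of high occupancy at fixed latency.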

  16. Readout, first- and second-level triggers of the new Belle silicon vertex detector

    NASA Astrophysics Data System (ADS)

    Friedl, M.; Abe, R.; Abe, T.; Aihara, H.; Asano, Y.; Aso, T.; Bakich, A.; Browder, T.; Chang, M. C.; Chao, Y.; Chen, K. F.; Chidzik, S.; Dalseno, J.; Dowd, R.; Dragic, J.; Everton, C. W.; Fernholz, R.; Fujii, H.; Gao, Z. W.; Gordon, A.; Guo, Y. N.; Haba, J.; Hara, K.; Hara, T.; Harada, Y.; Haruyama, T.; Hasuko, K.; Hayashi, K.; Hazumi, M.; Heenan, E. M.; Higuchi, T.; Hirai, H.; Hitomi, N.; Igarashi, A.; Igarashi, Y.; Ikeda, H.; Ishino, H.; Itoh, K.; Iwaida, S.; Kaneko, J.; Kapusta, P.; Karawatzki, R.; Kasami, K.; Kawai, H.; Kawasaki, T.; Kibayashi, A.; Koike, S.; Korpar, S.; Križan, P.; Kurashiro, H.; Kusaka, A.; Lesiak, T.; Limosani, A.; Lin, W. C.; Marlow, D.; Matsumoto, H.; Mikami, Y.; Miyake, H.; Moloney, G. R.; Mori, T.; Nakadaira, T.; Nakano, Y.; Natkaniec, Z.; Nozaki, S.; Ohkubo, R.; Ohno, F.; Okuno, S.; Onuki, Y.; Ostrowicz, W.; Ozaki, H.; Peak, L.; Pernicka, M.; Rosen, M.; Rozanska, M.; Sato, N.; Schmid, S.; Shibata, T.; Stamen, R.; Stanič, S.; Steininger, H.; Sumisawa, K.; Suzuki, J.; Tajima, H.; Tajima, O.; Takahashi, K.; Takasaki, F.; Tamura, N.; Tanaka, M.; Taylor, G. N.; Terazaki, H.; Tomura, T.; Trabelsi, K.; Trischuk, W.; Tsuboyama, T.; Uchida, K.; Ueno, K.; Ueno, K.; Uozaki, N.; Ushiroda, Y.; Vahsen, S.; Varner, G.; Varvell, K.; Velikzhanin, Y. S.; Wang, C. C.; Wang, M. Z.; Watanabe, M.; Watanabe, Y.; Yamada, Y.; Yamamoto, H.; Yamashita, Y.; Yamashita, Y.; Yamauchi, M.; Yanai, H.; Yang, R.; Yasu, Y.; Yokoyama, M.; Ziegler, T.; Žontar, D.

    2004-12-01

    A major upgrade of the Silicon Vertex Detector (SVD 2.0) of the Belle experiment at the KEKB factory was installed along with new front-end and back-end electronics systems during the summer shutdown period in 2003 to cope with higher particle rates, improve the track resolution and meet the increasing requirements of radiation tolerance. The SVD 2.0 detector modules are read out by VA1TA chips which provide "fast or" (hit) signals that are combined by the back-end FADCTF modules into coarse but immediate level 0 track trigger signals at rates of several tens of kHz. Moreover, the digitized detector signals are compared to threshold lookup tables in the FADCTFs to pass on hit information on a single-strip basis to the subsequent level 1.5 trigger system, which reduces the rate to below the kHz range. Both the FADCTF and level 1.5 electronics make use of parallel real-time processing in Field Programmable Gate Arrays (FPGAs), while further data acquisition and event building is done by PC farms running Linux. The new readout system hardware is described and first results obtained with cosmics are shown.

  17. Prototype of a File-Based High-Level Trigger in CMS

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Coarasa, J. A.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Ozga, W.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Spataru, Ac; Stieger, B.; Sumorok, K.; Veverka, J.; Wakefiled, C. C.; Zejdl, P.

    2014-06-01

    The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ~1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ~50 builder units (BUs). Each BU writes the raw events at ~2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring metadata back to disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
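The file-based decoupling can be mimicked in a few lines: a builder unit drops raw files onto a shared file system and a filter unit picks them up, with no direct network protocol between the two. The file names, one-event-per-line format, and JSON metadata layout are invented for this sketch, not CMS's actual format:

```python
import json, os, tempfile

def run_filter_unit(ramdisk, select_every=100):
    """Toy filter unit: read raw event files a builder unit left on a
    shared file system, keep ~1% of events, and write the selection
    plus monitoring metadata back to disk."""
    seen, kept = 0, []
    for name in sorted(os.listdir(ramdisk)):
        if not name.endswith(".raw"):
            continue
        with open(os.path.join(ramdisk, name)) as f:
            for line in f:                    # one "event" per line
                seen += 1
                if seen % select_every == 0:  # stand-in for HLT decision
                    kept.append(line.strip())
    out = os.path.join(ramdisk, "selected.json")
    with open(out, "w") as f:
        json.dump({"processed": seen, "accepted": kept}, f)
    return seen, len(kept)

# Builder-unit side: write one raw file with 1000 events.
ramdisk = tempfile.mkdtemp()
with open(os.path.join(ramdisk, "bu0.raw"), "w") as f:
    f.write("\n".join(f"event-{i}" for i in range(1000)))
print(run_filter_unit(ramdisk))  # (1000, 10)
```

The appeal of the design is exactly this looseness: either side can be restarted, upgraded, or debugged independently, since the only contract is the files on disk.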

  18. FPGA based control system for space instrumentation

    NASA Astrophysics Data System (ADS)

    Di Giorgio, Anna M.; Cerulli Irelli, Pasquale; Nuzzolo, Francesco; Orfei, Renato; Spinoglio, Luigi; Liu, Giovanni S.; Saraceno, Paolo

    2008-07-01

    The prototype for a general-purpose FPGA-based control system for space instrumentation is presented, with particular attention to the instrument control application software. The system hardware is based on the LEON3FT processor, which gives the flexibility to configure the chip with only the necessary hardware functionalities, from simple logic up to small dedicated processors. The instrument control software is developed in ANSI C and, for time-critical (<10 μs) commanding sequences, implements an internal instruction sequencer, triggered via an interrupt service routine based on a high-priority hardware interrupt.

  19. Development of High Level Trigger Software for Belle II at SuperKEKB

    NASA Astrophysics Data System (ADS)

    Lee, S.; Itoh, R.; Katayama, N.; Mineo, S.

    2011-12-01

    The Belle collaboration has been trying for 10 years to reveal the mystery of the current matter-dominated universe. However, much more statistics is required to search for New Physics through quantum loops in decays of B mesons. In order to increase the experimental sensitivity, the next-generation B-factory, SuperKEKB, is planned. The design luminosity of SuperKEKB is 8 x 10^35 cm^-2 s^-1, a factor of 40 above KEKB's peak luminosity. At this high luminosity, the level 1 trigger of the Belle II experiment will stream events of 300 kB size at a 30 kHz rate. To reduce the data flow to a manageable level, a high-level trigger (HLT) is needed, which will be implemented using the full offline reconstruction on a large-scale PC farm. There, physics-level event selection is performed, reducing the event rate by a factor of ~10 to a few kHz. To execute the reconstruction the HLT uses the offline event processing framework basf2, which has parallel processing capabilities used for multi-core processing and PC clusters. The event data handling in the HLT is fully object oriented, utilizing ROOT I/O with a new method of object passing over UNIX socket connections. Also under consideration is the use of the HLT output to reduce the pixel detector event size by saving only hits associated with a track, resulting in an additional data reduction of ~100 for the pixel detector. In this contribution, the design and implementation of the Belle II HLT are presented together with a report of preliminary testing results.
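The object-passing mechanism (serialized objects streamed over a UNIX socket between a dispatcher and worker processes) can be sketched with pickle standing in for ROOT I/O. The length-prefix framing and the event layout are invented for the sketch:

```python
import os, pickle, socket, struct

def send_obj(sock, obj):
    """Length-prefixed pickle over a UNIX socket -- a stand-in for the
    ROOT-object streaming used between basf2 worker processes."""
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_obj(sock):
    size = struct.unpack("!I", recv_exact(sock, 4))[0]
    return pickle.loads(recv_exact(sock, size))

parent, child = socket.socketpair()
if os.fork() == 0:                       # worker: "reconstruct" one event
    event = recv_obj(child)
    event["tracks"] = len(event["hits"]) // 2
    send_obj(child, event)
    os._exit(0)
send_obj(parent, {"hits": [1, 2, 3, 4, 5, 6]})
print(recv_obj(parent)["tracks"])  # 3
```

The length prefix is what makes the stream self-framing: the receiver always knows exactly how many bytes belong to the next object, so many events can flow back-to-back over one socket.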

  20. FPGA Vision Data Architecture

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  1. Results from the first p+p runs of the ALICE High Level Trigger at LHC

    NASA Astrophysics Data System (ADS)

    Kanaki, Kalliopi; ALICE HLT Collaboration

    2011-12-01

    The High Level Trigger (HLT) for the ALICE experiment at the LHC is a powerful, sophisticated tool aimed at compressing the raw data volume and issuing selective triggers for events with desirable physics content. At its current state it integrates information from all major ALICE detectors, i.e. the inner tracking system, the time projection chamber, the electromagnetic calorimeters, the transition radiation detector and the muon spectrometer, performing real-time event reconstruction. The engine behind the HLT is a high-performance computing cluster of several hundred nodes. It has to reduce the data rate from 25 GB/s to 1.25 GB/s to fit the DAQ mass-storage bandwidth. The cluster is served by a full Gigabit Ethernet network, in addition to an InfiniBand backbone network. To cope with the great challenge of Pb+Pb collisions in autumn 2010, its performance capabilities are being enhanced with the addition of new nodes; towards the same end, the first GPU co-processors are in place. During the first period of data taking with p+p collisions the HLT was extensively used to reconstruct, analyze and display data from the various participating detectors. Among other tasks it contributed to the monitoring of detector performance, selected events for calibration and efficiency studies, and estimated primary and secondary vertices from p+p collisions, identifying V0 topologies. The experience gained during these first months of online operation will be presented.

  2. Ground-level observation of a terrestrial gamma ray flash initiated by a triggered lightning

    NASA Astrophysics Data System (ADS)

    Hare, B. M.; Uman, M. A.; Dwyer, J. R.; Jordan, D. M.; Biggerstaff, M. I.; Caicedo, J. A.; Carvalho, F. L.; Wilkes, R. A.; Kotovsky, D. A.; Gamerota, W. R.; Pilkey, J. T.; Ngin, T. K.; Moore, R. C.; Rassoul, H. K.; Cummer, S. A.; Grove, J. E.; Nag, A.; Betten, D. P.; Bozarth, A.

    2016-06-01

    We report on a terrestrial gamma ray flash (TGF) that occurred on 15 August 2014 coincident with an altitude-triggered lightning at the International Center for Lightning Research and Testing (ICLRT) in North Central Florida. The TGF was observed by a ground-level network of gamma ray, close electric field, distant magnetic field, Lightning Mapping Array (LMA), optical, and radar measurements. Simultaneous gamma ray and LMA data indicate that the upward positive leader of the triggered lightning flash induced relativistic runaway electron avalanches when the leader tip was at about 3.5 km altitude, resulting in the observed TGF. Channel luminosity and electric field data show that there was an initial continuous current (ICC) pulse in the lightning channel to ground during the time of the TGF. Modeling of the observed ICC pulse electric fields measured at close range (100-200 m) indicates that the ICC pulse current had both a slow and fast component (full widths at half maximum of 235 μs and 59 μs) and that the fast component was more or less coincident with the TGF, suggesting a physical association between the relativistic runaway electron avalanches and the ICC pulse observed at ground. Our ICC pulse model reproduces moderately well the measured close electric fields at the ICLRT as well as three independent magnetic field measurements made about 250 km away. Radar and LMA data suggest that there was negative charge near the region in which the TGF was initiated.

  3. The twilight zone: ambient light levels trigger activity in primitive ants

    PubMed Central

    Narendra, Ajay; Reid, Samuel F.; Hemmi, Jan M.

    2010-01-01

    Many animals become active during twilight, a narrow time window where the properties of the visual environment are dramatically different from both day and night. Despite the fact that many animals including mammals, reptiles, birds and insects become active in this specific temporal niche, we do not know what cues trigger this activity. To identify the onset of specific temporal niches, animals could anticipate the timing of regular events or directly measure environmental variables. We show that the Australian bull ant, Myrmecia pyriformis, starts foraging only during evening twilight throughout the year. The onset occurs neither at a specific temperature nor at a specific time relative to sunset, but at a specific ambient light intensity. Foraging onset occurs later when light intensities at sunset are brighter than normal or earlier when light intensities at sunset are darker than normal. By modifying ambient light intensity experimentally, we provide clear evidence that ants indeed measure light levels and do not rely on an internal rhythm to begin foraging. We suggest that the reason for restricting the foraging onset to twilight and measuring light intensity to trigger activity is to optimize the trade-off between predation risk and ease of navigation. PMID:20129978

  4. Performance of the Level-1 Muon Trigger for the CMS Endcap Muon System with Cosmic Rays and First LHC Beams

    NASA Astrophysics Data System (ADS)

    Gartner, Joseph

    2008-10-01

    We report on the performance of the level-1 muon trigger for the cathode strip chambers (CSCs) comprising the endcaps of the Compact Muon Solenoid (CMS) experiment. CMS is a general-purpose experiment designed to capitalize on the rich physics program of the Large Hadron Collider (LHC), which begins operation this autumn and which opens a window onto physics at the TeV energy scale. After many years of preparation, the CMS detectors and electronics have undergone a series of commissioning exercises involving the triggering and data acquisition of signals induced by cosmic ray muons and, most recently, first LHC beams. Here we report on the successful synchronization of signals from the 468 CSCs in the level-1 trigger path, and the successful triggering of the experiment based on those signals. The triggers provided by a specially built set of "Track-Finder" processors include triggers based on single CSC segments, tracks based on a coincidence of segments along a predefined road emanating from the beam collision point, and tracks parallel to the beam line that accept accelerator-induced halo muons. Evidence of the proper functioning of these triggers will be reported.

  5. TOTEM Trigger System Firmware

    NASA Astrophysics Data System (ADS)

    Kopal, Josef

    2014-06-01

    This paper describes the TOTEM Trigger System firmware that has been operational at the LHC since 2009. The TOTEM experiment is devoted to forward hadronic physics at collision energies from 2.7 to 14 TeV. It is composed of three different subdetectors placed at 9, 13.5, and 220 m from Interaction Point 5. Time-critical logic firmware is implemented inside FPGA circuits to examine collisions and select the relevant ones to be stored by the Data Acquisition (DAQ) system. The trigger system was modified in the 2012-2013 LHC runs, allowing the experiment to take data in cooperation with CMS.

  6. FPGA control utility in JAVA

    NASA Astrophysics Data System (ADS)

    Drabik, Paweł; Pozniak, Krzysztof T.

    2008-01-01

    Processing of the large amounts of data produced by high energy physics experiments is modeled here in the form of a multichannel, distributed measurement system based on photonic and electrical modules. A method to control such a system is presented in this paper, based on a new approach to address-space management called the Component Internal Interface (CII). The updatable and configurable environment provided by FPGAs fulfills the technological and functional demands imposed on complex measurement systems of this kind. The purpose, design process, and realization of the object-oriented software application, written in high-level code, are described. A few examples of the application's usage are presented. The application is intended for use in HEP experiments and in the FLASH and XFEL lasers.

  7. A highly selective first-level muon trigger with MDT chamber data for ATLAS at HL-LHC

    NASA Astrophysics Data System (ADS)

    Nowak, S.; Kroha, H.

    2016-07-01

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at HL-LHC where the instantaneous luminosity will be about an order of magnitude larger than the LHC instantaneous luminosity in Run 1. The first level muon trigger rate is dominated by low momentum muons below the nominal trigger threshold due to the moderate momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 μs. A hardware demonstrator of the fast read-out chain has been successfully tested at the HL-LHC operating conditions at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  8. A Level-1 Tracking Trigger for the CMS upgrade using stacked silicon strip detectors and advanced pattern technologies

    NASA Astrophysics Data System (ADS)

    Boudoul, G.

    2013-01-01

    Experience at high-luminosity hadron collider experiments shows that tracking information enhances the trigger rejection capabilities while retaining high efficiency for interesting physics events. The design of a tracking-based trigger for the High Luminosity LHC (HL-LHC) is an extremely challenging task, and requires the identification of high-momentum particle tracks as part of the Level 1 Trigger. Simulation studies show that this can be achieved by correlating hits on two closely spaced silicon strip sensors, and reconstructing tracks at L1 by employing an Associative Memory approach. Progress on the design and development of these stacked micro-strip prototype modules and the performance of a few prototype detectors will be presented. Preliminary results for a simulated tracker layout equipped with stacked modules are discussed in terms of pT resolution and triggering capabilities. Finally, a discussion of the L1 architecture will be given.
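The momentum discrimination from a stacked doublet follows from the track's bend across the small sensor gap: in the small-angle approximation, pT ≈ 0.3·B·r·Δr / (2·δ), where δ is the r-φ offset between the two hits. A hedged sketch (the layer radius and sensor spacing are illustrative numbers; 3.8 T is the CMS solenoid field):

```python
def stub_pt_gev(delta_mm, r_mm=1000.0, dr_mm=1.0, b_tesla=3.8):
    """Estimate track pT (GeV) from the r-phi offset ("bend") between
    hits on two closely spaced sensors at radius r separated by dr.
    Small-angle approximation; geometry values are assumptions."""
    return (0.3 * b_tesla * (r_mm / 1000.0) * (dr_mm / 1000.0)
            / (2 * delta_mm / 1000.0))

# A 0.285 mm offset at r = 1 m with 1 mm sensor spacing ~ 2 GeV:
print(round(stub_pt_gev(0.285), 2))  # 2.0
```

This is why a simple window cut on the hit offset acts as a pT threshold: high-momentum tracks produce small offsets, so rejecting wide stubs discards the soft tracks that dominate the rate.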

  9. Using FPGA Devices to Accelerate Biomolecular Simulations

    SciTech Connect

    Alam, Sadaf R; Agarwal, Pratul K; Smith, Melissa C; Vetter, Jeffrey S; Caliga, David E

    2007-03-01

A field-programmable gate array implementation of the particle-mesh Ewald molecular dynamics simulation method reduces the microprocessor time-to-solution by a factor of three while using only high-level languages. The application speedup on FPGA devices increases with the problem size. The authors use a performance model to analyze the potential of simulating large-scale biological systems faster than many cluster-based supercomputing platforms.

  10. Commodity multi-processor systems in the ATLAS level-2 trigger

    SciTech Connect

    Abolins, M.; Blair, R.; Bock, R.; Bogaerts, A.; Dawson, J.; Ermoline, Y.; Hauser, R.; Kugel, A.; Lay, R.; Muller, M.; Noffz, K.-H.; Pope, B.; Schlereth, J.; Werner, P.

    2000-05-23

    Low cost SMP (Symmetric Multi-Processor) systems provide substantial CPU and I/O capacity. These features together with the ease of system integration make them an attractive and cost effective solution for a number of real-time applications in event selection. In ATLAS the authors consider them as intelligent input buffers (active ROB complex), as event flow supervisors or as powerful processing nodes. Measurements of the performance of one off-the-shelf commercial 4-processor PC with two PCI buses, equipped with commercial FPGA based data source cards (microEnable) and running commercial software are presented and mapped on such applications together with a long-term program of work. The SMP systems may be considered as an important building block in future data acquisition systems.

  11. Flexible event reconstruction software chains with the ALICE High-Level Trigger

    NASA Astrophysics Data System (ADS)

    Ram, D.; Breitner, T.; Szostak, A.

    2012-12-01

The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/s, the current maximum mass-storage bandwidth available. Data flow in this cluster is controlled by a custom-designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data processing “chain” of detector data-analysis components. Data flows through this software chain in a pipelined fashion so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.
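A minimal sketch of such a pipelined component chain, with made-up component names and trivial arithmetic standing in for the real HLT analysis components:

```python
# Each "component" consumes an event stream and yields processed events;
# composing generators gives the pipelined chain described above, where
# several events can be in flight through the chain at once.

def source(events):
    for e in events:
        yield e

def cluster_finder(stream):
    for e in stream:
        # stand-in for a detector analysis step
        yield {"event": e, "clusters": e * 2}

def track_fitter(stream):
    for e in stream:
        e["tracks"] = e["clusters"] // 2
        yield e

# The "configuration" is simply how the components are composed.
chain = track_fitter(cluster_finder(source(range(3))))
results = list(chain)
print(results[0])  # {'event': 0, 'clusters': 0, 'tracks': 0}
```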

  12. Interfacing Detectors to Triggers And DAQ Electronics

    SciTech Connect

    Crosetto, Dario B.

    1999-05-03

The complete design of the front-end electronics interfacing the LHCb detectors, the Level-0 trigger, and the higher trigger levels, with flexible configuration parameters, has been made for (a) ASIC implementation and (b) FPGA implementation. Approaching designs in technology-independent form becomes essential given the rapid evolution of electronics. Constraining the entire design to a few types of replicated components, (a) the fully programmable 3D-Flow system and (b) the configurable front-end circuit described in this article, provides a further advantage: only one or two types of components will need to migrate to newer technologies. Basing the design of a system such as the LHCb project, which is to begin operation in 2006, on today's technology is not cost-effective; the effort required to migrate to a higher-performance technology would be almost equivalent to redesigning the architecture from scratch. The proposed design, combining the configurable front-end module described in this article with the scalable, fully programmable 3D-Flow system described elsewhere, and informed by the evolution of electronics over the past few years and the advances forecast for the years to come, aims to provide a technology-independent design which lends itself to any technology at any time. This technology independence rests mainly on generic, reusable HDL code, which allows very rapid realization of state-of-the-art circuits in terms of gate density, power dissipation, and clock frequency. The design of four trigger towers presently fits into an OR3T30 FPGA. Preliminary test results (provided in this paper) meet the functional requirements of LHCb and provide sufficient flexibility to introduce future changes. The complete system design is also provided, along with the integration of the front-end design into the entire system and the cost and dimensions of the electronics.

  13. FPGA Boot Loader and Scrubber

    NASA Technical Reports Server (NTRS)

    Wade, Randall S.; Jones, Bailey

    2009-01-01

    A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").
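The load/verify/reload cycle described above can be sketched as follows; `read_config` and `write_config` are hypothetical stand-ins for the board-specific access layer, not the actual program's interface:

```python
def scrub(golden, read_config, write_config, max_retries=3):
    """Read back the FPGA configuration, compare it to the golden image,
    and reload on mismatch (e.g. after a radiation-induced upset)."""
    for attempt in range(max_retries):
        if read_config() == golden:
            return attempt          # number of reloads that were needed
        write_config(golden)        # reload the corrupted configuration
    raise RuntimeError("configuration failed to verify after reloads")

# Simulated device: one upset, corrected by a single reload.
device = {"cfg": b"corrupted"}
golden = b"golden-bitstream"
n = scrub(golden,
          read_config=lambda: device["cfg"],
          write_config=lambda img: device.update(cfg=img))
print(n)  # 1
```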

  14. The FPGA Pixel Array Detector

    NASA Astrophysics Data System (ADS)

    Hromalik, Marianne S.; Green, Katherine S.; Philipp, Hugh T.; Tate, Mark W.; Gruner, Sol M.

    2013-02-01

    A proposed design for a reconfigurable x-ray Pixel Array Detector (PAD) is described. It operates by integrating a high-end commercial field programmable gate array (FPGA) into a 3-layer device along with a high-resistivity diode detection layer and a custom, application-specific integrated circuit (ASIC) layer. The ASIC layer contains an energy-discriminating photon-counting front end with photon hits streamed directly to the FPGA via a massively parallel, high-speed data connection. FPGA resources can be allocated to perform user defined tasks on the pixel data streams, including the implementation of a direct time autocorrelation function (ACF) with time resolution down to 100 ns. Using the FPGA at the front end to calculate the ACF reduces the required data transfer rate by several orders of magnitude when compared to a fast framing detector. The FPGA-ASIC high-speed interface, as well as the in-FPGA implementation of a real-time ACF for x-ray photon correlation spectroscopy experiments has been designed and simulated. A 16×16 pixel prototype of the ASIC has been fabricated and is being tested.
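The in-FPGA ACF can be illustrated with a direct (linear-lag) autocorrelation, g(τ) ∝ ⟨I(t)·I(t+τ)⟩; this plain-Python sketch shows the arithmetic only, not the streaming hardware implementation:

```python
def autocorrelate(intensity, max_lag):
    """Unnormalized direct autocorrelation of a per-pixel count stream."""
    n = len(intensity)
    acf = []
    for lag in range(max_lag + 1):
        s = sum(intensity[t] * intensity[t + lag] for t in range(n - lag))
        acf.append(s / (n - lag))   # average over the available products
    return acf

# Alternating counts: the correlation recovers the period-2 structure.
counts = [1, 2, 1, 2, 1, 2, 1, 2]
print(autocorrelate(counts, 2))  # [2.5, 2.0, 2.5]
```

Computing this on the FPGA per pixel is what collapses the output bandwidth: only the accumulated ACF bins leave the detector, not every photon frame.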

  15. The rocket-and-wire triggering process: Channel-base currents and ground-level electric fields

    NASA Astrophysics Data System (ADS)

    Ngin, Terry Keo

Rocket-and-wire triggered lightning flashes were studied from 2011 to 2013 at the International Center for Lightning Research and Testing. Ground-level electric fields and channel-base currents were recorded for 79 rocket launches. An eight-station network of electric field meters along with a milliampere-scale wire-base current measurement and a high-speed video record of the wire ascent allowed the calculation and analysis of the trigger-wire line charge density, generally found to be in the μC m⁻¹ to hundreds of μC m⁻¹ range and to increase quadratically with height. The wire-base currents collected during the wire ascent here are the most comprehensive in the literature to date. The trigger-wire line charge density, electric field at ground level, and characteristics of precursor pulses at the wire tip were examined to determine their usefulness in predicting the success or failure of a triggered-lightning attempt. The usefulness of the PICASSO model of space charge evolution from ground, originally developed by researchers at Paul Sabatier University in the 1980s, as a triggering criterion was also evaluated. An electrostatic model of the corona sheath around the trigger-wire was developed in order to estimate the radial extent of the corona sheath and the charge distribution within the corona sheath as a function of measured electric fields aloft taken from the published literature. The most sensitive measurements to date of channel-base current flowing prior to subsequent return strokes, where the current had generally been considered to be zero, were collected and are analyzed here. The channel-base current before return strokes was found to average 5.3 mA with a 2.8 mA standard deviation prior to 120 measured return strokes.
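As an aside on the quadratic height dependence: if the line charge density grows as λ(h) = k·h², the total charge on a wire of height H follows by integration as Q = k·H³/3. The constant k and the wire height below are purely illustrative, not the measured ICLRT values:

```python
def line_charge_density(h, k=1e-9):
    """lambda(h) = k*h**2 (C/m); k (C/m^3) is a hypothetical constant."""
    return k * h * h

def total_wire_charge(H, k=1e-9, n=10000):
    """Midpoint-rule integral of lambda(h) over the wire, 0 to H."""
    dh = H / n
    return sum(line_charge_density((i + 0.5) * dh, k) for i in range(n)) * dh

H = 300.0  # wire height in metres (illustrative)
q = total_wire_charge(H)
print(round(q, 6))  # close to the analytic value k*H**3/3 = 0.009 C
```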

  16. The use of content addressable memories in the level 2 trigger for the CLAS detector at CEBAF

    SciTech Connect

    Doughty, D.C. Jr.; Hodson, R.F.; Allgood, D.; Bickley, M.; Campbell, S.; Putnam, T.; Spivak, R.; Lemon, S.; Wilson, W.C.

    1996-02-01

The Level 2 trigger in the CLAS detector will find tracks and associate a momentum and angle with each track within 2 μs after the event. This is done through a hierarchical track-finding design in which track segments are found in each drift chamber axial superlayer. An array of 384 custom content addressable (or associative) memories (CAMs) uses independent subfield matching to link these track segments into roads. The track parameters corresponding to each found road are then looked up in a separate memory. The authors present the overall architecture of the Level 2 trigger, the details of how the CAM chip links track segments to find roads, and report on the performance of the prototype CAM chips.
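A software model of the road-linking step (not the CAM hardware itself) might look like this; the road patterns are invented for illustration:

```python
# Each road is a tuple of expected segment identifiers, one per axial
# superlayer; a road "matches" when every one of its segments was found.
# In hardware, all roads are compared in parallel by the CAM array.
ROADS = {
    # road id -> (superlayer 0 segment, superlayer 1, superlayer 2)
    0: (3, 7, 12),
    1: (3, 8, 14),
    2: (5, 9, 16),
}

def match_roads(segments_per_superlayer):
    """Return ids of roads whose every superlayer segment was found."""
    return [rid for rid, pattern in ROADS.items()
            if all(s in segments_per_superlayer[i]
                   for i, s in enumerate(pattern))]

# Segments found per superlayer in one event:
event = [{3, 5}, {7, 9}, {12, 15}]
print(match_roads(event))  # [0]
```

The matched road id would then index the separate parameter memory that returns the momentum and angle.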

  17. Predicting Trigger Level for Ice Jam Flooding of the lower Mohawk River using LiDAR and GIS

    NASA Astrophysics Data System (ADS)

    Foster, J.; Marsellos, A.; Garver, J.

    2011-12-01

Ice jams are an annual occurrence along the Mohawk River in upstate New York. The jams commonly result in significant flooding, especially when the progress of the ice is impeded by obstructions to the channel and flood plain. To minimize flooding hazards it is critical to know the trigger level of flooding so that we can better understand chronic jam points and simulate flooding events as jams occur along the lower Mohawk. A better understanding of jamming and trigger points may facilitate measures to reduce flooding and avoid the costly damage associated with these hazards. To determine the flood trigger level for one segment of the lower Mohawk we used airborne LiDAR elevation data to construct a digital elevation model and simulate a flooding event. The flood simulation using a LiDAR elevation model allows accurate water level measurements for determining trigger levels of ice dam flooding. The study area comprises three sections of the lower Mohawk River from the (Before location) to the (After location), which are constrained by lock stations centered at the New York State Canal System Lock 9 (E9 Lock) and the B&M Rail Bridge at the Schenectady International (SI) Plant. This area is notorious for ice jams, including one that caused a major flooding event on January 25th, 2010, with flood levels of 74.4 m in the upper portion of the second section of the study area (Lock 9) and 73.4 m in the lower portion (SI plant). Minimum and maximum elevation levels were found to determine the values at which upstream water builds up and when flooding occurs. From these values, we are able to predict flooding as the ice jam builds up and breaks while it progresses downstream. Similar methodology is applied to find the trigger points for flooding along other sections of the Mohawk River constrained by lock stations, and it may provide critical knowledge as to how to better manage the hazard of flooding due to ice jams.
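The core of such a DEM-based flood simulation is a per-cell threshold test against a trial water level; a toy sketch with made-up elevations (the study used LiDAR-derived values):

```python
def flooded_cells(dem, water_level):
    """Return (row, col) of DEM cells at or below the trial water level."""
    return [(r, c)
            for r, row in enumerate(dem)
            for c, z in enumerate(row)
            if z <= water_level]

# Hypothetical 2x2 elevation grid in metres; 74.4 m matches the reported
# 2010 flood level near Lock 9.
dem = [[75.0, 74.1],
       [73.9, 73.2]]
print(flooded_cells(dem, 74.4))  # [(0, 1), (1, 0), (1, 1)]
```

Raising the trial level until flow paths connect to protected areas gives the trigger level for that river segment.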

  18. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. VHDL and a synchronous design method are used to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
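The three fusion rules compared above can be sketched pixel-wise (the paper implements them in VHDL; this Python version shows only the arithmetic):

```python
def fuse(vis, ir, mode="avg", w=0.5):
    """Fuse two equally sized grayscale images pixel by pixel."""
    ops = {
        "avg": lambda a, b: w * a + (1 - w) * b,  # gray-scale weighted average
        "max": lambda a, b: max(a, b),            # maximum selection
        "min": lambda a, b: min(a, b),            # minimum selection
    }
    op = ops[mode]
    return [[op(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(vis, ir)]

# One-row example images: visible-light and infrared pixel values.
vis = [[100, 200]]
ir  = [[180,  60]]
print(fuse(vis, ir, "avg"))  # [[140.0, 130.0]]
print(fuse(vis, ir, "max"))  # [[180, 200]]
```

In hardware each rule maps to a small combinational circuit applied per pixel clock, which is why these particular operators suit an RTL implementation.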

  19. FPGA based pulsed NQR spectrometer

    NASA Astrophysics Data System (ADS)

    Hemnani, Preeti; Rajarajan, A. K.; Joshi, Gopal; Motiwala, Paresh D.; Ravindranath, S. V. G.

    2014-04-01

An NQR spectrometer for the frequency range of 1 MHz to 5 MHz has been designed, constructed, and tested using an FPGA module. It consists of four modules, viz. transmitter, probe, receiver, and a computer-controlled (FPGA and software) module containing the frequency synthesizer, pulse programmer, mixer, detection, and display. The instrument is capable of exciting nuclei with a power of 200 W and can detect signals of a few microvolts in strength. The 14N signal from NaNO2 has been observed with the expected signal strength.

  20. Digital electronics for the inclusion of shower max and preshower wire data in the CDF second-level trigger

    SciTech Connect

    Dawson, J.W.; Byrum, K.L.; Haberichter, W.N.; Nodulman, L.J.; Wicklund, A.B.; Turner, K.J.; Gerdes, D.W.

    1993-07-01

    As part of the upgrade program at CDF, electronics has been built to bring the shower max (CES) and preshower (CPR) data into the trigger at level 2. After each crossing, 384 bits from shower max and 192 from the preshower wires are latched. Data from tracks are bussed to this module to provide the wire address and momentum which are then successively compared to the wire data in large look-up tables. Approximately 50 nanoseconds is required to determine a match, write the results in FIFO, and make the results available to track memory. Monte Carlo analysis has indicated that an increase in efficiency of a factor of three in triggering on b decays will be achieved with this hardware.
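The look-up-table matching step can be modeled roughly as follows; the table contents, addresses, and bit widths are invented for illustration, not the CDF values:

```python
# For each level-2 track, the wire address and momentum bin index a
# look-up table whose entry gives the wire bit pattern expected for a
# real electron/photon shower; the latched shower-max bits are then
# compared against it.
LOOKUP = {
    # (wire_address, momentum_bin) -> expected wire bit pattern
    (12, 0): 0b0110,
    (12, 1): 0b0011,
}

def level2_match(track, latched_bits):
    """True when all expected wires for this track fired."""
    expected = LOOKUP.get(track)
    return expected is not None and (latched_bits & expected) == expected

print(level2_match((12, 1), 0b0111))  # True
print(level2_match((12, 0), 0b0011))  # False
```

In the hardware this comparison completes in roughly 50 ns per track, with the result written to a FIFO for the track memory.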

  1. Performance of Tracking, b-tagging and Jet/MET reconstruction at the CMS High Level Trigger

    NASA Astrophysics Data System (ADS)

    Tosi, Mia

    2015-12-01

The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. In 2015, the center-of-mass energy of proton-proton collisions will reach 13 TeV at an unprecedented luminosity of 1 × 10³⁴ cm⁻²s⁻¹. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Tracking algorithms are widely used in the HLT, in object reconstruction through particle-flow techniques as well as in the identification of b-jets and in lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II, with a large contribution from out-of-time particles. To cope with these tougher conditions, the tracking and vertexing techniques used in 2012 have been substantially improved in timing and efficiency in order to keep the physics reach at the level of Run I conditions. We present the performance of these newly developed algorithms, discussing their impact on the b-tagging performance as well as on jet and missing transverse energy reconstruction.

  2. Impact of sea-level rise on earthquake and landslide triggering offshore the Alentejo margin (SW Iberia)

    NASA Astrophysics Data System (ADS)

    Neves, M. C.; Roque, C.; Luttrell, K. M.; Vázquez, J. T.; Alonso, B.

    2016-07-01

    Earthquakes and submarine landslides are recurrent and widespread manifestations of fault activity offshore SW Iberia. The present work tests the effects of sea-level rise on offshore fault systems using Coulomb stress change calculations across the Alentejo margin. Large-scale faults capable of generating large earthquakes and tsunamis in the region, especially NE-SW trending thrusts and WNW-ESE trending dextral strike-slip faults imaged at basement depths, are either blocked or unaffected by flexural effects related to sea-level changes. Large-magnitude earthquakes occurring along these structures may, therefore, be less frequent during periods of sea-level rise. In contrast, sea-level rise promotes shallow fault ruptures within the sedimentary sequence along the continental slope and upper rise within distances of <100 km from the coast. The results suggest that the occurrence of continental slope failures may either increase (if triggered by shallow fault ruptures) or decrease (if triggered by deep fault ruptures) as a result of sea-level rise. Moreover, observations of slope failures affecting the area of the Sines contourite drift highlight the role of sediment properties as preconditioning factors in this region.
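Coulomb stress change calculations of this kind evaluate ΔCFS = Δτ + μ′·Δσn on a given fault plane, where Δτ is the shear stress change in the slip direction, Δσn the normal stress change (positive here meaning unclamping), and μ′ an effective friction coefficient; positive ΔCFS brings the fault closer to failure. The numbers below are illustrative only, not values from the Alentejo margin study:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change in MPa.

    d_shear:  shear stress change, positive in the slip direction (MPa)
    d_normal: normal stress change, positive = unclamping (MPa)
    mu_eff:   effective friction coefficient (hypothetical value)
    """
    return d_shear + mu_eff * d_normal

# A flexural load that adds a little shear but clamps the fault:
print(round(coulomb_stress_change(0.02, -0.01), 3))  # 0.016
```

Evaluating the sign of ΔCFS across fault depth is what distinguishes the "blocked" deep thrusts from the shallow ruptures promoted by sea-level rise.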

  3. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce; Rogez, Francois; Rosen, Paul; Shah, Biren; Taft, Stephanie

    2004-01-01

We present a real-time, high-performance, fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we discuss the integrated design approach, from top-level algorithm specifications and system requirements, through design methodology, functional verification, and performance validation, down to hardware design and implementation.

  4. CMS software architecture. Software framework, services and persistency in high level trigger, reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Innocente, V.; Silvestris, L.; Stickland, D.; CMS Software Group

    2001-10-01

This paper describes the design of a resilient and flexible software architecture that has been developed to satisfy the data processing requirements of a large HEP experiment, CMS, currently being constructed at the LHC machine at CERN. We describe various components of a software framework that allows integration of physics modules and which can be easily adapted for use in different processing environments, both real-time (online trigger) and offline (event reconstruction and analysis). Features such as the mechanisms for scheduling algorithms, configuring the application and managing the dependencies among modules are described in detail. In particular, a major effort has been placed on providing a service for managing persistent data, and the experience using a commercial ODBMS (Objectivity/DB) is therefore described in detail.

  5. STRS SpaceWire FPGA Module

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.

    2011-01-01

    An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA s Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, but it is also used to give software direct access to what is transmitted and received through the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.

  6. Triggering for Magnetic Field Measurements of the LCLS Undulators

    SciTech Connect

    Hacker, Kirsten

    2010-12-13

A triggering system for magnetic field measurements of the LCLS undulators has been built with a National Instruments PXI-1002 and a Xilinx FPGA board. The system generates single triggers at specified positions, regardless of encoder sensor jitter about a linear scale.

7. ENABLE -- A systolic 2nd level trigger processor for track finding and e/π discrimination for ATLAS/LHC

    SciTech Connect

Klefenz, F.; Noffz, K.H.; Zoz, R. (Lehrstuhl fuer Informatik V); Maenner, R. (Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen)

    1994-08-01

The Enable Machine is a systolic 2nd level trigger processor for the transition radiation detector (TRD) of ATLAS/LHC. It is developed within the EAST/RD-11 collaboration at CERN. The task of the processor is to find electron tracks and to reject pion tracks according to the EAST benchmark algorithm in less than 10 μs. Tracks are identified by template matching in a (ψ,z) region of interest (RoI) selected by a 1st level trigger. In the (ψ,z) plane tracks of constant curvature are straight lines. The relevant lines form mask templates. Track identification is done by histogramming the coincidences of the templates and the RoI data for each possible track. The Enable Machine is an array processor that handles tracks of the same slope in parallel, and tracks of different slope in a pipeline. It is composed of two units, the Enable histogrammer unit and the Enable z/ψ-board. The interface daughter board is equipped with a HIPPI interface developed at JINR Dubna, and Xilinx 'corner turning' data converter chips. Enable uses programmable gate arrays (Xilinx) for histogramming and synchronous SRAMs for pattern storage. With a clock rate of 40 MHz the trigger decision time is 6.5 μs and the latency 7.0 μs. The Enable Machine is scalable in the RoI size as well as in the number of tracks processed. It can be adapted to different recognition tasks and detector setups. The prototype of the Enable Machine was tested in a beam time of the RD6 collaboration at CERN in October 1993.
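The histogramming scheme can be modeled in software as counting template/RoI coincidences per candidate slope; the templates and threshold below are hypothetical:

```python
# Each candidate slope has a mask template: the (layer, bin) cells a
# straight line of that slope would cross in the (psi, z) plane. The
# coincidence count between a template and the RoI hits is histogrammed,
# and a track candidate fires when the count reaches threshold.
TEMPLATES = {
    # slope id -> expected hit cell per detector layer (invented values)
    "slope_a": [(0, 2), (1, 3), (2, 4), (3, 5)],
    "slope_b": [(0, 2), (1, 2), (2, 2), (3, 2)],
}

def histogram_tracks(roi_hits, threshold=3):
    """Count template/RoI coincidences and keep slopes above threshold."""
    hist = {name: sum(1 for cell in template if cell in roi_hits)
            for name, template in TEMPLATES.items()}
    return {name: n for name, n in hist.items() if n >= threshold}

hits = {(0, 2), (1, 3), (2, 4), (3, 9)}
print(histogram_tracks(hits))  # {'slope_a': 3}
```

The hardware evaluates all same-slope templates in parallel and pipelines the different slopes, which is what makes the 6.5 μs decision time possible.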

  8. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain, and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and is distributed for academic purposes.

  9. Small Microprocessor for ASIC or FPGA Implementation

    NASA Technical Reports Server (NTRS)

    Kleyner, Igor; Katz, Richard; Blair-Smith, Hugh

    2011-01-01

A small microprocessor, suitable for use in applications in which high reliability is required, was designed to be implemented in either an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The design is based on a commercial microprocessor architecture, making it possible to use available software development tools and thereby to implement the microprocessor at relatively low cost. The design features enhancements, including trapping during execution of illegal instructions. The internal structure of the design yields relatively high performance, with a significant decrease, relative to other microprocessors that perform the same functions, in the number of microcycles needed to execute macroinstructions. The problem meant to be solved in designing this microprocessor was to provide a modest level of computational capability in a general-purpose processor while adding as little as possible to the power demand, size, and weight of a system into which the microprocessor would be incorporated. As designed, this microprocessor consumes very little power and occupies only a small portion of a typical modern ASIC or FPGA. The microprocessor operates at a rate of about 4 million instructions per second with a clock frequency of 20 MHz.

  10. Correlation of serum estradiol level on the day of ovulation trigger with the reproductive outcome of intracytoplasmic sperm injection

    PubMed Central

    Siddhartha, N.; Reddy, N. Sanjeeva; Pandurangi, Monna; Tamizharasi, M.; Radha, V.; Kanimozhi, K.

    2016-01-01

BACKGROUND: Serum estradiol (E2) levels are measured in in vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI), to assess the ovarian response and to predict ovarian hyperstimulation syndrome. The impact of peak E2 levels on IVF-ICSI outcome was found to be inconsistent in the previous studies. AIM: To evaluate the impact of the serum E2 levels on the day of ovulation trigger with the reproductive outcome of ICSI. SETTINGS AND DESIGN: Retrospective observational study. ART Center, at a Tertiary Care University Teaching Hospital. SUBJECTS AND METHODS: Eighty-nine infertile women, who underwent ICSI with fresh embryo transfer over a period of 3 years, were included in the study. The study subjects were grouped based on the serum E2 level on the day of ovulation trigger: Group I: <1000 pg/ml, Group II: 1000–2000 pg/ml, Group III: 2000.1–3000 pg/ml, Group IV: 3000.1–4000 pg/ml, and Group V: >4000 pg/ml. The baseline characteristics and controlled ovarian hyperstimulation (COH) outcome were compared among the study groups. STATISTICAL ANALYSIS USED: Chi-square test, Student's t-test, ANOVA, and logistic regression analysis. RESULTS: The study groups were comparable with regard to age, body mass index, and ovarian reserve. Group V had a significantly higher number of oocytes retrieved than Groups I and II (18.90 vs. 11.36 and 11.33; P = 0.009). Group IV showed a significantly higher fertilization rate than Groups I, III, and V (92.23 vs. 77.43, 75.52, 75.73; P = 0.028). There were no significant differences in the implantation rates (P = 0.368) and pregnancy rates (P = 0.368). CONCLUSION: Higher E2 levels on the day of ovulation trigger would predict increased oocyte yield after COH. E2 levels in the range of 3000–4000 pg/ml would probably predict increased fertilization and pregnancies in ICSI cycles. PMID:27110074

  11. FPGA curved track fitters and a multiplierless fitter scheme

    SciTech Connect

    Wu, Jinyuan; Wang, M.; Gottschalk, E.; Shi, Z.; /Fermilab

    2007-01-01

    The standard least-squares curved track fitting process is tailored for FPGA implementation so that only integer multiplications and additions are needed. To further eliminate multiplication, coefficients in the fitting matrices are carefully chosen so that only shift and accumulation operations are used in the process. Comparison in an example application shows that the fitting errors of the multiplierless implementation are less than 4% bigger than the fitting errors of the exact least-squares algorithm. The implementation is suitable for low-cost, low-power applications in high energy physics detector trigger systems.
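The multiplierless idea rests on choosing fit coefficients whose binary representations have few set bits, so each product reduces to a couple of shifts and adds; a small sketch with an arbitrarily chosen coefficient of 10:

```python
def mul10_shift_add(x):
    """Compute 10*x without a multiplier: 10 = 0b1010, so
    10*x = 8*x + 2*x = (x << 3) + (x << 1)."""
    return (x << 3) + (x << 1)

def weighted_sum(samples):
    """Toy accumulation step of a fit: only shift-add products are used,
    as in the FPGA fitter described above (coefficient is illustrative)."""
    return sum(mul10_shift_add(s) for s in samples)

print(mul10_shift_add(7))        # 70
print(weighted_sum([1, 2, 3]))   # 60
```

Restricting coefficients this way trades a few percent of fit accuracy (under 4% extra error in the cited example) for the removal of all hardware multipliers.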

  12. Target-triggered triple isothermal cascade amplification strategy for ultrasensitive microRNA-21 detection at sub-attomole level.

    PubMed

    Cheng, Fang-Fang; Jiang, Nan; Li, Xiaoyan; Zhang, Li; Hu, Lihui; Chen, Xiaojun; Jiang, Li-Ping; Abdel-Halim, E S; Zhu, Jun-Jie

    2016-11-15

MicroRNA-21 (miR-21) is a promising diagnostic biomarker for breast cancer screening and disease progression; thus a method for the sensitive and selective detection of miR-21 is vital to its clinical diagnosis. Herein, we develop a novel method to quantify miR-21 at levels as low as attomolar by a target-triggered triple isothermal cascade amplification (3TICA) strategy. An ingenious unimolecular DNA template with three functional parts has been designed: the 5'-fragment as the miR-21 recognition unit, the middle fragment as the miR-21 analogue amplification unit, and the 3'-fragment as the 8-17 DNAzyme production unit. Triggered by miR-21 and accompanied by a polymerase-nicking enzyme cascade, new miR-21 analogues are autonomously generated for the successive re-triggering and cleavage process. Simultaneously, the 8-17 DNAzyme-containing sequence could be exponentially released and activated for the second cyclic cleavage toward a specific ribonucleotide (rA)-containing substrate, inducing a remarkably amplified generation of HRP-mimicking DNAzyme in the presence of hemin. Finally, the amperometric technique was used to record the catalytic reduction current of 3,3',5,5'-tetramethylbenzidine (TMB) in the presence of H2O2. The increase in the steady-state current was proportional to the miR-21 concentration from 1 aM to 100 pM. An ultra-low detection limit of 0.5 aM was achieved, with excellent selectivity, even discriminating between a 1-base mismatched target and miR-21. This simple and cost-effective 3TICA strategy is promising for the detection of any short oligonucleotide, simply by altering the target recognition unit in the template sequence. PMID:27311114

  13. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  14. Using community level strategies to reduce asthma attacks triggered by outdoor air pollution: a case crossover analysis

    PubMed Central

    2014-01-01

    pollutant exposure is important to effectively attribute risk for triggering of an asthma attack, especially as concentrations increase. Improved asthma action plans for Houston individuals should warn of these pollutants, their trends, correlation and cumulative effects. Our Houston based study identifies nitrogen dioxide levels and the three-day exposure to ozone to be of concern whereas current single pollutant based national standards do not. PMID:25012280

  15. Critical width of tidal flats triggers marsh collapse in the absence of sea-level rise.

    PubMed

    Mariotti, Giulio; Fagherazzi, Sergio

    2013-04-01

    High rates of wave-induced erosion along salt marsh boundaries challenge the idea that marsh survival is dictated by the competition between vertical sediment accretion and relative sea-level rise. Because waves pounding marshes are often locally generated in enclosed basins, the depth and width of surrounding tidal flats have a pivotal control on marsh erosion. Here, we show the existence of a threshold width for tidal flats bordering salt marshes. Once this threshold is exceeded, irreversible marsh erosion takes place even in the absence of sea-level rise. This catastrophic collapse occurs because of the positive feedbacks among tidal flat widening by wave-induced marsh erosion, tidal flat deepening driven by wave bed shear stress, and local wind wave generation. The threshold width is determined by analyzing the 50-y evolution of 54 marsh basins along the US Atlantic Coast. The presence of a critical basin width is predicted by a dynamic model that accounts for both horizontal marsh migration and vertical adjustment of marshes and tidal flats. Variability in sediment supply, rather than in relative sea-level rise or wind regime, explains the different critical widths, and hence erosion vulnerability, found at different sites. We conclude that sediment starvation of coastlines produced by river dredging and damming is a major anthropogenic driver of marsh loss at the study sites and generates effects at least comparable to the accelerating sea-level rise due to global warming. PMID:23513219

  16. Critical width of tidal flats triggers marsh collapse in the absence of sea-level rise

    PubMed Central

    Mariotti, Giulio; Fagherazzi, Sergio

    2013-01-01

    High rates of wave-induced erosion along salt marsh boundaries challenge the idea that marsh survival is dictated by the competition between vertical sediment accretion and relative sea-level rise. Because waves pounding marshes are often locally generated in enclosed basins, the depth and width of surrounding tidal flats have a pivotal control on marsh erosion. Here, we show the existence of a threshold width for tidal flats bordering salt marshes. Once this threshold is exceeded, irreversible marsh erosion takes place even in the absence of sea-level rise. This catastrophic collapse occurs because of the positive feedbacks among tidal flat widening by wave-induced marsh erosion, tidal flat deepening driven by wave bed shear stress, and local wind wave generation. The threshold width is determined by analyzing the 50-y evolution of 54 marsh basins along the US Atlantic Coast. The presence of a critical basin width is predicted by a dynamic model that accounts for both horizontal marsh migration and vertical adjustment of marshes and tidal flats. Variability in sediment supply, rather than in relative sea-level rise or wind regime, explains the different critical widths, and hence erosion vulnerability, found at different sites. We conclude that sediment starvation of coastlines produced by river dredging and damming is a major anthropogenic driver of marsh loss at the study sites and generates effects at least comparable to the accelerating sea-level rise due to global warming. PMID:23513219

  17. Between Product Development and Mass Production: Tensions as Triggers for Concept-Level Learning

    ERIC Educational Resources Information Center

    Jalonen, Meri; Ristimäki, Päivi; Toiviainen, Hanna; Pulkkis, Anneli; Lohtander, Mika

    2016-01-01

    Purpose: This paper aims to analyze learning in organizational transformations by focusing on concept-level tensions faced in two young companies, which were searching for a reorientation of activity with a production network between innovative product development and efficient mass production. Design/methodology/approach: An intervention-based…

  18. Neurotoxicity and aggressiveness triggered by low-level lead in children: a review.

    PubMed

    Olympio, Kelly Polido Kaneshiro; Gonçalves, Claudia; Günther, Wanda Maria Risso; Bechara, Etelvino José Henriques

    2009-09-01

    Lead-induced neurotoxicity acquired by low-level long-term exposure has special relevance for children. A plethora of recent reports has demonstrated a direct link between low-level lead exposure and deficits in the neurobehavioral-cognitive performance manifested from childhood through adolescence. In many studies, aggressiveness and delinquency have also been suggested as symptoms of lead poisoning. Several environmental, occupational and domestic sources of contaminant lead and consequent health risks are largely identified and understood, but the occurrences of lead poisoning remain numerous. There is an urgent need for public health policies to prevent lead poisoning so as to reduce individual and societal damages and losses. In this paper we describe unsuspected sources of contaminant lead, discuss the economic losses and urban violence possibly associated with lead contamination and review the molecular basis of lead-induced neurotoxicity, emphasizing its effects on the social behavior, delinquency and IQ of children and adolescents. PMID:20058837

  19. Fast particles identification in programmable form at level-0 trigger by means of the 3D-Flow system

    SciTech Connect

    Crosetto, Dario B.

    1998-10-30

    The 3D-Flow Processor system is a new, technology-independent concept in very fast, real-time system architectures. Based on either an FPGA or an ASIC implementation, it can address, in a fully programmable manner, applications where commercially available processors would fail because of throughput requirements. Possible applications include filtering algorithms (pattern recognition) on the input of multiple sensors, as well as moving any input validated by these filtering algorithms to a single output channel. Both operations can easily be implemented on a 3D-Flow system to achieve a real-time processing system with a very short lag time. This system can be built either with off-the-shelf FPGAs or, for higher data rates, with CMOS chips containing 4 to 16 processors each. The basic building block of the system, a 3D-Flow processor, has been successfully designed in VHDL code written in "Generic HDL" (mostly made of reusable blocks that are synthesizable in different technologies, or FPGAs), to produce a netlist for a four-processor ASIC featuring 0.35 micron CBA (Cell Based Array) technology at 3.3 Volts, with 884 mW power dissipation at 60 MHz and a 63.75 mm² die size. The same VHDL code has been targeted to three FPGA manufacturers (Altera EPF10K250A, ORCA-Lucent Technologies OR3T165 and Xilinx XCV1000). A complete set of software tools, the 3D-Flow System Manager, equally applicable to ASIC or FPGA implementations, has been produced to provide full system simulation, application development, real-time monitoring, and run-time fault recovery. Today's technology can accommodate 16 processors per chip in a medium-size die, at a cost per processor of less than $5 at current silicon die-size costs.

  20. Characterization of the SIDDHARTA-2 second level trigger detector prototype based on scintillators coupled to a prism reflector light guide

    NASA Astrophysics Data System (ADS)

    Bazzi, M.; Berucci, C.; Curceanu, C.; d'Uffizi, A.; Iliescu, M.; Sbardella, E.; Scordo, A.; Shi, H.; Sirghi, F.; Tatsuno, H.; Tucakovic, I.

    2013-11-01

    The SIDDHARTA experiment at the DAΦNE collider of LNF-INFN performed in 2009 high precision measurements of kaonic hydrogen and kaonic helium atomic transitions. To determine the isospin-dependent K̄N scattering lengths, however, an important measurement, that of kaonic deuterium, is still missing. Due to the very low expected yield of the kaonic deuterium Kα transition, a major improvement in the signal-over-background ratio is needed. To achieve a further background reduction, a second level trigger, based on the detection of charged pions produced by K⁻ absorption on various materials, including the target gas nuclei, is planned for the future SIDDHARTA-2 experiment. Because of shielding-related geometrical limitations, only a single side of the scintillators can be accessed; in order to reach good time resolution and uniform efficiency, readout at both ends was therefore realized with complex multi-reflection light guides. In this work, the results of tests on a detector prototype, performed at the πM-1 beamline of the Paul Scherrer Institute (Switzerland), are presented. The goal of the tests was to determine the efficiency and the time resolution for pions, which should comply with the minimum required values of 90% and 1 ns (FWHM), respectively. The obtained results, 96% efficiency and 750 ps FWHM for 170 MeV/c momentum pions, qualify the prototype as an excellent second level trigger for the SIDDHARTA-2 experiment. Similar results for 170 MeV/c momentum muons and electrons are also presented.

  1. An alert system for triggering different levels of coastal management urgency: Tunisia case study using rapid environmental assessment data.

    PubMed

    Price, A R G; Jaoui, K; Pearson, M P; Jeudy de Grissac, A

    2014-03-15

    Rapid environmental assessment (REA) involves concurrently scoring the abundances of ecosystems/species groups and the magnitude of pressures on the same logarithmic (0-6) assessment scale. We demonstrate the utility of REA data for an alert system identifying different levels of coastal management concern: thresholds set for abundances/magnitudes trigger proposed responses when crossed. Kerkennah, Tunisia, our case study, has significant natural assets (e.g. exceptional seagrass and invertebrate abundances) subjected to varying levels of disturbance and management concern. Using the REA thresholds, fishing, green algae/eutrophication and oil occurred at 'low' levels (scores 0-1): management not (currently) necessary. Construction and wood litter prevailed at 'moderate' levels (scores 2-4): management alerted for (further) monitoring. Solid waste densities were 'high' (scores 5-6): management alerted for action; quantities of rubbish were substantial (20-200 items m⁻¹ of beach) but not unprecedented. REA is considered a robust methodology, complementary to other rapid assessment techniques, environmental frameworks and indicators of ecosystem condition. PMID:24512758
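
The three-tier alert logic described in the abstract can be sketched as a simple threshold mapping. The boundaries (0-1 low, 2-4 moderate, 5-6 high) follow the abstract; the function name and response wording are illustrative assumptions, not taken from the paper.

```python
def alert_level(rea_score: int) -> str:
    """Map a 0-6 logarithmic REA pressure score to a management response."""
    if not 0 <= rea_score <= 6:
        raise ValueError("REA scores lie on a 0-6 scale")
    if rea_score <= 1:
        return "low: management not currently necessary"
    if rea_score <= 4:
        return "moderate: alert management for further monitoring"
    return "high: alert management for action"

# Hypothetical scores for the pressures named in the case study:
for pressure, score in {"fishing": 1, "construction": 3, "solid waste": 5}.items():
    print(pressure, "->", alert_level(score))
```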

  2. The Development of FPGA-Based Pseudo-Iterative Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Drueke, Elizabeth; Fisher, Wade; Plucinski, Pawel

    2016-03-01

    The Large Hadron Collider (LHC) in Geneva, Switzerland, is set to undergo major upgrades in 2025 in the form of the High-Luminosity Large Hadron Collider (HL-LHC). In particular, several hardware upgrades are proposed to the ATLAS detector, one of the two general purpose detectors. These hardware upgrades include, but are not limited to, a new hardware-level clustering algorithm, to be performed by a field programmable gate array, or FPGA. In this study, we develop that clustering algorithm and compare the output to a Python-implemented topoclustering algorithm developed at the University of Oregon. Here, we present the agreement between the FPGA output and expected output, with particular attention to the time required by the FPGA to complete the algorithm and other limitations set by the FPGA itself.
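
Hardware clustering of the kind the abstract targets can be illustrated with a toy seed-and-grow pass: contiguous above-threshold cells are grouped, and a group is kept only if it contains a seed cell. This 1-D sketch is a simplified stand-in for illustration, not the ATLAS topoclustering algorithm; thresholds and names are assumptions.

```python
def cluster_cells(energies, seed_thr=4.0, cell_thr=1.0):
    """Group contiguous cells above cell_thr; keep groups containing a seed."""
    clusters, current = [], []
    for i, e in enumerate(energies):
        if e >= cell_thr:
            current.append(i)          # extend the current candidate cluster
        else:
            if current and max(energies[j] for j in current) >= seed_thr:
                clusters.append(current)   # promote: it contains a seed cell
            current = []
    if current and max(energies[j] for j in current) >= seed_thr:
        clusters.append(current)
    return clusters

cells = [0.2, 1.5, 5.0, 2.0, 0.1, 1.2, 1.3, 0.0, 6.0]
print(cluster_cells(cells))   # -> [[1, 2, 3], [8]]
```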

  3. Validation of an FPGA fault simulator.

    SciTech Connect

    Wirthlin, M. J.; Johnson, D. E.; Graham, P. S.; Caffrey, M. P.

    2003-01-01

    This work describes the radiation testing of a fault simulation tool used to study the behavior of FPGA circuits in the presence of configuration memory upsets. There is increasing interest in the use of Field Programmable Gate Arrays (FPGAs) in space-based applications such as remote sensing [1]. The use of reconfigurable FPGAs within a spacecraft allows the use of digital circuits that are both application-specific and reprogrammable. Unlike application-specific integrated circuits (ASICs), FPGAs can be configured after the spacecraft has been launched. This flexibility allows the same FPGA resources to be used for multiple instruments, missions, or changing spacecraft objectives. Errors in an FPGA design can be resolved by fixing the incorrect design and reconfiguring the FPGA with an updated configuration bitstream. Further, custom circuit designs can be created to avoid FPGA resources that have failed during the course of the spacecraft mission.

  4. Monitoring the data quality of the real-time event reconstruction in the ALICE High Level Trigger

    NASA Astrophysics Data System (ADS)

    Austrheim Erdal, Hege; Richther, Matthias; Szostak, Artur; Toia, Alberica

    2012-12-01

    ALICE is a dedicated heavy-ion experiment at the CERN LHC. The ALICE High Level Trigger (HLT) was designed to select events with desirable physics properties. Data from several of the major subdetectors in ALICE are processed by the HLT for real-time event reconstruction, for instance the Inner Tracking System, the Time Projection Chamber (TPC), the electromagnetic calorimeters, the Transition Radiation Detector and the muon spectrometer. The HLT reconstructs events in real time and thus provides input for trigger algorithms. It is necessary to monitor the quality of the reconstruction, focusing on track and event properties. In addition, the HLT implemented data compression for the TPC during the heavy-ion data taking in 2011 to reduce the data rate from the ALICE detector. The key to the data compression is to store clusters (spacepoints) calculated by the HLT rather than storing raw data; it is thus very important to monitor the cluster finder performance as a way to monitor the data compression. The data monitoring is divided into two stages. The first stage is performed during data taking: part of the HLT production chain is dedicated to online monitoring, and facilities are available in the HLT production cluster for real-time access to the reconstructed events in the ALICE control room. This includes track and event properties, and in addition this facility provides a way to display a small fraction of the reconstructed events in an online display. The second stage is performed after the data have been transferred to permanent storage. After post-processing of the real-time reconstructed data, one can look in more detail at the cluster finder performance and at the quality of the reconstruction of tracks, vertices and vertex position. The monitoring solution is presented in detail, with special attention to the heavy-ion data taking of 2011.

  5. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  6. FPGA design and implementation of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Yang, Zhihui; Zhou, Gang

    2015-12-01

    In this paper, we choose four different variances (1, 3, 6 and 12) to conduct an FPGA design with three kinds of Gaussian filtering algorithm: implementing the Gaussian filter with a Gaussian filter template, approximating the Gaussian filter with mean filtering, and approximating the Gaussian filter with IIR filtering. By waveform simulation and synthesis, we obtain the processing results on the experimental image and the FPGA resource consumption of the three methods. We take the result of the Gaussian filter in MATLAB as the standard against which the error of each result is computed. By comparing the FPGA resources and the error of the FPGA implementation methods, we determine the best FPGA design for a Gaussian filter. Conclusions can be drawn from these results: when the variance is small, the FPGA resources are sufficient to implement the Gaussian filter with a Gaussian filter template, which is the best choice; but when the variance is so large that FPGA resources run out, the Gaussian filter can be approximated with mean or IIR filtering.
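
The mean-filter approximation mentioned above rests on the central limit theorem: repeatedly convolving a box (mean) kernel with itself converges to a Gaussian, which is why a cascade of cheap mean filters can stand in for a true Gaussian template when FPGA resources are scarce. A pure-Python 1-D illustration (function names and parameters are ours, not the paper's):

```python
def convolve(a, b):
    """Full 1-D convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def box_cascade(width, passes):
    """Approximate a Gaussian kernel by self-convolving a mean filter."""
    box = [1.0 / width] * width
    kernel = box
    for _ in range(passes - 1):
        kernel = convolve(kernel, box)
    return kernel

# 4 passes of a width-5 box: variance = 4 * (5**2 - 1) / 12 = 8
k = box_cascade(width=5, passes=4)
print(max(k), sum(k))   # peaked at the centre, normalized to 1
```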

  7. A digital frequency stabilization system of external cavity diode laser based on LabVIEW FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Zhuohuan; Hu, Zhaohui; Qi, Lu; Wang, Tao

    2015-10-01

    Frequency stabilization of external cavity diode lasers plays an important role in physics research, and many laser frequency locking solutions have been proposed. Traditionally, the locking process is accomplished by an analog system, which has a fast feedback control response; however, analog systems are susceptible to environmental effects. In order to improve the automation level and reliability of the frequency stabilization system, we take a grating-feedback external cavity diode laser as the laser source and set up a digital frequency stabilization system based on National Instruments' FPGA (NI FPGA). The system consists of a saturated-absorption frequency stabilization beam path, a differential photoelectric detector, an NI FPGA board and a host computer. Functions such as piezoelectric transducer (PZT) sweeping, atomic saturated-absorption signal acquisition, signal peak identification, error signal generation and laser PZT voltage feedback control are implemented entirely in the LabVIEW FPGA program. Compared with an analog system built from logic gate circuits, the system performs stably and reliably, and the user interface programmed in LabVIEW is friendly. Moreover, owing to its reconfigurability, the LabVIEW program is easily ported to other NI FPGA boards. Most importantly, the system periodically checks the error signal: once an abnormal error signal is detected, the FPGA restarts the frequency stabilization process without manual intervention. From the fluctuation of the error signal of the atomic saturated-absorption spectral line in the locked state, we infer that the laser frequency stability reaches 1 MHz.
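
The sweep-then-lock sequence the abstract describes can be sketched in a toy model: identify the peak of a (here synthetic) absorption signal during the PZT sweep, then hold the laser at that frequency with a proportional-integral correction standing in for the PZT voltage feedback. All numbers, gains and names are illustrative assumptions, not the paper's.

```python
def absorption(freq):
    """Synthetic Lorentzian absorption line centred at freq = 0."""
    return 1.0 / (1.0 + freq ** 2)

def find_peak(sweep):
    """Peak identification: frequency of maximum absorption in the sweep."""
    return max(sweep, key=absorption)

def lock(start_freq, setpoint, kp=0.4, ki=0.05, steps=200):
    """PI servo: drive the frequency error to zero step by step."""
    freq, integral = start_freq, 0.0
    for _ in range(steps):
        error = setpoint - freq               # stand-in for the error signal
        integral += error
        freq += kp * error + ki * integral    # PZT voltage correction
    return freq

sweep = [i * 0.01 - 2.0 for i in range(401)]  # PZT sweep from -2 to +2
setpoint = find_peak(sweep)                   # ~0.0 for this line shape
locked = lock(start_freq=1.5, setpoint=setpoint)
print(round(locked - setpoint, 6))            # residual error after locking
```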

  8. FPGA Simulation Engine for Customized Construction of Neural Microcircuits

    PubMed Central

    Blair, Hugh T.; Cong, Jason; Wu, Di

    2014-01-01

    In this paper we describe an FPGA-based platform for high-performance and low-power simulation of neural microcircuits composed from integrate-and-fire (IAF) neurons. Based on high-level synthesis, our platform uses design templates to map hierarchies of neuron models to logic fabrics. This approach bypasses high design complexity and enables easy optimization and design space exploration. We demonstrate the benefits of our platform by simulating a variety of neural microcircuits that perform oscillatory path integration, which evidence suggests may be a critical building block of the navigation system inside a rodent’s brain. Experiments show that our FPGA simulation engine for oscillatory neural microcircuits can achieve up to 39× speedup compared to software benchmarks on a commodity CPU, and 232× energy reduction compared to an embedded ARM core. PMID:25584120
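
The neuron model the platform simulates can be illustrated with a minimal leaky integrate-and-fire step loop in plain Python rather than FPGA logic. Parameters (threshold, leak, time step) are illustrative assumptions, not taken from the paper.

```python
def simulate_iaf(current, threshold=1.0, leak=0.1, dt=1.0, v_reset=0.0):
    """Integrate input current each step; emit a spike and reset at threshold."""
    v, spikes = v_reset, []
    for t, i_in in enumerate(current):
        v += dt * (i_in - leak * v)   # leaky integration of membrane potential
        if v >= threshold:
            spikes.append(t)          # record spike time
            v = v_reset               # reset after spike
    return spikes

# Constant drive produces regular spiking:
print(simulate_iaf([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```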

  9. Fluoride-Triggered Ring-Opening of Photochromic Diarylpyrans into Merocyanine Dyes: Naked-Eye Sensing in Subppm Levels.

    PubMed

    Mukhopadhyay, Arindam; Maka, Vijay Kumar; Moorthy, Jarugu Narasimha

    2016-09-01

    The fluoride-mediated desilylation reaction has been exploited, for the first time, to trigger ring-opening of photochromic diarylbenzo-/naphthopyrans into highly colored anionic merocyanine dyes with high molar absorptivities, permitting naked-eye sensing. The absorption spectral shifts, i.e., differences in the absorption maxima of the colorless and colored forms, observed for a rationally designed set of silyloxy-substituted diarylpyrans upon fluoride-induced ring opening are remarkably high (330-480 nm) and unknown for any colorimetric probe. In particular, the disilyloxy-substituted diphenylnaphthopyran and its analog, in which the diphenyl groups are fused in the form of fluorene, allow "naked-eye" detection of fluoride at sub-ppm levels (<1.0 ppm) in THF as well as in DMSO-H2O. The sensing is specific for fluoride among various other anions. This approach to colorimetric sensing of fluoride by ring-opening of the otherwise photochromic benzo-/naphthopyrans is heretofore unprecedented. PMID:27447293

  10. Implementation of a level 1 trigger system using high speed serial (VXS) techniques for the 12GeV high luminosity experimental programs at Thomas Jefferson National Accelerator Facility

    SciTech Connect

    C. Cuevas, B. Raydo, H. Dong, A. Gupta, F.J. Barbosa, J. Wilson, W.M. Taylor, E. Jastrzembski, D. Abbott

    2009-11-01

    We demonstrate a hardware and firmware solution for a complete, fully pipelined, multi-crate trigger system that takes advantage of the high speed VXS serial extensions for VME. The trigger system comprises three sections: the front end Crate Trigger Processor (CTP), a global Sub-System Processor (SSP), and a Trigger Supervisor that manages the timing, synchronization and front end event readout. Within a front end crate, trigger information is gathered from each 16-channel, 12-bit Flash ADC module at 4 ns intervals via the VXS backplane and sent to the Crate Trigger Processor. Each CTP receives these 500 MB/s VXS links from the 16 FADC-250 modules, aligns the skewed data inherent in the Aurora protocol, and performs real-time crate-level trigger algorithms. The algorithm results are encoded using a Reed-Solomon technique, and this Level 1 trigger data is transmitted to the SSP over a multi-fiber link, which achieves an aggregate trigger data transfer rate to the global trigger of 8 Gb/s. The SSP receives and decodes the Reed-Solomon error-correcting transmission from each crate, aligns the data, and performs the global level trigger algorithms. The entire trigger system is synchronous and operates at 250 MHz, with the Trigger Supervisor managing not only the front end event readout but also the distribution of the critical timing clocks, synchronization signals, and global trigger signals to each front end readout crate. These signals are distributed to the front end crates on a separate fiber link, and each crate is synchronized using a unique encoding scheme that guarantees each front end crate is synchronous with a fixed latency, independent of the distance between crates. The overall trigger signal latency is <3 μs, and the proposed 12 GeV experiments at Jefferson Lab require up to a 200 kHz Level 1 trigger rate.

  11. Rad-Hard/HI-REL FPGA

    NASA Technical Reports Server (NTRS)

    Wang, Jih-Jong; Cronquist, Brian E.; McGowan, John E.; Katz, Richard B.

    1997-01-01

    The goals for a radiation-hardened (RAD-HARD) and high-reliability (HI-REL) field programmable gate array (FPGA) are described. The first qualified manufacturer list (QML) radiation-hardened FPGAs, the RH1280 and RH1020, were developed. The total radiation dose and single event effects observed on the antifuse FPGA RH1280 are reported. Tradeoffs and limitations in the single event upset hardening are discussed.

  12. Tethered Forth system for FPGA applications

    NASA Astrophysics Data System (ADS)

    Goździkowski, Paweł; Zabołotny, Wojciech M.

    2013-10-01

    This paper presents a tethered Forth system dedicated to testing and debugging of FPGA-based electronic systems. Use of the Forth language allows one to interactively develop and run complex testing or debugging routines. The solution is based on a small, 16-bit soft-core CPU used to implement the Forth Virtual Machine. Thanks to the tethered Forth model it is possible to minimize usage of the internal RAM memory in the FPGA. The function of the intelligent terminal, an essential part of a tethered Forth system, may be fulfilled by a standard PC or by a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware, and may be used as a tool for debugging, testing and even implementing control algorithms for FPGA-based systems.
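
The core of any Forth virtual machine of the kind the soft-core CPU implements is a data stack plus a dictionary of words. A minimal sketch follows; the word set and names are illustrative, and the actual tethered system is far richer (compilation, return stack, terminal protocol).

```python
def run(program, stack=None):
    """Execute a whitespace-separated Forth-like program on a data stack."""
    stack = stack if stack is not None else []
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "drop": lambda s: s.pop(),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
    }
    for token in program.split():
        if token in words:
            words[token](stack)       # execute a dictionary word
        else:
            stack.append(int(token))  # literals are pushed onto the stack
    return stack

print(run("2 3 + dup *"))   # (2+3)^2 -> [25]
```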

  13. A 96-channel FPGA-based time-to-digital converter

    SciTech Connect

    Bogdan, Mircea; Frisch, Henry; Heintz, Mary; Paramonov, Alexander; Sanders, Harold; Chappa, Steve; DeMaat, Robert; Klein, Rod; Miao, Ting; Phillips, Thomas J; Wilson, Peter

    2005-02-01

    We describe an FPGA-based, 96-channel, time-to-digital converter (TDC) intended for use with the Central Outer Tracker (COT) [1] in the CDF Experiment [2] at the Fermilab Tevatron. The COT system is digitized and read out by 315 TDC cards, each serving 96 wires of the chamber. The TDC is physically configured as a 9U VME card. The functionality is almost entirely programmed in firmware in two Altera Stratix FPGAs. The special capabilities of this device are the availability of 840 MHz LVDS inputs, multiple phase-locked clock modules, and abundant memory. The TDC system operates with an input resolution of 1.2 ns, a minimum input pulse width of 4.8 ns and a minimum separation of 4.8 ns between pulses. Each input can accept up to 7 hits per collision. The time-to-digital conversion is done by first sampling each of the 96 inputs in 1.2-ns bins and filling a circular memory; the memory addresses of logical transitions (edges) in the input data are then translated into the time of arrival and width of the COT pulses. Memory pipelines with a depth of 5.5 μs allow deadtime-less operation in the first-level trigger; the data are multiple-buffered to diminish deadtime in the second-level trigger. The complete process of edge detection and filling of buffers for readout takes 12 μs. The TDC VME interface allows a 64-bit Chain Block Transfer of multiple boards in a crate with transfer rates up to 47 MB/s. The TDC also contains a separately programmed data path that produces prompt trigger data every Tevatron crossing. The trigger bits are clocked onto the P3 VME backplane connector with a 22-ns clock for transmission to the trigger. The full TDC design and multi-card test results are described. The physical simplicity ensures low maintenance; implementing the functionality in firmware allows reprogramming for other applications.
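
The edge-to-time translation described above can be sketched in software: the 1.2 ns samples of one wire are scanned for rising and falling transitions, and each pulse becomes an (arrival time, width) pair. This is a pure-Python illustration of the principle only; the real firmware performs it on a circular memory in parallel hardware.

```python
BIN_NS = 1.2  # sampling bin width of the TDC, in nanoseconds

def pulses_from_samples(bits):
    """Return (arrival_ns, width_ns) for each high pulse in a sample stream."""
    pulses, rise = [], None
    for i, b in enumerate(bits):
        prev = bits[i - 1] if i > 0 else 0
        if b and not prev:                          # rising edge
            rise = i
        elif prev and not b and rise is not None:   # falling edge
            pulses.append((rise * BIN_NS, (i - rise) * BIN_NS))
            rise = None
    return pulses

#        time bins: 0  1  2  3  4  5  6  7  8  9
samples = [0, 1, 1, 1, 1, 0, 0, 1, 1, 0]
print(pulses_from_samples(samples))   # two pulses: (time, width) in ns
```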

  14. OpenACC to FPGA: A Framework for Directive-based High-Performance Reconfigurable Computing

    SciTech Connect

    Lee, Seyong; Vetter, Jeffrey S

    2016-01-01

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from the underlying architectures, which facilitates the portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.

  15. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2014-01-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can be easily replaced by any other protocol because of the inherent characteristics of the FPGA-based technology. PMID:24379047


  17. FPGA developments for the SPARTA project

    NASA Astrophysics Data System (ADS)

    Goodsell, S. J.; Fedrigo, E.; Dipper, N. A.; Donaldson, R.; Geng, D.; Myers, R. M.; Saunter, C. D.; Soenke, C.

    2005-08-01

    The European Southern Observatory (ESO) and Durham University's Centre for Advanced Instrumentation (CfAI) are currently designing a standard next-generation Adaptive Optics (AO) Real-Time Control System. This platform, labelled SPARTA 'Standard Platform for Adaptive optics Real-Time Applications', will initially control the AO systems for ESO's 2nd generation VLT instruments, and will scale to implement the initial AO systems for ESO's future 100m telescope OWL. Durham's main task is to develop the Wavefront Sensor (WFS) front end and Statistical Machinery for the SPARTA platform using Field Programmable Gate Arrays (FPGAs). SPARTA takes advantage of an FPGA device to offload the highly parallel, computationally intensive tasks from the system processors, increasing the obtainable control loop frequency and reducing the computational latency in the control system. The WFS pixel stream enters a PMC-hosted FPGA card contained within the SPARTA platform via optical fibres carrying the VITA 17.1 standard 2.5 Gbps serial Front Panel Data Port (sFPDP) protocol. Each FPGA board can receive a maximum of 10 Gbps of data via on-board optical transceivers. The FPGA device reduces WFS frames to gradient vectors before passing the data to the system processors. The FPGA thereby frees the processors to deal with other tasks such as wavefront reconstruction, telemetry and real-time data recording, allowing more complex adaptive control algorithms to be executed. This paper overviews the SPARTA requirements and current platform architecture and Durham's Wavefront Processor FPGA design, and concludes with a plan of future work.
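
    The frame-to-gradient reduction mentioned above amounts to computing a local centroid per wavefront-sensor subaperture. The following is a minimal software sketch assuming a Shack-Hartmann-style intensity-weighted centroid; the abstract does not specify SPARTA's exact algorithm, so the function and its conventions are illustrative assumptions:

```python
def subaperture_gradients(frame, sub):
    """Reduce a square WFS frame to gradient vectors by computing the
    intensity-weighted centroid of each sub x sub subaperture,
    measured relative to the subaperture center."""
    grads = []
    n = len(frame)
    for r0 in range(0, n, sub):
        for c0 in range(0, n, sub):
            tot = sx = sy = 0.0
            for r in range(r0, r0 + sub):
                for c in range(c0, c0 + sub):
                    i = frame[r][c]
                    tot += i
                    # Offsets from the subaperture center.
                    sx += i * (c - c0 - (sub - 1) / 2)
                    sy += i * (r - r0 - (sub - 1) / 2)
            grads.append((sx / tot, sy / tot) if tot else (0.0, 0.0))
    return grads

# A 2x2 frame with one lit pixel: spot displaced right and up.
grads = subaperture_gradients([[0.0, 1.0], [0.0, 0.0]], sub=2)
```

On real hardware this per-subaperture loop is what the FPGA parallelizes across the incoming pixel stream.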

  18. Maternal Vitamin D Level Is Associated with Viral Toll-Like Receptor Triggered IL-10 Response but Not the Risk of Infectious Diseases in Infancy

    PubMed Central

    Liao, Sui-Ling; Lai, Shen-Hao; Tsai, Ming-Han; Hua, Man-Chin; Yeh, Kuo-Wei; Su, Kuan-Wen; Chiang, Chi-Hsin; Huang, Shih-Yin; Kao, Chuan-Chi; Yao, Tsung-Chieh; Huang, Jing-Long

    2016-01-01

    Reports on the effect of prenatal vitamin D status on fetal immune development and infectious diseases in childhood are limited. The aim of this study was to investigate the role of maternal and cord blood vitamin D levels in TLR-related innate immunity and their effect on infectious outcome. Maternal and cord blood 25(OH)D levels were examined from 372 maternal-neonatal pairs, and their correlation with TLR-triggered TNF-α, IL-6, and IL-10 responses at birth was assessed. Clinical outcomes related to infection at 12 months of age were also evaluated. The results showed that 75% of the pregnant mothers and 75.8% of the neonates were vitamin D deficient. There was a high correlation between maternal and cord 25(OH)D levels (r = 0.67, p < 0.001). Maternal vitamin D level was inversely correlated with the IL-10 response to TLR3 (p = 0.004) and TLR7-8 stimulation (p = 0.006). However, none of the TLR-triggered cytokine productions were associated with cord 25(OH)D concentration. There was no relationship between maternal or cord blood vitamin D status and infectious diseases during infancy. In conclusion, our study showed that maternal vitamin D, but not cord vitamin D level, was associated with the viral TLR-triggered IL-10 response. PMID:27298518

  19. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    Recent advances in intelligence technology have boosted the development of micro Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme has resulted in the so-called jitter effect, in which jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization of unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, the jitter noise. Using this non-statistical method, for each single pixel, a deterministic blind source separation (BSS) process can then be carried out independently, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallel-structured independent component analysis (ICA) algorithm has been implemented on both Field Programmable Gate Array (FPGA) and Application

  20. FPGA design for constrained energy minimization

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Chang, Chein-I.; Cao, Mang

    2004-02-01

    The Constrained Energy Minimization (CEM) algorithm has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in hardware architecture lies in the computation of the inverse of the data correlation matrix performed in the CEM, which requires a complete set of data samples. In order to cope with this problem, the data correlation matrix must be calculated in a causal manner, which only needs the data samples up to the sample at the time it is processed. This paper presents a Field Programmable Gate Array (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is to use the COordinate Rotation DIgital Computer (CORDIC) algorithm, which can convert a Givens rotation of a vector into a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in hardware architecture, and therefore in FPGA. Since the computation of the inverse of the data correlation matrix involves a series of Givens rotations, the use of the CORDIC algorithm allows the causal CEM to perform real-time processing in FPGA. In this paper, an FPGA implementation of the causal CEM is studied and its detailed architecture is described.
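
    The rotation-to-shift-add conversion that CORDIC performs can be illustrated with a short software model. This is a floating-point sketch for illustration only; the FPGA realization would use fixed-point adders and hard-wired shifts, and this is not the paper's actual design:

```python
import math

def cordic_rotate(x, y, theta, iterations=32):
    """Rotate the vector (x, y) by theta radians with CORDIC:
    each micro-rotation by atan(2^-i) needs only a shift
    (multiplication by 2^-i) and an add/subtract."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Cumulative gain introduced by the un-normalized micro-rotations.
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0      # steer toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x / gain, y / gain

# Rotating (1, 0) by 30 degrees should approximate (cos 30, sin 30).
cx, cy = cordic_rotate(1.0, 0.0, math.pi / 6)
```

In vectoring mode the same iteration zeroes one component, which is exactly the Givens-rotation step used when triangularizing the correlation matrix.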

  1. Single Event Transients in Voltage Regulators for FPGA Power Supply Applications

    NASA Technical Reports Server (NTRS)

    Poivey, Christian; Sanders, Anthony; Kim, Hak; Phan, Anthony; Forney, Jim; LaBel, Kenneth A.; Karsh, Jeremy; Pursley, Scott; Kleyner, Igor; Katz, Richard

    2006-01-01

    As with other bipolar analog devices, voltage regulators are known to be sensitive to single event transients (SET). In typical applications, large output capacitors are used to provide noise immunity. Therefore, since SET amplitude and duration are generally small, they are often of secondary importance due to this capacitance filtering. In low-voltage applications, however, even small SETs are a concern. Over-voltages may cause destructive conditions. Under-voltages may cause functional interrupts and may also trigger electrical latchup conditions. In addition, internal protection circuits, which are affected by load as well as internal thermal effects, can also be triggered by heavy ions, causing dropouts or shutdowns ranging from milliseconds to seconds. In the case of FPGA power supply applications, SETs are critical. For example, in the case of the Actel RTAX FPGA family, the core power supply voltage is 1.5V. The manufacturer specifies an absolute maximum rating of 1.6V and recommended operating conditions between 1.425V and 1.575V. Therefore, according to the manufacturer, any transient of amplitude greater than 75 mV can disrupt normal circuit functions, and overvoltages greater than 100 mV may damage the FPGA. We tested five low-dropout voltage regulators for SET sensitivity under a large range of circuit application conditions.

  2. FPGA based Smart Wireless MIMO Control System

    NASA Astrophysics Data System (ADS)

    Usman Ali, Syed M.; Hussain, Sajid; Akber Siddiqui, Ali; Arshad, Jawad Ali; Darakhshan, Anjum

    2013-12-01

    In our present work, we have successfully designed and developed an FPGA based smart wireless MIMO (Multiple Input & Multiple Output) system capable of controlling multiple industrial process parameters such as temperature, pressure, stress, and vibration. To achieve this task we have used a Xilinx Spartan 3E FPGA (Field Programmable Gate Array) instead of conventional microcontrollers. The FPGA kit is connected to a PC via RF transceivers, which have a working range of about 100 meters. The developed smart system is capable of performing the control task assigned to it successfully. We have also provided a provision in our proposed system so that it can be accessed for monitoring and control through the web and GSM as well. Our proposed system can be equally applied to all the hazardous and rugged industrial environments where a conventional system cannot work effectively.

  3. Optoelectronic data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Xin; Liu, Chunyang; Song, De; Tong, Zhiguo; Liu, Xiangqing

    2015-11-01

    An optoelectronic data acquisition system is designed based on an FPGA. An EP1C3T144C8 FPGA, from Altera's Cyclone device family, is used as the center of logic control; an XTP2046 chip is used as the A/D converter; a host computer that communicates with the data acquisition system through an RS-232 serial communication interface is used as the display device; and a photoresistor is used as the photosensor. We use Verilog HDL to write the logic control code for the FPGA. ModelSim simulation verified that the timing sequence is correct. Test results indicate that this system meets the design requirements and shows fast response and stable operation in actual hardware circuit tests.

  4. Common Asthma Triggers

    MedlinePlus

    ... your bedding on the hottest water setting. Outdoor Air Pollution Outdoor air pollution can trigger an asthma attack. This pollution can ... your newspaper to plan your activities for when air pollution levels will be low. Cockroach Allergen Cockroaches and ...

  5. Compilation Techniques for Core Plus FPGA Systems

    NASA Technical Reports Server (NTRS)

    Conte, Tom

    2001-01-01

    The overall system architecture targeted in this study is a core-plus-FPGA design, which is composed of a core VLIW DSP with on-chip memory and a set of special-purpose functional units implemented using FPGAs. A figure is given which shows the overall organization of the core-plus-FPGA system. It is important to note that this architecture is relatively simple in concept and can be built from off-the-shelf commercial components, such as one of the Texas Instruments 320C6x family of DSPs for the core processor.

  6. The Design of an Upgrade to the Level-1 Trigger for the Endcap Muon System of the CMS Experiment

    NASA Astrophysics Data System (ADS)

    Carver, Matthew

    2014-03-01

    We present a description of a novel track finding algorithm and associated hardware to be implemented as an upgrade to the L1-Trigger of the endcap muon system of the CMS experiment at the LHC in Geneva, Switzerland. To handle the increased luminosity and pile-up expected from the LHC after the current shutdown, the algorithm uses predefined patterns to identify tracks left by muons in the detector at a rate of 40 MHz. If multiple tracks are found, they are sorted by the quality of the muon, defined by the number of hit detectors and the straightness of the pattern. The track finding logic is pipelined such that the trigger will operate with no deadtime and has an available latency on the order of 1 μs to make a decision. The electronics board housing this logic makes use of state-of-the-art field-programmable gate arrays and large memory lookup tables to accomplish its track finding purpose. Preliminary studies on simulated data show roughly 99.5% efficiency for both single and multiple muon tracks.
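
    The match-then-sort-by-quality idea can be sketched in software. Everything here, the station layout, the tolerance, and the exact quality definition, is an illustrative assumption rather than the actual CMS firmware logic:

```python
def match_count(hits, pattern, tol=1):
    """Count stations whose recorded hit position lies within tol
    units of the pattern's expected position at that station."""
    return sum(1 for station, expected in enumerate(pattern)
               if station in hits and abs(hits[station] - expected) <= tol)

def find_tracks(hits, patterns, min_stations=3):
    """Return indices of matched patterns, best quality first:
    more matched stations win, then straighter (flatter) patterns."""
    candidates = []
    for pid, pattern in enumerate(patterns):
        n = match_count(hits, pattern)
        if n >= min_stations:
            straightness = -(max(pattern) - min(pattern))
            candidates.append(((n, straightness), pid))
    candidates.sort(reverse=True)
    return [pid for _, pid in candidates]

# A straight hit pattern should match the straight template, not the bent one.
best = find_tracks({0: 10, 1: 11, 2: 10, 3: 10},
                   [[10, 10, 10, 10], [10, 12, 14, 16]])
```

In hardware this comparison runs against all patterns in parallel every bunch crossing, which is what keeps the latency within the 1 μs budget.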

  7. GPU/MIC Acceleration of the LHC High Level Trigger to Extend the Physics Reach at the LHC

    SciTech Connect

    Halyo, Valerie; Tully, Christopher

    2015-04-14

    The quest for rare new physics phenomena leads the PI [3] to propose evaluation of coprocessors based on Graphics Processing Units (GPUs) and the Intel Many Integrated Core (MIC) architecture for integration into the trigger system at the LHC. This will require development of a new massively parallel implementation of the well-known Combinatorial Track Finder, which uses the Kalman Filter, to accelerate processing of data from the silicon pixel and microstrip detectors and reconstruct the trajectories of all charged particles down to momenta of 100 MeV. It is expected to run at least one order of magnitude faster than an equivalent algorithm on a quad-core CPU for extreme pileup scenarios of 100 interactions per bunch crossing. The new tracking algorithms will be developed and optimized separately on the GPU and Intel MIC and then evaluated against each other for performance and power efficiency. The results will be used to project the cost of the proposed hardware architectures for the HLT server farm, taking into account the long-term projections of the main vendors in the market (AMD, Intel, and NVIDIA) over the next 10 years. Extensive experience and familiarity of the PI with the LHC tracker and trigger requirements led to the development of a complementary tracking algorithm that is described in [arxiv: 1305.4855] and [arxiv: 1309.6275] and in preliminary results accepted by JINST.

  8. Experimental evidence for seismically initiated gas bubble nucleation and growth in groundwater as a mechanism for coseismic borehole water level rise and remotely triggered seismicity

    NASA Astrophysics Data System (ADS)

    Crews, Jackson B.; Cooper, Clay A.

    2014-09-01

    Changes in borehole water levels and remotely triggered seismicity occur in response to near and distant earthquakes at locations around the globe, but the mechanisms for these phenomena are not well understood. Experiments were conducted to show that seismically initiated gas bubble growth in groundwater can trigger a sustained increase in pore fluid pressure consistent in magnitude with observed coseismic borehole water level rise, constituting a physically plausible mechanism for remote triggering of secondary earthquakes through the reduction of effective stress in critically loaded geologic faults. A portion of the CO2 degassing from the Earth's crust dissolves in groundwater where seismic Rayleigh and P waves cause dilational strain, which can reduce pore fluid pressure to or below the bubble pressure, triggering CO2 gas bubble growth in the saturated zone, indicated by a spontaneous buildup of pore fluid pressure. Excess pore fluid pressure was measured in response to the application of 0.1-1.0 MPa, 0.01-0.30 Hz confining stress oscillations to a Berea sandstone core flooded with initially subsaturated aqueous CO2, under conditions representative of a confined aquifer. Confining stress oscillations equivalent to the dynamic stress of the 28 June 1992 Mw 7.3 Landers, California, earthquake Rayleigh wave as it traveled through the Long Valley caldera, and Parkfield, California, increased the pore fluid pressure in the Berea core by an average of 36 ± 15 cm and 23 ± 15 cm of equivalent freshwater head, respectively, in agreement with 41.8 cm and 34 cm rises recorded in wells at those locations.

  9. A parallel FPGA implementation for real-time 2D pixel clustering for the ATLAS Fast Tracker Processor

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C. L.; Gkaitatzis, S.; Annovi, A.; Beretta, M.; Kordas, K.; Nikolaidis, S.; Petridou, C.; Volpi, G.

    2014-10-01

    The parallel 2D pixel clustering FPGA implementation used for the input system of the ATLAS Fast TracKer (FTK) processor is presented. The input system for the FTK processor will receive data from the Pixel and micro-strip detectors from the inner ATLAS read-out drivers (RODs) at full rate, for a total of 760 Gbps, as sent by the RODs after level-1 triggers. Clustering serves two purposes: the first is to reduce the high rate of the received data before further processing, and the second is to determine the cluster centroid to obtain the best spatial measurement. For the pixel detectors, the clustering is implemented with a 2D-clustering algorithm that takes advantage of a moving-window technique to minimize the logic required for cluster identification. The cluster detection window size can be adjusted to optimize the cluster identification process. Additionally, the implementation can be parallelized by instantiating multiple cores to identify different clusters independently, thus exploiting more FPGA resources. This flexibility makes the implementation suitable for a variety of demanding image processing applications. The implementation is robust against bit errors in the input data stream and drops all data that cannot be identified. In the unlikely event of missing control words, the implementation will ensure stable data processing by inserting the missing control words into the data stream. The 2D pixel clustering implementation has been developed and tested in both single-flow and parallel versions. The first parallel version, with 16 parallel cluster identification engines, is presented. The input data from the RODs are received through S-Links, and the processing units that follow the clustering implementation also require a single data stream; therefore, data parallelizing (demultiplexing) and serializing (multiplexing) modules are introduced in order to accommodate the parallelized version and restore the data stream afterwards.
The results of the first hardware tests of
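
    The core of the 2D-clustering step described in this record, grouping touching hit pixels and computing their centroid, can be modeled with a brief software sketch. It assumes 8-connectivity and omits the moving-window optimization; it is an illustration, not the FTK firmware:

```python
def cluster_centroids(hit_pixels):
    """Group 8-connected hit pixels into clusters and return each
    cluster's centroid, the best spatial estimate of the hit."""
    remaining = set(hit_pixels)
    centroids = []
    while remaining:
        stack = [remaining.pop()]       # seed a new cluster
        cluster = []
        while stack:
            r, c = stack.pop()
            cluster.append((r, c))
            for dr in (-1, 0, 1):       # collect touching neighbors
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in remaining:
                        remaining.remove(n)
                        stack.append(n)
        rows = [p[0] for p in cluster]
        cols = [p[1] for p in cluster]
        centroids.append((sum(rows) / len(cluster),
                          sum(cols) / len(cluster)))
    return centroids

# Two clusters: an L-shaped triple and an isolated pixel.
cents = sorted(cluster_centroids([(0, 0), (0, 1), (1, 0), (5, 5)]))
```

The moving-window trick in the actual design bounds how much of this neighbor search must be kept in flip-flops at any one time.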

  10. Testing Microshutter Arrays Using Commercial FPGA Hardware

    NASA Technical Reports Server (NTRS)

    Rapchun, David

    2008-01-01

    NASA is developing micro-shutter arrays for the Near Infrared Spectrometer (NIRSpec) instrument on the James Webb Space Telescope (JWST). These micro-shutter arrays allow NIRSpec to do Multi Object Spectroscopy, a key part of the mission. Each array consists of 62414 individual 100 x 200 micron shutters. These shutters are magnetically opened and held electrostatically. Individual shutters are then programmatically closed using a simple row/column addressing technique. A common approach to providing these data/clock patterns is to use a Field Programmable Gate Array (FPGA). Such devices require complex VHSIC Hardware Description Language (VHDL) programming and custom electronic hardware. Due to JWST's rapid schedule on the development of the micro-shutters, rapid changes were required to the FPGA code to facilitate new approaches being discovered to optimize the array performance. Such rapid changes simply could not be made using conventional VHDL programming. Subsequently, National Instruments introduced an FPGA product that could be programmed through a LabVIEW interface. Because LabVIEW programming is considerably easier than VHDL programming, this method was adopted and brought success. The software/hardware allowed rapid changes to the FPGA code and timely results of new micro-shutter array performance data. As a result, considerable labor hours and project funds were conserved.
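
    The row/column addressing technique can be sketched as follows. The function and data layout are hypothetical, purely to illustrate how scanning one row at a time while driving per-column data lines selects individual shutters to close:

```python
def address_sequence(close_set, n_rows, n_cols):
    """Generate a per-row (row, column-bit-pattern) drive sequence
    that closes the shutters in close_set: rows are scanned one at a
    time, and the column lines carry that row's close pattern."""
    seq = []
    for r in range(n_rows):
        bits = [1 if (r, c) in close_set else 0 for c in range(n_cols)]
        if any(bits):                 # skip rows with nothing to close
            seq.append((r, bits))
    return seq

# Close shutters (0,1) and (2,0) in a tiny 3-row x 2-column array.
seq = address_sequence({(0, 1), (2, 0)}, n_rows=3, n_cols=2)
```

An FPGA emits exactly this kind of row-scan/column-pattern timing, which is why the arrays are driven from one in the first place.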

  11. Experiences on 64 and 150 FPGA Systems

    SciTech Connect

    Storaasli, Olaf O; Strenski, Dave

    2008-01-01

    Four FPGA systems were evaluated: the Cray XD1 systems with 6 FPGAs at ORNL and Cray, the Cray XD1 system with 150 FPGAs at NRL*, and the 64 FPGAs on Edinburgh's Maxwell. Their hardware and software architectures, programming tools and performance on scientific applications are discussed. FPGA speedup (over a 2.2 GHz Opteron) of 10X was typical for matrix equation solution, molecular dynamics and weather/climate codes, and up to 100X for human genome DNA sequencing. Large genome comparisons requiring 12.5 years for an Opteron took less than 24 hours on NRL's Cray XD1 with 150 Virtex FPGAs, for a 7,350X speedup. The FPGA design uses a pipeline so that each query and database character is compared in parallel, resulting in a table of scores. Genome Sequencing Results: FPGA timing results (for up to 150 FPGAs) were obtained and compared with up to 150 Opterons for sequences of varying size and complexity (e.g., the 4GB openfpga.org human DNA benchmark and 155M human vs. 166M mouse DNA). 1 FPGA: Bacillus_anthracis DNA compare: Genomes

  12. FPGA Sequencer for Radar Altimeter Applications

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew C.; Pollard, Brian D.; Chen, Curtis W.

    2011-01-01

    A sequencer for a radar altimeter provides accurate attitude information for a reliable soft landing of the Mars Science Laboratory (MSL). This is a field-programmable-gate-array (FPGA)-only implementation. A table loaded externally into the FPGA controls timing, processing, and decision structures. The radar is memoryless and does not use previous acquisitions to assist in the current acquisition. All cycles complete in exactly 50 milliseconds, regardless of range or whether a target was found. A RAM (random access memory) within the FPGA holds instructions for up to 15 sets. For each set, timing is run, echoes are processed, and a comparison is made. If a target is seen, more detailed processing is run on that set. If no target is seen, the next set is tried. When all sets have been run, the FPGA terminates and waits for the next 50-millisecond event. This setup simplifies testing and improves reliability. A single Virtex chip does the work of an entire assembly. Output products require minor processing to become range and velocity. This technology is the heart of the Terminal Descent Sensor, which is an integral part of the Entry, Descent, and Landing system for MSL. In addition, it is a strong candidate for manned landings on Mars or the Moon.
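
    The table-driven try-each-set cycle described above can be modeled in a few lines. The callback names and return convention are assumptions made for illustration; the real sequencer also pads every cycle to exactly 50 ms regardless of outcome:

```python
def run_cycle(instruction_sets, acquire, detect, detail):
    """Table-driven sequencer cycle: run each instruction set in order;
    on the first set whose echo contains a target, run the detailed
    processing for that set, otherwise fall through to the next set."""
    for s in instruction_sets:
        echo = acquire(s)            # run timing, process echoes
        if detect(echo):             # comparison against the table's criteria
            return detail(s, echo)   # target found: refine this set
    return None                      # no target in any set this cycle

# Toy run: echoes are 10x the set id; the detector fires at >= 20.
result = run_cycle([1, 2, 3],
                   acquire=lambda s: s * 10,
                   detect=lambda e: e >= 20,
                   detail=lambda s, e: (s, e))
```

Keeping the decision structure in a loaded table, as the abstract notes, means new search strategies need only a table update, not a logic change.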

  13. FPGA implementation of vision algorithms for small autonomous robots

    NASA Astrophysics Data System (ADS)

    Anderson, J. D.; Lee, D. J.; Archibald, J. K.

    2005-10-01

    The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.

  14. Calorimetry Triggering in ATLAS

    SciTech Connect

    Igonkina, O.; Achenbach, R.; Adragna, P.; Aharrouche, M.; Alexandre, G.; Andrei, V.; Anduaga, X.; Aracena, I.; Backlund, S.; Baines, J.; Barnett, B.M.; Bauss, B.; Bee, C.; Behera, P.; Bell, P.; Bendel, M.; Benslama, K.; Berry, T.; Bogaerts, A.; Bohm, C.; Bold, T.; /UC, Irvine /AGH-UST, Cracow /Birmingham U. /Barcelona, IFAE /CERN /Birmingham U. /Rutherford /Montreal U. /Santa Maria U., Valparaiso /DESY /DESY, Zeuthen /Geneva U. /City Coll., N.Y. /Barcelona, IFAE /CERN /Birmingham U. /Kirchhoff Inst. Phys. /Birmingham U. /Lisbon, LIFEP /Rio de Janeiro Federal U. /City Coll., N.Y. /Birmingham U. /Copenhagen U. /Copenhagen U. /Brookhaven /Rutherford /Royal Holloway, U. of London /Pennsylvania U. /Montreal U. /SLAC /CERN /Michigan State U. /Chile U., Catolica /City Coll., N.Y. /Oxford U. /La Plata U. /McGill U. /Mainz U., Inst. Phys. /Hamburg U. /DESY /DESY, Zeuthen /Geneva U. /Queen Mary, U. of London /CERN /Rutherford /Rio de Janeiro Federal U. /Birmingham U. /Montreal U. /CERN /Kirchhoff Inst. Phys. /Liverpool U. /Royal Holloway, U. of London /Pennsylvania U. /Kirchhoff Inst. Phys. /Geneva U. /Birmingham U. /NIKHEF, Amsterdam /Rutherford /Royal Holloway, U. of London /Rutherford /Royal Holloway, U. of London /AGH-UST, Cracow /Mainz U., Inst. Phys. /Mainz U., Inst. Phys. /Birmingham U. /Hamburg U. /DESY /DESY, Zeuthen /Geneva U. /Kirchhoff Inst. Phys. /Michigan State U. /Stockholm U. /Stockholm U. /Birmingham U. /CERN /Montreal U. /Stockholm U. /Arizona U. /Regina U. /Regina U. /Rutherford /NIKHEF, Amsterdam /Kirchhoff Inst. Phys. /DESY /DESY, Zeuthen /City Coll., N.Y. /University Coll. London /Humboldt U., Berlin /Queen Mary, U. of London /Argonne /LPSC, Grenoble /Arizona U. /Kirchhoff Inst. Phys. /Birmingham U. /Antonio Narino U. /Hamburg U. /DESY /DESY, Zeuthen /Kirchhoff Inst. Phys. /Birmingham U. /Chile U., Catolica /Indiana U. /Manchester U. /Kirchhoff Inst. Phys. /Rutherford /City Coll., N.Y. /Stockholm U. /La Plata U. /Antonio Narino U. /Queen Mary, U. 
of London /Kirchhoff Inst. Phys. /Antonio Narino U. /Pavia U. /City Coll., N.Y. /Mainz U., Inst. Phys. /Mainz U., Inst. Phys. /Pennsylvania U. /Barcelona, IFAE /Barcelona, IFAE /Chile U., Catolica /Genoa U. /INFN, Genoa /Rutherford /Barcelona, IFAE /Nevis Labs, Columbia U. /CERN /Antonio Narino U. /McGill U. /Rutherford /Santa Maria U., Valparaiso /Rutherford /Chile U., Catolica /Brookhaven /Oregon U. /Mainz U., Inst. Phys. /Barcelona, IFAE /McGill U. /Antonio Narino U. /Antonio Narino U. /Kirchhoff Inst. Phys. /Sydney U. /Rutherford /McGill U. /McGill U. /Pavia U. /Genoa U. /INFN, Genoa /Kirchhoff Inst. Phys. /Kirchhoff Inst. Phys. /Mainz U., Inst. Phys. /Barcelona, IFAE /SLAC /Stockholm U. /Moscow State U. /Stockholm U. /Birmingham U. /Kirchhoff Inst. Phys. /DESY /DESY, Zeuthen /Birmingham U. /Geneva U. /Oregon U. /Barcelona, IFAE /University Coll. London /Royal Holloway, U. of London /Birmingham U. /Mainz U., Inst. Phys. /Birmingham U. /Birmingham U. /Oregon U. /La Plata U. /Geneva U. /Chile U., Catolica /McGill U. /Pavia U. /Barcelona, IFAE /Regina U. /Birmingham U. /Birmingham U. /Kirchhoff Inst. Phys. /Oxford U. /CERN /Kirchhoff Inst. Phys. /UC, Irvine /UC, Irvine /Wisconsin U., Madison /Rutherford /Mainz U., Inst. Phys. /CERN /Geneva U. /Copenhagen U. /City Coll., N.Y. /Wisconsin U., Madison /Rio de Janeiro Federal U. /Wisconsin U., Madison /Stockholm U. /University Coll. London

    2011-12-08

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid-argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  15. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of recently developed hybrid Virtex-4FX field-programmable gate arrays (FPGAs) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power, extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
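
    A linear-kernel SVM decision reduces to dot products plus a bias, which is why it maps well onto FPGA multiply-accumulate resources. The following one-vs-rest sketch uses made-up class names and two-band weights; it is not the EO-1 model, only the general decision rule:

```python
def classify_pixel(x, models):
    """Linear-kernel SVM, one-vs-rest: score each class as w.x + b
    and return the class with the highest score. Only dot products
    are needed, so each class maps to a MAC chain in hardware."""
    best_label, best_score = None, float("-inf")
    for label, (w, b) in models.items():
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy two-band models: "water" keys on band 0, "snow" on band 1.
label = classify_pixel([0.2, 0.9],
                       {"water": ([1.0, 0.0], 0.0),
                        "snow":  ([0.0, 1.0], 0.0)})
```

Expanded-linear and polynomial kernels change only the scoring expression, not this argmax structure.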

  16. Triggering Klystrons

    SciTech Connect

    Stefan, Kelton D.; /Purdue U. /SLAC

    2010-08-25

    To determine whether klystrons will perform to the specifications of the LCLS (Linac Coherent Light Source) project, a new digital trigger controller is needed for the Klystron/Microwave Department Test Laboratory. The controller needed to be programmed, and Windows-based user interface software needed to be written to interface with the device over USB (Universal Serial Bus). Programming the device consisted of writing logic in VHDL (VHSIC (Very High Speed Integrated Circuits) Hardware Description Language), and the Windows interface software was written in C++. Xilinx ISE (Integrated Software Environment) was used to compile the VHDL code and program the device, and Microsoft Visual Studio 2005 was used to compile the C++-based Windows software. The device was programmed in such a way as to easily allow read/write operations using a simple addressing model, and Windows software was developed to interface with the device over a USB connection. A method of setting configuration registers in the trigger device is absolutely necessary to the development of a new triggering system, and the method developed fulfills this need adequately. More work is needed before the new trigger system is ready for use: the configuration registers in the device need to be fully integrated with the logic that will generate the RF signals, and the system will need to be tested extensively to determine whether it meets the requirements for low-noise trigger outputs.

  17. Trigger and Readout System for the Ashra-1 Detector

    NASA Astrophysics Data System (ADS)

    Aita, Y.; Aoki, T.; Asaoka, Y.; Morimoto, Y.; Motz, H. M.; Sasaki, M.; Abiko, C.; Kanokohata, C.; Ogawa, S.; Shibuya, H.; Takada, T.; Kimura, T.; Learned, J. G.; Matsuno, S.; Kuze, S.; Binder, P. M.; Goldman, J.; Sugiyama, N.; Watanabe, Y.

    A highly sophisticated trigger and readout system has been developed for the All-sky Survey High Resolution Air-shower (Ashra) detector. The Ashra-1 detector has a 42-degree-diameter field of view. Detecting Cherenkov and fluorescence light against the large background in such a wide field of view requires a finely segmented, high-speed trigger and readout system. The system is composed of an optical-fiber image transmission system, a 64 × 64 channel trigger sensor, and an FPGA-based trigger logic processor. The system typically processes the image within 10 to 30 ns and opens the shutter on the fine CMOS sensor. The 64 × 64 coarse split image is transferred via a 64 × 64 precisely aligned optical fiber bundle to a photon sensor. Current signals from the photon sensor are discriminated by custom-made trigger amplifiers. The FPGA-based processor processes the 64 × 64 hit pattern, and the corresponding partial area of the fine image is acquired. A commissioning Earth-skimming tau neutrino observational search was carried out with this trigger system. In addition to the geometrical advantage of the Ashra observational site, the excellent tau shower axis measurement based on fine imaging and the night-sky background rejection based on fine, fast imaging allow a zero-background tau shower search. Adoption of the optical fiber bundle and trigger LSI realized a 4k-channel trigger system at low cost. Detectability of tau showers is also confirmed by simultaneously observed Cherenkov air showers. Reduction of the trigger threshold appears to enhance the effective area, especially in the PeV tau neutrino energy region. A new two-dimensional trigger LSI was introduced and the trigger threshold was lowered. A new calibration system for the trigger system was recently developed and introduced to the Ashra detector.
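    The core of the trigger logic described above is a reduction of a 64 × 64 hit bitmap to a fire/no-fire decision plus the coordinates of the partial image region to read out. The following is a rough software model of that idea only, under assumed parameters (the minimum-hit threshold and bounding-box readout are illustrative, not the Ashra firmware):

```python
import numpy as np

def trigger(hits: np.ndarray, min_hits: int = 3):
    """Reduce a 64x64 boolean hit map to (fired, bounding box).

    Fires when at least `min_hits` pixels are hit (threshold is illustrative)
    and reports the bounding box of the hit pattern for partial readout.
    """
    assert hits.shape == (64, 64)
    rows, cols = np.nonzero(hits)
    if rows.size < min_hits:
        return False, None
    return True, (int(rows.min()), int(rows.max()),
                  int(cols.min()), int(cols.max()))

hits = np.zeros((64, 64), dtype=bool)
hits[10:13, 40] = True                      # a short track-like cluster
fired, box = trigger(hits)
print(fired, box)   # → True (10, 12, 40, 40)
```

In an FPGA the same reduction would be done combinationally on the hit bus within the 10-30 ns budget quoted above; the Python loop-free formulation mirrors that parallel structure.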

  18. Signal-on electrochemical detection of antibiotics at zeptomole level based on target-aptamer binding triggered multiple recycling amplification.

    PubMed

    Wang, Hongzhi; Wang, Yu; Liu, Su; Yu, Jinghua; Guo, Yuna; Xu, Ying; Huang, Jiadong

    2016-06-15

    In this work, a signal-on electrochemical DNA sensor based on multiple amplification for ultrasensitive detection of antibiotics is reported. In the presence of the target, the ingeniously designed hairpin probe (HP1) is opened and polymerase-assisted target recycling amplification is triggered, resulting in the autonomous generation of a secondary target. It is worth noting that the produced secondary target can not only hybridize with other HP1, but also displace the Helper from the electrode. Consequently, the methylene blue labeled HP2 forms a closed probe structure, and an increase in signal is monitored. The increasing current provides ultrasensitive electrochemical detection of antibiotics down to 1.3 fM. To the best of our knowledge, this is the first report of multiple recycling amplification combined with a signal-on sensing strategy being used for quantitative determination of antibiotics. It could further be used as a general strategy associated with more analytical techniques toward the detection of a wide spectrum of analytes. Thus, it holds great potential for the development of ultrasensitive biosensing platforms for applications in bioanalysis, disease diagnostics, and clinical biomedicine. PMID:26878484

  19. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resources, speed, and robustness. PMID:23867746
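    The angle-level parallelism the abstract exploits comes from the structure of Hough voting itself: every theta bin accumulates votes independently of every other, so each can be assigned its own pipeline. A minimal software model of that voting scheme (parameterization and bin counts are illustrative, not the paper's):

```python
import numpy as np

def hough_lines(edges: np.ndarray, n_theta: int = 180):
    """Vote in (theta, rho) space; `edges` is a 2D boolean edge map.

    Each theta iteration is independent of the others, which is exactly
    the parallelism an FPGA pipeline-per-angle design exploits.
    """
    ys, xs = np.nonzero(edges)
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=np.int32)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    for i, t in enumerate(thetas):          # parallelizable across i
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc, (i, rho), 1)         # accumulate votes for this angle
    return acc, thetas, diag

# A horizontal line y = 5 should produce a dominant peak at theta = 90 deg.
edges = np.zeros((32, 32), dtype=bool)
edges[5, :] = True
acc, thetas, diag = hough_lines(edges)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(np.degrees(thetas[i]), j - diag)   # ≈ 90.0, rho = 5
```

The hardware version replaces the trigonometry with fixed-point lookup tables and the accumulator with on-chip RAM, which is why the paper can avoid off-chip memory entirely.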

  20. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resources, speed, and robustness. PMID:23867746

  1. Diethyldithiocarbamate induces apoptosis in neuroblastoma cells by raising the intracellular copper level, triggering cytochrome c release and caspase activation.

    PubMed

    Matias, Andreza C; Manieri, Tânia M; Cipriano, Samantha S; Carioni, Vivian M O; Nomura, Cassiana S; Machado, Camila M L; Cerchiaro, Giselle

    2013-02-01

    Dithiocarbamates are nitrogen- and sulfur-containing compounds commonly used in pharmacology, medicine, and agriculture. The molecular effects of dithiocarbamates on neuronal cell systems are not fully understood, especially in terms of their ability to accumulate copper ions inside the cell. In this work, the molecular effects of N,N-diethyldithiocarbamate (DEDTC) were studied in human SH-SY5Y neuroblastoma cells to determine the role of copper in DEDTC toxicity and the pathway triggered in the cell by the Cu-DEDTC complex. From concentration-dependent studies, we found that 5 μM of this compound induced a drastic decrease in viable cells, with a concomitant accumulation of intracellular copper resulting from complexation with DEDTC, as measured by atomic absorption spectroscopy. The mechanism of DEDTC-induced apoptosis in this neuronal cell model is thought to proceed through death-receptor signaling triggered by the DEDTC-copper complex at low concentration, associated with the activation of caspase 8. Our results indicated that the mechanism of cell death involves cytochrome c release, forming the apoptosome together with Apaf-1 and caspase 9, converting caspase 9 into its active form and allowing it to activate caspase 3, as observed by immunofluorescence. This pathway is induced by the cytotoxic effects that occur when DEDTC forms a complex with the copper ions present in the culture medium and transports them into the cell, suggesting that DEDTC by itself is not able to cause cell death and that the major effect in neuroblastoma cells comes from its copper complex. The present study thus suggests a sequence linking low concentrations of DEDTC in the extracellular medium, the absorption and accumulation of copper in the cell, and the resulting apoptotic events. PMID:22951949

  2. Multigrid shallow water equations on an FPGA

    NASA Astrophysics Data System (ADS)

    Jeffress, Stephen; Duben, Peter; Palmer, Tim

    2015-04-01

    A novel computing technology for multigrid shallow water equations is investigated. As power consumption begins to constrain traditional supercomputing advances, weather and climate simulators are exploring alternative technologies that achieve efficiency gains through massively parallel and low power architectures. In recent years FPGA implementations of reduced complexity atmospheric models have shown accelerated speeds and reduced power consumption compared to multi-core CPU integrations. We continue this line of research by designing an FPGA dataflow engine for a multigrid version of the 2D shallow water equations. The multigrid algorithm couples grids of variable resolution to improve accuracy. We show that a significant reduction of precision in the floating point representation of the fine grid variables allows greater parallelism and thus improved overall performance while maintaining accurate integrations. Preliminary designs have been constructed by software emulation. Results of the hardware implementation will be presented at the conference.

  3. Regular FPGA based on regular fabric

    NASA Astrophysics Data System (ADS)

    Xun, Chen; Jianwen, Zhu; Minxuan, Zhang

    2011-08-01

    In the sub-wavelength regime, design for manufacturability (DFM) becomes increasingly important for field programmable gate arrays (FPGAs). In this paper, an automated tile generation flow targeting micro-regular fabric is reported. Using a publicly accessible, well-documented academic FPGA as a case study, we found that, compared to previously reported tile generators, our generated micro-regular tile incurs less than 10% area overhead, which could potentially be recovered by process window optimization thanks to its superior printability. In addition, we demonstrate that on 45 nm technology, the generated FPGA tile reduces lithography-induced process variation by 33% and reduces the probability of failure by 21.2%. If a further 10% area overhead can be recovered by enhanced resolution, we can achieve a variation reduction of 93.8% and reduce the probability of failure by 16.2%.

  4. FPGA implementation of robust Capon beamformer

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Zmuda, Henry; Li, Jian; Du, Lin; Sheplak, Mark

    2012-03-01

    The Capon beamforming algorithm is an optimal spatial filtering algorithm used in various signal processing applications where excellent interference rejection performance is required, such as radar and sonar systems and smart antenna systems for wireless communications. Its lack of robustness, however, means that it is vulnerable to array calibration errors and other model errors. To overcome this problem, numerous robust Capon beamforming algorithms have been proposed, which are much more promising for practical applications. In this paper, an FPGA implementation of a robust Capon beamforming algorithm is investigated and presented. This realization takes an array output with 4 channels, computes the complex-valued adaptive weight vectors for beamforming with an 18-bit fixed-point representation, and runs at a 100 MHz clock on a Xilinx Virtex-4 FPGA. This work will be applied in our medical imaging project for breast cancer detection.

  5. FPGA Based Reconfigurable ATM Switch Test Bed

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.; Jones, Robert E.

    1998-01-01

    Various issues associated with the "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: advances in FPGAs make hardware emulation feasible for performance evaluation; hardware emulation can provide several orders of magnitude of speed-up over software simulation; and, due to the complexity of the hardware synthesis process, development for emulation is much more difficult than for simulation and requires knowledge of both networks and digital design.

  6. 3D FFTs on a Single FPGA

    PubMed Central

    Humphries, Benjamin; Zhang, Hansen; Sheng, Jiayi; Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    The 3D FFT is critical in many physical simulations and image processing applications. On FPGAs, however, the 3D FFT was thought to be inefficient relative to other methods such as convolution-based implementations of multi-grid. We find the opposite: a simple design, operating at a conservative frequency, takes 4 μs for 16³, 21 μs for 32³, and 215 μs for 64³ single-precision data points. The first two of these compare favorably with the 25 μs and 29 μs obtained running on a current Nvidia GPU. Of broader significance is that this is a critical piece in implementing a large-scale FPGA-based MD engine: even a single FPGA is capable of keeping the FFT off of the critical path for a large fraction of possible MD simulations. PMID:26594666
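    For the problem sizes quoted above (16³ to 64³), a convenient software reference when validating such a hardware pipeline is NumPy's `fftn`, with Parseval's theorem as a cheap end-to-end sanity check. The sketch below is illustrative only; the size, seed, and tolerance are arbitrary choices, not values from the paper.

```python
import numpy as np

n = 16
rng = np.random.default_rng(1)
x = rng.normal(size=(n, n, n)).astype(np.float32)   # single-precision input

X = np.fft.fftn(x)   # 3D FFT: successive 1D transforms along each axis

# Parseval check: energy in the spatial domain equals energy in the
# frequency domain divided by the number of points.
lhs = np.sum(np.abs(x) ** 2)
rhs = np.sum(np.abs(X) ** 2) / x.size
print(np.allclose(lhs, rhs, rtol=1e-4))   # → True
```

Bit-exact agreement with a fixed-point FPGA pipeline is not expected, which is why an energy-conservation check like this is a more practical cross-validation than element-wise comparison.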

  7. FPGA Flash Memory High Speed Data Acquisition

    NASA Technical Reports Server (NTRS)

    Gonzalez, April

    2013-01-01

    The purpose of this research is to design and implement a VHDL ONFI controller module for a Modular Instrumentation System. The goal of the Modular Instrumentation System is to have a low-power device that stores data and sends it at low speed to a processor. The benefit of such a system is an advantage over other purchased binary IP, since it allows NASA to re-use and modify the memory controller module. To accomplish the performance criteria of a low-power system, an in-house auxiliary board (Flash/ADC board), an FPGA development kit, a debug board, and a modular instrumentation board are jointly used for data acquisition. The Flash/ADC board contains four 1-MSPS input channels and an Open NAND Flash Interface (ONFI) memory module with an analog-to-digital converter. The ADC, data bits, and control line signals from the board are sent to a Microsemi/Actel FPGA development kit for VHDL programming of the flash memory WRITE, READ, READ STATUS, ERASE, and RESET operation waveforms using the Libero software. The debug board is used for verification of the analog input signal and communicates via a serial interface with the modular instrumentation. The scope of the new controller module was to find and develop an ONFI controller, with the debug board layout designed and completed for manufacture. Successful flash memory operation waveform test routines were completed, simulated, and tested to work on the FPGA board. Through connection of the Flash/ADC board with the FPGA, it was found that the device specifications were not being met, with Vdd reaching only half of its voltage. Further testing showed that the manufactured Flash/ADC board contained a misalignment in the ONFI memory module traces. The errors proved to be too great to fix in the time limit set for the project.

  8. TOT measurement implemented in FPGA TDC

    NASA Astrophysics Data System (ADS)

    Fan, Huan-Huan; Cao, Ping; Liu, Shu-Bin; An, Qi

    2015-11-01

    Time measurement plays a crucial role in particle identification in high energy physics experiments. With increasingly demanding physics goals and the development of electronics, modern time measurement systems need to meet the requirement of excellent resolution as well as high integration. Based on Field Programmable Gate Arrays (FPGAs), FPGA time-to-digital converters (TDCs) have become one of the most mature and prominent time measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the interval between the signal's leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, either the leading or the trailing. Generally, to measure the interval, two TDC channels need to be used at the same time, one for the leading edge and the other for the trailing; however, this method unavoidably increases the amount of FPGA resources used and reduces the TDC's integration. This paper presents a method of TOT measurement implemented in a Xilinx Virtex-5 FPGA. In this method, TOT measurement can be achieved using only one TDC input channel, while both the consumed resources and the time resolution are kept under control. Testing shows that this TDC can achieve a resolution better than 15 ps for leading-edge measurement and 37 ps for TOT measurement. Furthermore, the TDC measurement dead time is about two clock cycles, which makes it suitable for applications with higher physics event rates. Supported by National Natural Science Foundation of China (11079003, 10979003)
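    The relationship between the two edge timestamps, the TOT value, and the time-walk correction described above can be sketched in a few lines. This is a hypothetical software model: the timestamps, the linear correction, and the coefficient `k` are illustrative stand-ins, not the paper's calibration.

```python
def tot_measure(t_lead_ps: int, t_trail_ps: int) -> int:
    """Time over threshold (ps): interval between leading and trailing crossings."""
    return t_trail_ps - t_lead_ps

def walk_corrected(t_lead_ps: int, tot_ps: int, k: float = 0.05) -> float:
    """Hypothetical linear time-walk correction.

    Small pulses cross a fixed threshold later than large ones; since TOT
    tracks pulse amplitude, a TOT-dependent offset can compensate. The sign
    and slope here are illustrative; a real system uses measured calibration.
    """
    return t_lead_ps + k * tot_ps

lead, trail = 1000, 13500          # example edge timestamps in ps
tot = tot_measure(lead, trail)
print(tot, walk_corrected(lead, tot))   # → 12500 1625.0
```

The point of the paper's single-channel scheme is that both `t_lead_ps` and `t_trail_ps` come from one TDC input, instead of burning two channels per signal.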

  9. FPGA Implementation of Heart Rate Monitoring System.

    PubMed

    Panigrahy, D; Rakshit, M; Sahu, P K

    2016-03-01

    This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the electrocardiogram (ECG) signal. After heart rate calculation, tachycardia, bradycardia, or a normal heart rate can easily be detected. ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks in the ECG signal. Providing a portable, continuous heart rate monitoring system for patients using ECG requires dedicated hardware. FPGAs provide easy testability and allow faster implementation and verification of a new design. We have proposed a five-stage methodology using basic VHDL blocks for addition, multiplication, and data conversion (real to fixed-point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the 48 first-channel ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94%, and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods on pathological ECG signals and has been successfully implemented in an FPGA. PMID:26643079
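    The chain from R-peak detection to a heart rate figure is simple enough to sketch end to end. The detector below is a generic threshold-plus-local-maximum scheme on a synthetic spike train, not the authors' five-stage VHDL pipeline; the sampling rate and threshold are assumed for illustration.

```python
import numpy as np

def heart_rate(ecg: np.ndarray, fs: float, thresh: float) -> float:
    """Heart rate in BPM from R peaks above `thresh` (illustrative detector)."""
    # Local maxima: strictly above the left neighbour, at or above the right,
    # and above the amplitude threshold.
    peaks = np.flatnonzero(
        (ecg[1:-1] > thresh) & (ecg[1:-1] > ecg[:-2]) & (ecg[1:-1] >= ecg[2:])
    ) + 1
    rr = np.diff(peaks) / fs            # R-R intervals in seconds
    return 60.0 / rr.mean()             # mean R-R interval → beats per minute

# Synthetic "ECG": one unit spike per second at 250 Hz sampling → 60 BPM.
fs = 250.0
ecg = np.zeros(int(5 * fs))
ecg[::int(fs)] = 1.0
print(heart_rate(ecg, fs, thresh=0.5))  # → 60.0
```

Real ECG needs band-pass filtering and an adaptive threshold before this step (as in Pan-Tompkins-style detectors); the fixed threshold here only works because the synthetic signal is noise-free.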

  10. Implementing a Digital Phasemeter in an FPGA

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2008-01-01

    Firmware for implementing a digital phasemeter within a field-programmable gate array (FPGA) has been devised. In the original application of this firmware, the phase that one seeks to measure is the difference between the phases of two nominally-equal-frequency heterodyne signals generated by two interferometers. In that application, zero-crossing detectors convert the heterodyne signals to trains of rectangular pulses, the two pulse trains are fed to a fringe counter (the major part of the phasemeter) controlled by a clock signal having a frequency greater than the heterodyne frequency, and the fringe counter computes a time-averaged estimate of the difference between the phases of the two pulse trains. The firmware also does the following: Causes the FPGA to compute the frequencies of the input signals; Causes the FPGA to implement an Ethernet (or equivalent) transmitter for readout of phase and frequency values; and Provides data for use in diagnosis of communication failures. The readout rate can be set, by programming, to a value between 250 Hz and 1 kHz. Network addresses can be programmed by the user.
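    The fringe-counting idea above (two zero-crossing pulse trains sampled by a faster clock, phase from the offset between corresponding edges) can be modeled in software. The pairing of edges and the modulo averaging below are illustrative assumptions, not the firmware's exact method.

```python
import numpy as np

def rising_edges(bits: np.ndarray) -> np.ndarray:
    """Sample indices of 0→1 transitions in a clocked pulse train."""
    return np.flatnonzero((bits[1:] == 1) & (bits[:-1] == 0)) + 1

def phase_diff_deg(a: np.ndarray, b: np.ndarray, samples_per_period: int) -> float:
    """Average offset between corresponding rising edges, as a phase angle."""
    ea, eb = rising_edges(a), rising_edges(b)
    n = min(len(ea), len(eb))
    # Wrap each edge offset into one period before averaging.
    lag = np.mean((eb[:n] - ea[:n]) % samples_per_period)
    return 360.0 * lag / samples_per_period

# Two square waves of 100-sample period; the second delayed by 25 samples (90°).
period, n_periods = 100, 8
t = np.arange(period * n_periods)
a = ((t % period) < period // 2).astype(int)
b = (((t - 25) % period) < period // 2).astype(int)
print(phase_diff_deg(a, b, period))   # → 90.0
```

Averaging over many edges is what gives the fringe counter sub-clock-period phase resolution, at the cost of the update rate (here the 250 Hz to 1 kHz readout mentioned above).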

  11. EXPERIENCE WITH FPGA-BASED PROCESSOR CORE AS FRONT-END COMPUTER.

    SciTech Connect

    HOFF, L.T.

    2005-10-10

    The RHIC control system architecture follows the familiar "standard model". LINUX workstations are used as operator consoles. Front-end computers are distributed around the accelerator, close to the equipment being controlled or monitored. These computers are generally based on VMEbus CPU modules running the VxWorks operating system. I/O is typically performed via the VMEbus, via PMC daughter cards (over an internal PCI bus), or via on-board I/O interfaces (Ethernet or serial). Advances in FPGA size and sophistication now permit running virtual processor "cores" within the FPGA logic, including "cores" with advanced features such as memory management. Such systems offer certain advantages over traditional VMEbus front-end computers, including tighter coupling with the FPGA logic, and therefore higher I/O bandwidth, and flexibility in packaging, possibly resulting in a lower-noise environment and/or lower cost. This paper presents the experience acquired while porting the RHIC control system to a PowerPC 405 core within a Xilinx FPGA for use in low-level RF control.

  12. Ground Level Observations of a Possible Downward-Beamed TGF during a Rocket-Triggered Lightning Flash at Camp Blanding, Florida in August 2014

    NASA Astrophysics Data System (ADS)

    Bozarth, A.; Dwyer, J. R.; Cramer, E. S.; Rassoul, H.; Uman, M. A.; Jordan, D.; Grove, J. E.

    2015-12-01

    Ground-level high-energy observations of an August 2014 rocket-triggered lightning event at the International Center for Lightning Research and Testing (ICLRT) in Camp Blanding, Florida show a 180 µs burst of multiple-MeV photons during the latter part of the Upward Positive Leader (UPL) phase of an altitude-triggered lightning flash, following the first, truncated return stroke. Because the timing and waveform profile are atypical of x-ray emissions from lightning leaders, our observations suggest the occurrence of a downward-beamed terrestrial gamma-ray flash (TGF). Instrumentation operating during this event included a set of 16 NaI(Tl)/PMT detectors plus seven 1-m² plastic scintillation detectors spread across the 1-km² facility, with 38 additional NaI(Tl)/PMT detectors located inside the 1-inch-thick Pb-shielded x-ray camera and an x-ray spectrometer. Comparing the location and energy data from these detectors to Monte Carlo simulations of TGFs from the REAM code developed by Dwyer [2003], our analysis investigates possible TGF production regions and determines the likelihood that the observed high-energy emissions were produced by a TGF inside the thunderstorm.

  13. FPGA-Based Digital Current Switching Power Amplifiers Used in Magnetic Bearing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Zhang, Kai; Dong, Jinping

    For a traditional two-level current switching power amplifier (PA) used in a magnetic bearing system, the current ripple is significant. To improve current ripple performance, three-level amplifiers are designed, but their current control is generally based on analog and logic circuits, so the required hardware is complex and performance improvements through hardware adjustment are difficult. To solve this problem, an FPGA-based digital current switching power amplifier (DCSPA) was designed. Its current ripple is markedly smaller than that of a two-level amplifier, and its control circuit is much simpler than that of a three-level amplifier with an analog control circuit. Because of the field-programmable capability of the FPGA chip used, different control algorithms, including complex nonlinear algorithms, can easily be implemented in the amplifier and their effects compared on the same hardware.

  14. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
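    The Hilbert Transform singled out above as the centerpiece for extracting instantaneous features has a standard FFT-based software reference: build the analytic signal, then take its magnitude for the instantaneous amplitude. This is the textbook construction, not the paper's fixed-point FPGA realization; the test tone parameters are arbitrary.

```python
import numpy as np

def analytic_signal(x: np.ndarray) -> np.ndarray:
    """Analytic signal via FFT: zero out negative frequencies, double positives."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                      # DC kept as-is
    if n % 2 == 0:
        h[n // 2] = 1.0             # Nyquist bin kept as-is
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# The instantaneous amplitude (envelope) of a pure 5 Hz tone should be ~1.
fs, f = 200.0, 5.0
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * f * t)
env = np.abs(analytic_signal(x))
print(np.allclose(env, 1.0, atol=1e-6))   # → True
```

In a hardware port, the main departures from this reference are the fixed-point representation discussed in the paragraph above and block-wise processing of the time history instead of a whole-signal FFT.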

  15. Increased levels of reduced cytochrome b and mitophagy components are required to trigger nonspecific autophagy following induced mitochondrial dysfunction.

    PubMed

    Deffieu, Maika; Bhatia-Kiššová, Ingrid; Salin, Bénédicte; Klionsky, Daniel J; Pinson, Benoît; Manon, Stéphen; Camougrand, Nadine

    2013-01-15

    Mitochondria are essential organelles producing most of the energy required for the cell. A selective autophagic process called mitophagy removes damaged mitochondria, which is critical for proper cellular homeostasis; dysfunctional mitochondria can generate excess reactive oxygen species that can further damage the organelle as well as other cellular components. Although proper cell physiology requires the maintenance of a healthy pool of mitochondria, little is known about the mechanism underlying the recognition and selection of damaged organelles. In this study, we investigated the cellular fate of mitochondria damaged by the action of respiratory inhibitors (antimycin A, myxothiazol, KCN) that act on mitochondrial respiratory complexes III and IV, but have different effects with regard to the production of reactive oxygen species and increased levels of reduced cytochromes. Antimycin A and potassium cyanide effectively induced nonspecific autophagy, but not mitophagy, in a wild-type strain of Saccharomyces cerevisiae; however, low or no autophagic activity was measured in strains deficient for genes that encode proteins involved in mitophagy, including ATG32, ATG11 and BCK1. These results provide evidence for a major role of specific mitophagy factors in the control of a general autophagic cellular response induced by mitochondrial alteration. Moreover, increased levels of reduced cytochrome b, one of the components of the respiratory chain, could be the first signal of this induction pathway. PMID:23230142

  16. Pharmacological Levels of Withaferin A (Withania somnifera) Trigger Clinically Relevant Anticancer Effects Specific to Triple Negative Breast Cancer Cells

    PubMed Central

  17. Pharmacological levels of Withaferin A (Withania somnifera) trigger clinically relevant anticancer effects specific to triple negative breast cancer cells.

    PubMed

    Szarc vel Szic, Katarzyna; Op de Beeck, Ken; Ratman, Dariusz; Wouters, An; Beck, Ilse M; Declerck, Ken; Heyninck, Karen; Fransen, Erik; Bracke, Marc; De Bosscher, Karolien; Lardon, Filip; Van Camp, Guy; Vanden Berghe, Wim

    2014-01-01

    Withaferin A (WA) isolated from Withania somnifera (Ashwagandha) has recently become an attractive phytochemical under investigation in various preclinical studies for treatment of different cancer types. In the present study, a comparative pathway-based transcriptome analysis was applied in epithelial-like MCF-7 and triple negative mesenchymal MDA-MB-231 breast cancer cells exposed to different concentrations of WA which can be detected systemically in in vivo experiments. Whereas WA treatment demonstrated attenuation of multiple cancer hallmarks, the withanolide analogue Withanone (WN) did not exert any of the described effects at comparable concentrations. Pathway enrichment analysis revealed that WA targets specific cancer processes related to cell death, cell cycle and proliferation, which could be functionally validated by flow cytometry and real-time cell proliferation assays. WA also strongly decreased MDA-MB-231 invasion as determined by single-cell collagen invasion assay. This was further supported by decreased gene expression of extracellular matrix-degrading proteases (uPA, PLAT, ADAM8), cell adhesion molecules (integrins, laminins), pro-inflammatory mediators of the metastasis-promoting tumor microenvironment (TNFSF12, IL6, ANGPTL2, CSF1R) and concomitant increased expression of the validated breast cancer metastasis suppressor gene (BRMS1). In line with the transcriptional changes, nanomolar concentrations of WA significantly decreased protein levels and corresponding activity of uPA in MDA-MB-231 cell supernatant, further supporting its anti-metastatic properties. Finally, hierarchical clustering analysis of 84 chromatin writer-reader-eraser enzymes revealed that WA treatment of invasive mesenchymal MDA-MB-231 cells reprogrammed their transcription levels more similarly towards the pattern observed in non-invasive MCF-7 cells. 
In conclusion, taking into account that sub-cytotoxic concentrations of WA target multiple metastatic effectors in therapy

  18. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    NASA Astrophysics Data System (ADS)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of the IRB2400 robot. The hardware platform, which undertakes the tasks of improving the SNR and compressing data, is constructed around the high-speed image-processing capability of the FPGA. The low-level image-processing algorithms are realized by combining the FPGA fabric with the embedded CPU; the introduction of the FPGA and CPU accelerates image processing, and the embedded CPU makes the interface logic easy to design. Key techniques such as the read-write process, template matching and convolution are presented, and several modules are simulated. Finally, a comparison is carried out among implementations based on this design, on a PC and on a DSP. Because the core of the high-speed image-processing system is an FPGA chip whose function can be conveniently updated, the measurement system is, to a degree, intelligent.
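    As an illustration of the convolution stage that such an image pipeline implements, here is a minimal pure-Python model of a zero-padded sliding-window filter (a software sketch only, not the paper's FPGA design; the kernels used in the example are assumptions):

    ```python
    def convolve2d(image, kernel):
        """Naive sliding-window 2D filter with zero padding (correlation form,
        as is typical in image-processing pipelines; equivalent to convolution
        for symmetric kernels). Models one FPGA line-buffer filtering stage."""
        kh, kw = len(kernel), len(kernel[0])
        ph, pw = kh // 2, kw // 2
        h, w = len(image), len(image[0])
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc = 0
                for ky in range(kh):
                    for kx in range(kw):
                        iy, ix = y + ky - ph, x + kx - pw
                        if 0 <= iy < h and 0 <= ix < w:  # zero padding outside
                            acc += image[iy][ix] * kernel[ky][kx]
                out[y][x] = acc
        return out
    ```

    In hardware the same arithmetic is unrolled into a multiply-accumulate tree fed by line buffers, which is where the speed-up over a sequential CPU comes from.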

  19. A new Pulse-Pattern Generator based on LabVIEW FPGA

    NASA Astrophysics Data System (ADS)

    Ziegler, F.; Beck, D.; Brand, H.; Hahn, H.; Marx, G.; Schweikhard, L.

    2012-07-01

    For the control of experimental sequences composed of triggers, gates and delays, a Pulse-Pattern Generator (PPG) has been developed based on a Field Programmable Gate Array (FPGA) addressed in a LabVIEW environment. It allows a highly reproducible timing of measurement procedures by up to 64 individual channels, with pulse and delay periods from the nanosecond to the minute range. The PPG has been implemented in the context of the development of a new control system for the ClusterTrap setup, an ion storage device for atomic-cluster research, in close contact with the SHIPTRAP and ISOLTRAP collaborations at GSI and CERN, respectively. As the new PPG is not ion-trap specific, it can be employed in any experiment based on sequences of triggers, pulses and delays.
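    The per-channel behavior of such a generator reduces to a delay counter followed by a width counter. A hypothetical software model (the `(delay, width)` specification format is an assumption for illustration, not the PPG's actual interface):

    ```python
    def render_pattern(channels, horizon_ns):
        """Render each channel's (delay_ns, width_ns) spec into a per-nanosecond
        0/1 pattern, modeling the FPGA delay/width counters of a pulse-pattern
        generator (illustrative sketch only)."""
        waves = {}
        for name, (delay, width) in channels.items():
            waves[name] = [1 if delay <= t < delay + width else 0
                           for t in range(horizon_ns)]
        return waves
    ```

    Because every channel is derived from the same clocked counters, the relative timing between channels is exactly reproducible from run to run, which is the property the experiment relies on.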

  20. FPGA ROM Code for Very Large FIFO Control

    Energy Science and Technology Software Center (ESTSC)

    1995-02-22

    The code is used to program a Field Programmable Gate Array (FPGA) that controls a 4-megabit FIFO so that a set delay from input to output is maintained. The FPGA is also capable of inserting errors into the data flow in a controlled manner.
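    The fixed input-to-output delay described here can be modeled as a FIFO clocked once per sample: each clock reads the oldest entry and writes the newest, so the output always lags the input by the FIFO depth. A minimal behavioral sketch (names and the LSB-flip error injection are illustrative assumptions, not the ROM code itself):

    ```python
    from collections import deque

    class DelayFifo:
        """Behavioral model of a FIFO holding a fixed input-to-output delay,
        with optional controlled error injection (flips the sample's LSB)."""
        def __init__(self, delay, fill=0):
            # Pre-fill so the first `delay` outputs are the fill value.
            self.buf = deque([fill] * delay, maxlen=delay)

        def clock(self, sample, inject_error=False):
            out = self.buf[0]                 # oldest entry, written `delay` clocks ago
            self.buf.append(sample ^ 1 if inject_error else sample)
            return out
    ```

    With `maxlen` set, each `append` discards the leftmost element, so reading `buf[0]` before appending yields exactly the sample inserted `delay` clocks earlier.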

  1. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Gromov, Konstantin G.; Konefat, Edward H.

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of a Field Programmable Gate Array (FPGA) used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and control storage of telemetry data from multiple sensors throughout launch, ascent, deployment and descent phases of the subsonic parachute test.

  2. FPGA development for high altitude subsonic parachute testing

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.; Konefat, Edward H.; Gromov, Konstantin G.

    2005-01-01

    This paper describes a rapid, top down requirements-driven design of an FPGA used in an Earth qualification test program for a new Mars subsonic parachute. The FPGA is used to process and store data from multiple sensors at multiple rates during launch, ascent, deployment and descent phases of the subsonic parachute test.

  3. XDELAY. FPGA ROM Code for Very Large FIFO Control

    SciTech Connect

    Pratt, T.J.

    1994-01-01

    The code is used to program a Field Programmable Gate Array (FPGA) that controls a 4-megabit FIFO so that a set delay from input to output is maintained. The FPGA is also capable of inserting errors into the data flow in a controlled manner.

  4. Algorithm and implementation of muon trigger and data transmission system for barrel-endcap overlap region of the CMS detector

    NASA Astrophysics Data System (ADS)

    Zabolotny, W. M.; Byszuk, A.

    2016-03-01

    The CMS experiment Level-1 trigger system is undergoing an upgrade. In the barrel-endcap transition region, it is necessary to merge data from three types of muon detectors: RPC, DT and CSC. The Overlap Muon Track Finder (OMTF) uses a novel approach to concentrate and process those data in a uniform manner to identify muons and their transverse momentum. The paper presents the algorithm and FPGA firmware implementation of the OMTF and its data transmission system in CMS. It is foreseen that the OMTF will be subject to significant changes resulting from optimization, which will be done with the aid of physics simulations. Therefore, a special, high-level, parameterized HDL implementation is necessary.

  5. Seismically Initiated Carbon Dioxide Gas Bubble Growth in Groundwater: A Mechanism for Co-seismic Borehole Water Level Rise and Remotely Triggered Secondary Seismicity

    NASA Astrophysics Data System (ADS)

    Crews, Jackson B.

    of freshwater. Co-seismic borehole water level increases of the same magnitude were observed in Parkfield, California, and Long Valley caldera, California, in response to the propagation of a Rayleigh wave in the same amplitude and frequency range produced by the June 28, 1992 MW 7.3 Landers, California, earthquake. Co-seismic borehole water level rise is well documented in the literature, but the mechanism is not well understood, and the results of core-scale experiments indicate that seismically initiated CO2 gas bubble nucleation and growth in groundwater is a reasonable mechanism. Remotely triggered secondary seismicity is also well documented, and the reduction of effective stress due to CO2 bubble nucleation and growth in critically loaded faults may potentially explain how, for example, the June 28, 1992 MW 7.3 Landers, California, earthquake triggered seismicity as far away as Yellowstone, Wyoming, 1250 km from the hypocenter. A numerical simulation was conducted using Euler's method and a first-order kinetic model to compute the pore fluid pressure response to confining stress excursions on a Berea sandstone core flooded with initially under-saturated aqueous CO2. The model was calibrated on the pore pressure response to a rapid drop and later recovery of the confining stress. The model predicted decreasing overpressure as the confining stress oscillation frequency increased from 0.05 Hz to 0.30 Hz, in contradiction with the experimental results and field observations, which exhibit larger excess pore fluid pressure in response to higher frequency oscillations. The limitations of the numerical model point to the important influence of non-ideal behavior arising from a discontinuous gas phase and complex dynamics at the gas-liquid interface.

  6. Explicit Design of FPGA-Based Coprocessors for Short-Range Force Computations in Molecular Dynamics Simulations *†

    PubMed Central

    Gu, Yongfeng; VanCourt, Tom; Herbordt, Martin C.

    2008-01-01

    FPGA-based acceleration of molecular dynamics (MD) simulations has been the subject of several recent studies. The short-range force computation, which dominates the execution time, is the primary focus. Here we present a high level of FPGA-specific design, including cell lists, systematically determined interpolation and precision, handling of exclusions, and support for MD simulations of up to 256K particles. The target system consists of a standard PC with a 2004-era COTS FPGA board. There are several innovations: new microarchitectures for several major components, including the cell list processor and the off-chip memory controller, and a novel arithmetic mode. Extensive experimentation was required to optimize precision, interpolation order, interpolation mode, table sizes, and simulation quality. We obtain a substantial speed-up over a highly tuned production MD code. PMID:19412319
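    The cell-list idea the abstract mentions prunes the short-range pair search from O(N²) to roughly O(N): particles are binned into cells of at least the cutoff size, so only the 27 neighboring cells need scanning. A pure-Python sketch under simplifying assumptions (cubic box, no periodic boundaries — not the paper's FPGA microarchitecture):

    ```python
    import math

    def neighbor_pairs(positions, box, cutoff):
        """Find all particle pairs closer than `cutoff` using a cell list,
        the pruning structure used before the short-range force evaluation."""
        n_cells = max(1, int(box // cutoff))     # cells at least `cutoff` wide
        size = box / n_cells
        cells = {}
        for i, (x, y, z) in enumerate(positions):
            cells.setdefault((int(x / size), int(y / size), int(z / size)), []).append(i)
        pairs = set()
        for (cx, cy, cz), members in cells.items():
            for dx in (-1, 0, 1):                # scan the 27 neighboring cells
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in cells.get((cx + dx, cy + dy, cz + dz), []):
                            for i in members:
                                if i < j and math.dist(positions[i], positions[j]) < cutoff:
                                    pairs.add((i, j))
        return pairs
    ```

    On the FPGA the same structure lets the pipeline stream candidate pairs at a fixed rate instead of testing every pair in the system.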

  7. Study on FPGA SEU Mitigation for the Readout Electronics of DAMPE BGO Calorimeter in Space

    NASA Astrophysics Data System (ADS)

    Shen, Zhongtao; Feng, Changqing; Gao, Shanshan; Zhang, Deliang; Jiang, Di; Liu, Shubin; An, Qi

    2015-06-01

    The BGO calorimeter, which provides a wide measurement range of the primary cosmic ray spectrum, is a key sub-detector of the Dark Matter Particle Explorer (DAMPE). The readout electronics of the calorimeter consist of 16 Actel ProASIC Plus flash-based FPGAs, whose design-level flip-flops and embedded block RAMs are sensitive to single event upsets (SEU) in the harsh space environment. Therefore, to comply with radiation hardness assurance (RHA), SEU mitigation methods including partial triple modular redundancy (TMR), CRC checksums and multi-domain reset were analyzed and validated in a heavy-ion beam test. Built from multi-level redundancy, an FPGA design with SEU tolerance and low resource consumption is implemented for the readout electronics.
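    The core of TMR is a bitwise 2-of-3 majority vote over three copies of the same register: a single-event upset flipping bits in one copy is outvoted by the other two. A generic sketch (not DAMPE's specific firmware):

    ```python
    def tmr_vote(a, b, c):
        """Bitwise 2-of-3 majority voter over three redundant register copies.
        Any bit flipped in exactly one copy is corrected by the other two."""
        return (a & b) | (a & c) | (b & c)
    ```

    In a full TMR flow the voter output is also fed back to resynchronize the corrupted copy, so upsets do not accumulate between scrub cycles.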

  8. Firearm trigger assembly

    DOEpatents

    Crandall, David L.; Watson, Richard W.

    2010-02-16

    A firearm trigger assembly for use with a firearm includes a trigger mounted to a forestock of the firearm so that the trigger is movable between a rest position and a triggering position by a forwardly placed support hand of a user. An elongated trigger member operatively associated with the trigger operates a sear assembly of the firearm when the trigger is moved to the triggering position. An action release assembly operatively associated with the firearm trigger assembly and a movable assembly of the firearm prevents the trigger from being moved to the triggering position when the movable assembly is not in the locked position.

  9. Analog and digital FPGA implementation of BRIN for optimization problems.

    PubMed

    Ng, H S; Lam, K P

    2003-01-01

    The binary relation inference network (BRIN) shows promise in obtaining the global optimal solution for optimization problems in a time that is independent of the problem size. However, the realization of this method depends on the implementation platform. We studied analog and digital FPGA implementation platforms. Analog implementation of BRIN for two different directed graph problems is studied. As transitive closure problems can be transformed into a special case of shortest path problems or a special case of maximum spanning tree problems, two different forms of BRIN are discussed, and their circuits using common analog integrated circuits are investigated. The BRIN solution for critical path problems is expressed and implemented using both a separated building block circuit and a combined building block circuit; as these circuits differ, so do the response times of the resulting networks. The advancement of field programmable gate arrays (FPGAs) in recent years, allowing millions of gates on a single chip and accompanied by high-level design tools, has enabled the implementation of very complex networks. Freed from manual circuit construction and supported by an efficient design platform, the BRIN architecture can be built in a much more efficient way. Bandwidth problems are removed by taking all previously external connections inside the chip. By transferring BRIN to FPGAs (Xilinx XC4010XL and XCV800 Virtex), we implement a synchronous network that computes in a finite number of steps. Two case studies are presented, with correct results verified by simulation. A study of resource consumption on FPGAs shows that Virtex devices are more suitable for expanding the network in future developments. PMID:18244587
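    The digital BRIN converges by having every node update synchronously each clock from its neighbors' values. A conventional software analogue of such synchronous relaxation for the shortest-path case is Bellman-Ford with simultaneous rounds (a plain sketch, not the BRIN circuit itself):

    ```python
    def shortest_paths(n, edges, source):
        """Synchronous relaxation over a weighted directed graph: all node
        distances update in lockstep each round, mimicking a clocked network.
        n-1 rounds suffice for convergence without negative cycles."""
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):
            new = dist[:]                      # all updates read the old state
            for u, v, w in edges:
                if dist[u] + w < new[v]:
                    new[v] = dist[u] + w
            dist = new
        return dist
    ```

    In the FPGA network every edge relaxation runs in parallel hardware, so each "round" is one clock cycle and the finite-step convergence bound becomes a latency guarantee.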

  10. Do Drug Treatment Facilities Increase Clients’ Exposure to Potential Neighborhood-Level Triggers for Relapse? A Small-Area Assessment of a Large, Public Treatment System

    PubMed Central

    2006-01-01

    Research on drug treatment facility locations has focused narrowly on the issue of geographic proximity to clients. We argue that neighborhood conditions should also enter into the facility location decision and illustrate a formal assessment of neighborhood conditions at facilities in a large, metropolitan area, taking into account conditions clients already face at home. We discuss choice and construction of small-area measures relevant to the drug treatment context, including drug activity, disadvantage, and violence as well as statistical comparisons of clients’ home and treatment locations with respect to these measures. Analysis of 22,707 clients discharged from 494 community-based outpatient and residential treatment facilities that received public funds during 1998–2000 in Los Angeles County revealed no significant mean differences between home and treatment neighborhoods. However, up to 20% of clients are exposed to markedly higher levels of disadvantage, violence, or drug activity where they attend treatment than where they live, suggesting that it is not uncommon for treatment locations to increase clients’ exposure to potential environmental triggers for relapse. Whereas on average both home and treatment locations exhibit higher levels of these measures than the household locations of the general population, substantial variability in public treatment clients’ home neighborhoods calls into question the notion that they hail exclusively from poor, high drug activity areas. Shortcomings of measures available for neighborhood assessment of treatment locations and implications of the findings for other areas of treatment research are also discussed. PMID:16736365

  11. CORDIC algorithms for SVM FPGA implementation

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamel Rivera, Horacio; Jiménez, Matías

    2010-04-01

    Support Vector Machines are currently one of the best classification algorithms used in a wide number of applications. The ability to extract a classification function from a limited number of learning examples while keeping the structural risk low has demonstrated to be a clear alternative to other neural networks. However, the calculations involved in computing the kernel, and the repetition of the process for all support vectors in the classification problem, are certainly intensive, requiring time or power in order to function correctly. This can be a drawback in applications with limited resources or strict time constraints, so simple algorithms circumventing this problem are needed. In this paper we analyze an FPGA implementation of an SVM which uses a CORDIC algorithm to simplify the calculation of a specific kernel, greatly reducing the time and hardware requirements needed for classification and allowing for powerful in-field portable applications. The algorithm and its calculation capabilities are described. The full SVM classifier using this algorithm is implemented in an FPGA and its in-field use assessed for high-speed, low-power classification.
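    CORDIC evaluates transcendental functions with only shifts, adds and a small angle table, which is why it maps so cheaply onto FPGA fabric. A generic circular-mode sketch computing sin/cos (floating point for clarity; the paper's fixed-point datapath and its specific kernel are not reproduced here):

    ```python
    import math

    def cordic_sincos(theta, iterations=24):
        """Circular-mode CORDIC: rotate (K, 0) by theta using fixed micro-
        rotations of atan(2^-i). Hardware needs only shifts, adds and a table.
        Valid for |theta| <~ 1.74 rad without argument reduction."""
        K = 1.0
        for i in range(iterations):            # pre-scale by the CORDIC gain
            K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = K, 0.0, theta
        for i in range(iterations):
            d = 1.0 if z >= 0 else -1.0        # steer the residual angle to 0
            x, y, z = (x - d * y * 2.0 ** -i,
                       y + d * x * 2.0 ** -i,
                       z - d * math.atan(2.0 ** -i))
        return y, x                            # (sin(theta), cos(theta))
    ```

    Each iteration adds roughly one bit of precision, so the iteration count is a direct area/accuracy knob in hardware.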

  12. FPNA: interaction between FPGA and neural computation.

    PubMed

    Girau, B

    2000-06-01

    Neural networks are usually considered naturally parallel computing models. But the number of operators and the complex connection graph of standard neural models cannot be directly handled by digital hardware devices. More particularly, several works show that programmable digital hardware is a real opportunity for flexible hardware implementations of neural networks. And yet many area and topology problems arise when standard neural models are implemented on programmable circuits such as FPGAs, so that fast FPGA technology improvements cannot be fully exploited. Neural network hardware implementations therefore need to reconcile simple hardware topologies with complex neural architectures. The theoretical and practical framework developed here allows this combination by applying some principles of configurable hardware to neural computation: Field Programmable Neural Arrays (FPNA) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme. This paper shows how FPGAs have led to the definition of the FPNA computation paradigm, and how FPNAs contribute to current and future FPGA-based neural implementations by solving the general problems raised by the implementation of complex neural networks on FPGAs. PMID:11011795

  13. Time Triggered Protocol (TTP) for Integrated Modular Avionics

    NASA Technical Reports Server (NTRS)

    Motzet, Guenter; Gwaltney, David A.; Bauer, Guenther; Jakovljevic, Mirko; Gagea, Leonard

    2006-01-01

    Traditional avionics computing systems are federated, with each system provided on a number of dedicated hardware units. Federated applications are physically separated from one another and analysis of the systems is undertaken individually. Integrated Modular Avionics (IMA) takes these federated functions and integrates them on a common computing platform in a tightly deterministic distributed real-time network of computing modules in which the different applications can run. IMA supports different levels of criticality in the same computing resource and provides a platform for implementation of fault tolerance through hardware and application redundancy. Modular implementation has distinct benefits in design, testing and system maintainability. This paper covers the requirements for fault tolerant bus systems used to provide reliable communication between IMA computing modules. An overview of the Time Triggered Protocol (TTP) specification and implementation as a reliable solution for IMA systems is presented. Application examples in aircraft avionics and a development system for future space application are covered. The commercially available TTP controller can also be implemented in an FPGA and the results from implementation studies are covered. Finally, future directions for the application of TTP and related development activities are presented.
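    Determinism in a time-triggered bus comes from a static TDMA round: every node derives its transmit slot purely from the global time base, so there is no runtime arbitration. A toy model of slot lookup (the schedule format, slot lengths and node names are illustrative assumptions, not TTP's actual Message Descriptor List):

    ```python
    def tdma_slot(time_us, schedule):
        """Return which node owns the bus at `time_us` under a static TDMA
        round given as [(node, slot_length_us), ...]. Every node computes the
        same answer from the shared time base, so no arbitration is needed."""
        round_len = sum(length for _, length in schedule)
        t = time_us % round_len                # position within the round
        for node, length in schedule:
            if t < length:
                return node
            t -= length
    ```

    Because the schedule is fixed at design time, worst-case message latency is known exactly, which is what makes the bus analyzable for certification.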

  14. The NA62 trigger system

    NASA Astrophysics Data System (ADS)

    Krivda, M.; NA62 Collaboration

    2013-08-01

    The main aim of the NA62 experiment (NA62 Technical Design Report, [1]) is to study ultra-rare kaon decays. In order to select rare events over the overwhelming background, central systems with high performance, high bandwidth, flexibility and configurability are necessary, minimizing dead time while maximizing data collection reliability. The NA62 experiment consists of 12 sub-detector systems and several trigger and control systems, for a total channel count of less than 100,000. The GigaTracKer (GTK) has the largest number of channels (54,000), and the Liquid Krypton (LKr) calorimeter shares with it the largest raw data rate (19 GB/s). The NA62 trigger system works with three trigger levels. The first trigger level is based on a hardware central trigger unit, the so-called L0 Trigger Processor (L0TP), and Local Trigger Units (LTU), which are all located in the experimental cavern. The other two trigger levels are implemented in software on a computer farm located on the surface. The L0TP receives information from triggering sub-detectors asynchronously via Ethernet; it processes the information and then transmits a final trigger decision synchronously to each sub-detector through the Trigger and Timing Control (TTC) system. The interface between the L0TP and the TTC system, which is used for trigger and clock distribution, is provided by the Local Trigger Unit (LTU) board. The LTU can work in two modes: global and stand-alone. In global mode, the LTU provides an interface between the L0TP and the TTC system. In stand-alone mode, the LTU can fully emulate the L0TP and so provides each sub-detector with an independent means of testing or calibration. In addition to the emulation functionality, a further function is implemented that allows the clock of the LTU to be synchronized with the L0TP and the TTC system. For testing and debugging purposes, a Snap Shot Memory (SSM) interface is implemented, that can work

  15. Integration design of FPGA software for a miniaturizing CCD remote sensing camera

    NASA Astrophysics Data System (ADS)

    Yin, Na; Li, Qiang; Rong, Peng; Lei, Ning; Wan, Min

    2014-09-01

    The video signal processor (VSP) is an important part of CCD remote sensing cameras and the key to their light, miniaturized design. FPGAs are applied to improve the level of integration and simplify the video signal processor circuit. This paper introduces in detail an integrated FPGA software design for the video signal processor of a space remote sensing camera. The design integrates CCD timing control, integration time control, CCD data formatting, and CCD image processing and correction on a single FPGA chip, which solves the miniaturization problem for video signal processors in remote sensing cameras. This camera has already been launched successfully and has obtained high-quality remote sensing images, contributing to the miniaturization of remote sensing cameras.

  16. The GBT-FPGA core: features and challenges

    NASA Astrophysics Data System (ADS)

    Barros Marin, M.; Baron, S.; Feger, S. S.; Leitao, P.; Lupu, E. S.; Soos, C.; Vichoudis, P.; Wyllie, K.

    2015-03-01

    Initiated in 2009 to emulate the GBTX (Gigabit Transceiver) serial link and test the first GBTX prototypes, the GBT-FPGA project is now a full library, targeting FPGAs (Field Programmable Gate Array) from Altera and Xilinx, allowing the implementation of one or several GBT links of two different types: "Standard" or "Latency-Optimized". The first major version of this IP Core was released in April 2014. This paper presents the various flavours of the GBT-FPGA kit and focuses on the challenge of providing a fixed and deterministic latency system both for clock and data recovery for all FPGA families.

  17. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
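    The key move of the distributed algorithm is computing hysteresis thresholds per block from the local gradient-magnitude distribution instead of frame-level statistics. A simplified percentile-based sketch (the percentile and low/high ratio parameters are illustrative assumptions, not the paper's block-type classification):

    ```python
    def block_thresholds(grad_block, high_pct=0.8, low_ratio=0.4):
        """Per-block adaptive Canny hysteresis thresholds: the high threshold
        is a percentile of the block's gradient magnitudes, the low threshold
        a fixed fraction of it. Replaces frame-level statistics so blocks can
        be processed independently and in parallel."""
        mags = sorted(m for row in grad_block for m in row)
        high = mags[min(len(mags) - 1, int(high_pct * len(mags)))]
        low = low_ratio * high
        return low, high
    ```

    Because each block needs only its own histogram, the latency becomes a function of the block size rather than the frame size, which is what enables the 32-engine parallel FPGA architecture.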

  18. Modulation of TGFbeta 2 levels by lamin A in U2-OS osteoblast-like cells: understanding the osteolytic process triggered by altered lamins

    PubMed Central

    Evangelisti, Camilla; Bernasconi, Pia; Cavalcante, Paola; Cappelletti, Cristina; D'Apice, Maria Rosaria; Sbraccia, Paolo; Novelli, Giuseppe; Prencipe, Sabino; Lemma, Silvia; Baldini, Nicola; Avnet, Sofia; Squarzoni, Stefano; Martelli, Alberto M.; Lattanzi, Giovanna

    2015-01-01

    Transforming growth factor beta (TGFbeta) plays an essential role in bone homeostasis and deregulation of TGFbeta occurs in bone pathologies. Patients affected by Mandibuloacral Dysplasia (MADA), a progeroid disease linked to LMNA mutations, suffer from an osteolytic process. Our previous work showed that MADA osteoblasts secrete excess amount of TGFbeta 2, which in turn elicits differentiation of human blood precursors into osteoclasts. Here, we sought to determine how altered lamin A affects TGFbeta signaling. Our results show that wild-type lamin A negatively modulates TGFbeta 2 levels in osteoblast-like U2-OS cells, while the R527H mutated prelamin A as well as farnesylated prelamin A do not, ultimately leading to increased secretion of TGFbeta 2. TGFbeta 2 in turn, triggers the Akt/mTOR pathway and upregulates osteoprotegerin and cathepsin K. TGFbeta 2 neutralization rescues Akt/mTOR activation and the downstream transcriptional effects, an effect also obtained by statins or RAD001 treatment. Our results unravel an unexpected role of lamin A in TGFbeta 2 regulation and indicate rapamycin analogs and neutralizing antibodies to TGFbeta 2 as new potential therapeutic tools for MADA. PMID:25823658

  19. Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts.

    PubMed

    Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill

    2016-07-01

    Pyroshock can cause an aerospace structure to fail in its objective by damaging its sensitive electronic equipment, which is responsible for performing decisive operations. A pyroshock is the high-intensity shock wave generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology to measure a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs). These LDVs detect the pyroshock wave generated due to an explosive-based pyrotechnical event. Field programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement. The firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated using explosive excitation on a 22-mm-thick steel plate. Three LDVs were deployed to capture the pyroshock wave at different points. The pyroshocks captured were displayed as acceleration plots. The results showed that our system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303 741 g. The contribution of this paper is a specialized firmware architecture programmed in FPGA for the acquisition of large amounts of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests. PMID:27475551

  20. Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts

    NASA Astrophysics Data System (ADS)

    Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill

    2016-07-01

    Pyroshock can cause an aerospace structure to fail in its objective by damaging its sensitive electronic equipment, which is responsible for performing decisive operations. A pyroshock is the high-intensity shock wave generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology to measure a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs). These LDVs detect the pyroshock wave generated due to an explosive-based pyrotechnical event. Field programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement. The firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated using explosive excitation on a 22-mm-thick steel plate. Three LDVs were deployed to capture the pyroshock wave at different points. The pyroshocks captured were displayed as acceleration plots. The results showed that our system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303 741 g. The contribution of this paper is a specialized firmware architecture programmed in FPGA for the acquisition of large amounts of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.

  1. Multi-variants synthesis of Petri nets for FPGA devices

    NASA Astrophysics Data System (ADS)

    Bukowiec, Arkadiusz; Doligalski, Michał

    2015-09-01

A new method for the synthesis of application-specific logic controllers for FPGA devices is presented. The control algorithm is specified with a control-interpreted Petri net (PT type), which makes it easy to specify parallel processes. The Petri net is decomposed into state-machine-type subnets, each representing one parallel process; Petri net coloring algorithms are applied for this purpose. Two approaches to this decomposition are presented: with doublers of macroplaces, or with one global wait place. Next, the subnets are implemented in a two-level logic circuit of the controller, whose levels are obtained by architectural decomposition. The first-level combinational circuit is responsible for generating next places, and the second-level decoder is responsible for generating output symbols. Two variants of such circuits are worked out: with one shared operational memory, or with many flexible distributed memories acting as the decoder. The variants of Petri net decomposition and the structures of logic circuits can be combined without restriction, which leads to four variants of multi-variant synthesis.

  2. Stego on FPGA: an IWT approach.

    PubMed

    Ramalingam, Balakrishnan; Amirtharajan, Rengarajan; Rayappan, John Bosco Balaguru

    2014-01-01

A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar IWT was used to separate 8 × 8 pixel blocks into the LL, LH, HL, and HH subbands, and the encrypted secret data is hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. For each block, either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lower mean square error (MSE) and the higher peak signal-to-noise ratio (PSNR). The record of the random walk chosen for every block is registered, and this record constitutes the secret key. The system took 1.6 µs to embed the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794
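The subband split described above comes from a forward integer Haar lifting step applied to rows and then columns. A minimal sketch is below; the floor-rounding and subband ordering conventions are assumptions for illustration, not taken from the paper:

```python
def haar_iwt_1d(row):
    # Integer (lifting) Haar step: truncated average and difference
    # for each sample pair; exactly invertible with integer arithmetic.
    s = [(row[i] + row[i + 1]) >> 1 for i in range(0, len(row), 2)]
    d = [row[i] - row[i + 1] for i in range(0, len(row), 2)]
    return s + d

def haar_iwt_2d(block):
    # Rows first, then columns; the result partitions into four 4x4
    # subbands: LL (top-left), LH, HL, and HH.
    rows = [haar_iwt_1d(r) for r in block]
    cols = [haar_iwt_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A simple 8x8 gradient block: pixel value = column + 8 * row.
block = [[i + 8 * j for i in range(8)] for j in range(8)]
coeffs = haar_iwt_2d(block)
ll = [r[:4] for r in coeffs[:4]]  # LL subband; LH/HL/HH carry the data
```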

  3. Wire Position Monitoring with FPGA based Electronics

    SciTech Connect

    Eddy, N.; Lysenko, O.; /Fermilab

    2009-01-01

This fall the first Tesla-style cryomodule cooldown test is being performed at Fermilab, and the Instrumentation Department is preparing the electronics to handle the data from a set of wire position monitors (WPMs). The latest cryomodule generation has a single chain of seven WPMs, placed in critical positions (at each end, at the three posts, and between the posts), to monitor cold-mass displacement during cooldown. The system was developed in Italy in collaboration with DESY, and similar developments have taken place at Fermilab in the frame of cryomodule construction for SCRF research. For simulation purposes a prototype pipe with a WPM has been developed and built (figure 1). The system is based on the measurement of signals induced in pickups by a 320 MHz signal carried by a wire through the WPM. The 0.5 mm diameter Cu wire is stretched along the pipe with a tensioning load of 9.07 kg and has a length of 1.1 m. The WPM consists of four 50 Ω striplines spaced 90° apart. An FPGA-based digitizer scans the WPM and transmits the data to a PC via a VME interface; data acquisition is based on the PC running LabView. To increase the accuracy and convenience of the measurements, two modifications were required: first, the implementation of an averaging and decimation filter algorithm in the integrator operation in the FPGA; second, the development of an alternative tool for WPM measurements on the PC. The paper describes how these modifications were performed and presents test results of the new design.
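With four striplines spaced 90° apart, a wire offset in each axis is conventionally estimated from the amplitudes of the opposing pickups with a difference-over-sum formula. A minimal sketch follows; the calibration constant `k` and the amplitude values are hypothetical, not from this system:

```python
def wire_offset(a_plus, a_minus, k=1.0):
    # Difference-over-sum position estimate from a pair of opposing
    # stripline amplitudes; k is a hypothetical calibration constant
    # that converts the dimensionless ratio to a displacement.
    return k * (a_plus - a_minus) / (a_plus + a_minus)

x = wire_offset(1.05, 0.95)  # horizontal pair: wire offset toward +x
y = wire_offset(1.00, 1.00)  # vertical pair: centered wire -> 0.0
```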

  4. FPGA implementation of VXIbus interface hardware.

    PubMed

    Mehta, K; Rajesh, V A; Veeraswamy, S

    1993-01-01

The HP E1399A development card is a B-size, register-based device that can be used to simplify the development of simple, custom VXIbus instruments. The E1399A provides interface logic that buffers a 16-bit bidirectional data bus and performs other functions required by the VXIbus standard. However, the amount of interface logic required is high enough to substantially reduce the breadboard area that is available to the user. This paper reports on the evaluation of field programmable gate array (FPGA) technology for the implementation of the VXIbus interface circuitry. Using FPGAs (Xilinx), all the logic of the E1399A can be fit into at most two low-cost gate array packages, with an attendant savings in board space. This results in a reliable design that provides the interface between the VXIbus and the user's custom circuitry. PMID:8329634

  5. Martian dust devils detector over FPGA

    NASA Astrophysics Data System (ADS)

    de Lucas, E.; Miguel, M. J.; Mozos, D.; Vázquez, L.

    2012-04-01

    Digital applications that must be on-board space missions must comply with a very restrictive set of requirements. These include energy efficiency, small volume and weight, robustness and high performance. Moreover, these circuits cannot be repaired in case of error, so they must be reliable or provide some way to recover from errors. These features make reconfigurable hardware (FPGAs, Field Programmable Gate Arrays) a very suitable technology to be used in space missions. This paper presents a Martian dust devil detector implemented on an FPGA. The results show that a hardware implementation of the algorithm presents very good numbers in terms of performance compared with the software version. Moreover, as the amount of time needed to perform all the computations on the reconfigurable hardware is small, this hardware can be used most of the time to realize other applications.

  6. Martian dust devils detector over FPGA

    NASA Astrophysics Data System (ADS)

    de Lucas, E.; Miguel, M. J.; Mozos, D.; Vázquez, L.

    2011-12-01

Digital applications that must be on board space missions must comply with a very restrictive set of requirements. These include energy efficiency, small volume and weight, robustness, and high performance. Moreover, these circuits cannot be repaired in case of error, so they must be reliable or provide some way to recover from errors. These features make reconfigurable hardware (FPGAs, Field Programmable Gate Arrays) a very suitable technology to be used in space missions. This paper presents a Martian dust devil detector implemented on an FPGA. The results show that a hardware implementation of the algorithm presents very good numbers in terms of performance compared with the software version. Moreover, as the amount of time needed to perform all the computations on the reconfigurable hardware is small, this hardware can be used most of the time to realize other applications.

  7. FPGA for Power Control of MSL Avionics

    NASA Technical Reports Server (NTRS)

    Wang, Duo; Burke, Gary R.

    2011-01-01

A PLGT FPGA (Field Programmable Gate Array) is included in the LCC (Load Control Card), GID (Guidance Interface & Drivers), TMC (Telemetry Multiplexer Card), and PFC (Pyro Firing Card) boards of the Mars Science Laboratory (MSL) spacecraft. (PLGT stands for PFC, LCC, GID, and TMC.) It provides the interface between the backside bus and the power drivers on these boards. The LCC drives power switches to switch power loads, and also relays. The GID drives the thrusters and latch valves, as well as providing the star-tracker and Sun-sensor interface. The PFC drives pyros, and the TMC receives digital and analog telemetry. The FPGA is implemented both in Xilinx (Spartan 3-400) and in Actel (RTSX72SU, ASX72S). The Xilinx Spartan 3 part is used for the breadboard, the Actel ASX part is used for the EM (Engineer Module), and the pin-compatible, radiation-hardened RTSX part is used for final EM and flight. The MSL spacecraft uses an FC (Flight Computer) to control power loads, relays, thrusters, latch valves, Sun-sensor, and star-tracker, and to read telemetry such as temperature. Commands are sent over a 1553 bus to the MREU (Multi-Mission System Architecture Platform Remote Engineering Unit). The MREU resends them over a remote serial command bus (c-bus) to the LCC, GID, TMC, and PFC. The MREU also sends out telemetry addresses via a remote serial telemetry address bus to the LCC, GID, TMC, and PFC, and the status is returned over the remote serial telemetry data bus.

  8. FPGA-accelerated adaptive optics wavefront control

    NASA Astrophysics Data System (ADS)

    Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.

    2014-03-01

The speed of real-time adaptive optical systems is primarily restricted by the data processing hardware and computational aspects. Furthermore, the application of mirror layouts with increasing numbers of actuators reduces the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcome this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. A custom-developed PCIe card employing a Xilinx Kintex-7 FPGA is outlined, which accelerates the analysis of the Shack-Hartmann wavefront sensor; a recently developed real-time capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA), as well as by a novel hardware implementation of the SHWFS algorithm. Further benefits come from the Streaming SIMD Extensions (SSE), which intensively use the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. With this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.

  9. A frame-based domain-specific language for rapid prototyping of FPGA-based software-defined radios

    NASA Astrophysics Data System (ADS)

    Ouedraogo, Ganda Stephane; Gautier, Matthieu; Sentieys, Olivier

    2014-12-01

The field-programmable gate array (FPGA) technology is expected to play a key role in the development of software-defined radio (SDR) platforms. While the technology itself has evolved, the low-level design methods used to prototype FPGA-based applications have not changed in decades. In the demanding context of SDR, it is important to rapidly implement new waveforms to fulfill such a stringent flexibility paradigm. To date, different proposals have defined, through software-based approaches, efficient methods to prototype SDR waveforms in a processor-based running environment. This paper describes a novel design flow for FPGA-based SDR applications. This flow relies upon high-level synthesis (HLS) principles and leverages the nascent HLS tools. Its entry point is a domain-specific language (DSL) which handles the complexity of programming an FPGA and integrates SDR features so as to enable automatic waveform control generation from a data frame model. Two waveforms (IEEE 802.15.4 and IEEE 802.11a) have been designed and explored via this new methodology, and the results are highlighted in this paper.

  10. Design and simulation of an FPGA-based printed wiring assembly

    SciTech Connect

    Eilers, D.L.

    1993-12-31

Past generations of electronic products have been constructed using relatively few (often just one) field programmable gate arrays (FPGAs) or Application Specific Integrated Circuits (ASICs) surrounded by a collection of medium- to large-scale integration parts. Today, the new generations of electronic products are becoming increasingly complex. The specification, design, and simulation of this new generation of FPGA- and ASIC-based products places additional demands on computer-aided engineering (CAE) systems. FPGA and ASIC devices offer both high pin count and high internal logic density. Both of these features serve to increase the density and functionality of the products in which they are used; however, these features also detract from the ability to debug the final hardware with conventional techniques. Fine-pitch parts with high pin counts present a great challenge to probing. The simulations done on individual designs address many of these concerns; however, when FPGAs and/or ASICs make up a significant portion of the electronics assembly, or when the interfaces between them are complicated, product-level simulation becomes very important. This paper will describe the electronic product realization process that has evolved in Department 2335 at Sandia National Laboratories. Department 2335 is a hardware development group which works to support various system development departments. The customers for these electronics products are a group of system design and integration engineers who architect and implement the final system. The following phases of the design process are described in terms of an FPGA-based product design; however, they are generally applicable to all types of electronic designs. This paper contains the bulk of the details of the design process which was utilized to develop the latest generation of electronic products.

  11. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections, and Data Burst Transfers have been used.
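For reference, the operation being accelerated is straightforward: the Euclidean distance between a pixel's spectrum and a reference spectrum, with one squared-difference term per spectral band. A minimal sketch (band values are invented for illustration):

```python
import math

def spectral_distance(pixel, reference):
    # Multi-spectral Euclidean distance: square-root of the sum of
    # squared per-band differences between two spectra.
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel, reference)))

# Hypothetical 3-band reflectance spectra.
d = spectral_distance([0.2, 0.4, 0.6], [0.2, 0.1, 0.2])
```

On an FPGA, the per-band multiply-accumulate terms can be computed in parallel, which is what makes the operation a good candidate for a dedicated circuit.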

  12. General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice

    NASA Astrophysics Data System (ADS)

    Wasfy, Wael; Zheng, Hong

Our target in this paper is to increase the speed and accuracy of fast low-level 3x3 image processing algorithms that compute image intensity with different kernels but share the same parallel calculation method. The FPGA is one of the fastest embedded platforms for implementing such algorithms; by using the DSP slice modules inside the FPGA we gain speed, accuracy, a higher number of bits in calculations, and flexibility in the equations that can be computed. Using a higher number of bits during the calculations yields higher accuracy than the same algorithm computed with fewer bits. Keeping FPGA resource usage as low as the algorithm's needs allow is another important goal, so the recommended design uses as few DSP slices as possible while benefiting from the DSP slice's 48-bit accuracy in addition and 18 x 18 bit accuracy in multiplication. To prove the design, the Gaussian filter and the Sobel-x edge detector image processing algorithms were implemented. We also compare against another design, described later in this paper, which uses at most 12-bit accuracy in its addition and multiplication, to demonstrate the improvements in calculation accuracy and speed.
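The shared structure of these 3x3 algorithms is one multiply-accumulate per kernel tap, with only the kernel (and a fixed-point normalization shift) changing between, say, a Gaussian filter and a Sobel-x detector. A behavioral sketch, not the paper's design:

```python
def conv3x3(img, kernel, shift=0):
    # One multiply-accumulate per kernel tap, as a DSP slice would do.
    # `shift` models fixed-point normalization, e.g. >> 4 for the
    # 3x3 Gaussian kernel whose weights sum to 16. Borders are left 0.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = acc >> shift if shift else acc
    return out

GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]       # weights sum to 16
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # signed, no shift
```

In hardware, the nine products per output pixel are computed concurrently rather than in a loop; the Python loop only shows the arithmetic.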

  13. Optically triggered infrared photodetector.

    PubMed

    Ramiro, Íñigo; Martí, Antonio; Antolín, Elisa; López, Esther; Datas, Alejandro; Luque, Antonio; Ripalda, José M; González, Yolanda

    2015-01-14

    We demonstrate a new class of semiconductor device: the optically triggered infrared photodetector (OTIP). This photodetector is based on a new physical principle that allows the detection of infrared light to be switched ON and OFF by means of an external light. Our experimental device, fabricated using InAs/AlGaAs quantum-dot technology, demonstrates normal incidence infrared detection in the 2-6 μm range. The detection is optically triggered by a 590 nm light-emitting diode. Furthermore, the detection gain is achieved in our device without an increase of the noise level. The novel characteristics of OTIPs open up new possibilities for third generation infrared imaging systems ( Rogalski, A.; Antoszewski, J.; Faraone, L. J. Appl. Phys. 2009, 105 (9), 091101). PMID:25490236

  14. L0 Trigger for the EMCal Detector of the ALICE Experiment

    SciTech Connect

    Kral, Jiri; Awes, Terry C; Muller, Hans; Rak, Jan; Schambach, Joachim

    2012-01-01

    The ALICE experiment at the CERN Large Hadron Collider (LHC) accelerator was designed to study ultra-relativistic heavy-ion collisions. The ALICE Electromagnetic Calorimeter (EMCal) was built to provide measurement of photons, electrons, and jets, and trigger selection of hard-QCD events containing them. The EMCal single-shower L0 trigger, which triggers on large energy deposit within a 4 x 4 tower sliding window, became operational in 2010. The implementation of the real-time FPGA based algorithm optimized to provide a fast L0 decision is presented.
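The core of the single-shower L0 algorithm described above is a 4 x 4 sliding-window sum over the tower energy grid. A minimal behavioral sketch (grid size and threshold are illustrative, not EMCal geometry or firmware):

```python
def l0_fired(towers, threshold):
    # Slide a 4x4 window over the tower energy grid and fire the
    # trigger when any window's energy sum exceeds the threshold.
    h, w = len(towers), len(towers[0])
    for y in range(h - 3):
        for x in range(w - 3):
            window_sum = sum(towers[y + j][x + i]
                             for j in range(4) for i in range(4))
            if window_sum > threshold:
                return True
    return False
```

The FPGA version evaluates all window positions in parallel each clock cycle, which is what makes a fast L0 decision possible; the sequential loop above only shows the selection logic.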

  15. Triggering for charm, beauty, and truth

    SciTech Connect

    Appel, J.A.

    1982-02-01

As the search for more and more rare processes accelerates, the need for more and more effective event triggers also accelerates. In the earliest experiments, a simple coincidence often sufficed not only as the event trigger, but as the complete record of an event of interest. In today's experiments, not only has the fast trigger become more sophisticated, but one or more additional levels of trigger processing precede writing event data to magnetic tape for later analysis. Future search experiments will certainly require expanding the number of trigger levels needed to filter those rare events of particular interest.

  16. Boosted object hardware trigger development and testing for the Phase I upgrade of the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Stark, Giordon; Atlas Collaboration

    2015-04-01

The Global Feature Extraction (gFEX) module is a Level 1 jet trigger system planned for installation in ATLAS during the Phase 1 upgrade in 2018. The gFEX selects large-radius jets for capturing Lorentz-boosted objects by means of wide-area jet algorithms refined by subjet information. The architecture of the gFEX permits event-by-event local pile-up suppression for these jets using the same subtraction techniques developed for offline analyses. The gFEX architecture is also suitable for other global event algorithms such as missing transverse energy (MET), centrality for heavy ion collisions, and ``jets without jets.'' The gFEX will use 4 processor FPGAs to perform calculations on the incoming data and a hybrid APU-FPGA for slow control of the module. The gFEX is unique in both design and implementation; it substantially enhances the selectivity of the L1 trigger and increases sensitivity to key physics channels.

  17. A novel calorimeter trigger concept: The jet trigger of the H1 experiment at HERA

    NASA Astrophysics Data System (ADS)

    Olivier, Bob; Dubak-Behrendt, Ana; Kiesling, Christian; Reisert, Burkard; Aktas, Adil; Antunovic, Biljana; Bracinik, Juraj; Braquet, Charles; Brettel, Horst; Dulny, Barbara; Fent, Jürgen; Fras, Markus; Fröchtenicht, Walter; Haberer, Werner; Hoffmann, Dirk; Modjesch, Miriam; Placakyte, Ringaile; Schörner-Sadenius, Thomas; Wassatsch, Andreas; Zimmermann, Jens

    2011-06-01

    We report on a novel trigger for the liquid argon calorimeter which was installed in the H1 Experiment at HERA. This trigger, called the "Jet Trigger", was running at level 1 and implemented a real-time cluster algorithm. Within only 800 ns, the Jet Trigger algorithm found local energy maxima in the calorimeter, summed their immediate neighbors, sorted the resulting jets by energy, and applied topological conditions for the final level 1 trigger decision. The Jet Trigger was in operation from the year 2006 until the end of the HERA running in the summer of 2007. With the Jet Trigger it was possible to substantially reduce the thresholds for triggering on electrons and jets, giving access to a largely extended phase space for physical observables which could not have been reached in H1 before. The concepts of the Jet Trigger may be an interesting upgrade option for the LHC experiments.
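The cluster algorithm described above has a simple behavioral structure: find local energy maxima, add the immediate neighbors to each, and sort the resulting jets by energy. A minimal sketch of that structure (a software model, not the H1 firmware):

```python
def jet_candidates(grid):
    # Find cells that are strict local maxima over their 8 neighbours,
    # sum each maximum with its neighbours, and return the candidates
    # sorted by summed energy, highest first.
    h, w = len(grid), len(grid[0])
    jets = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [grid[y + j][x + i]
                     for j in (-1, 0, 1) for i in (-1, 0, 1)
                     if (i, j) != (0, 0)]
            if grid[y][x] > max(neigh):
                jets.append((grid[y][x] + sum(neigh), (y, x)))
    return sorted(jets, reverse=True)
```

The 800 ns latency quoted above comes from doing these steps in parallel pipelined hardware; the loops here only express the logic.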

  18. FPGA-based signal processing for the LHCb silicon strip detectors

    NASA Astrophysics Data System (ADS)

    Haefeli, G.; Bay, A.; Gong, A.

    2006-12-01

We have developed an electronic board (TELL1) to interface the DAQ system of the LHCb experiment at CERN. Two hundred and eighty-nine TELL1 boards are needed to read out the different subdetectors, including the silicon VErtex LOcator (VELO) (172 k strips), the Trigger Tracker (TT) (147 k strips) and the Inner Tracker (129 k strips). Each board can handle either 64 analog or 24 digital optical links. The TELL1 mother board provides common mode correction, zero suppression, data formatting, and a large network interface buffer. To satisfy the different requirements we have adopted a flexible FPGA design and made use of mezzanine cards. Mezzanines are used for data input from digital optical and analog copper links as well as for the Gigabit Ethernet interface to DAQ.

  19. The trigger system for the external target experiment in the HIRFL cooling storage ring

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhao, Lei; Liu, Jin-Xin; Lu, Yi-Ming; Liu, Shu-Bin; An, Qi

    2016-08-01

    A trigger system was designed for the external target experiment in the Cooling Storage Ring (CSR) of the Heavy Ion Research Facility in Lanzhou (HIRFL). Considering that different detectors are scattered over a large area, the trigger system is designed based on a master-slave structure and fiber-based serial data transmission technique. The trigger logic is organized in hierarchies, and flexible reconfiguration of the trigger function is achieved based on command register access or overall field-programmable gate array (FPGA) logic on-line reconfiguration controlled by remote computers. We also conducted tests to confirm the function of the trigger electronics, and the results indicate that this trigger system works well. Supported by the National Natural Science Foundation of China (11079003), the Knowledge Innovation Program of the Chinese Academy of Sciences (KJCX2-YW-N27), and the CAS Center for Excellence in Particle Physics (CCEPP).

  20. Fast semivariogram computation using FPGA architectures

    NASA Astrophysics Data System (ADS)

    Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang

    2015-02-01

The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates, measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating in the low range of power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units, giving high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. An anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments
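The definition of γ(h) above can be written directly. A minimal one-dimensional sketch (a reference model of the formula, not the FPGA architecture):

```python
def semivariance(pixels, h):
    # gamma(h): half the mean squared difference over all pixel pairs
    # separated by lag h, here along a 1-D row of pixel values.
    pairs = [(pixels[i] - pixels[i + h]) ** 2
             for i in range(len(pixels) - h)]
    return sum(pairs) / (2 * len(pairs))

# The semivariogram is this value plotted for a range of lags h.
gamma = [semivariance([0, 1, 2, 3, 4, 5], h) for h in (1, 2, 3)]
```

The O(n²) cost comes from enumerating all pixel pairs per lag; the FPGA design gets its speedup by evaluating many pairs concurrently.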

  1. FPGA-Based, Self-Checking, Fault-Tolerant Computers

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2004-01-01

A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors.
It would contain two CPUs executing

  2. SEU mitigation strategies for SRAM-based FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Pei; Zhang, Jian

    2011-08-01

The type of Field Programmable Gate Array (FPGA) technology and device family used in a design is a key factor for system reliability. Though antifuse-based FPGAs are widely used in aerospace because of their high reliability, current antifuse-based FPGA devices are expensive and leave no room for mistakes or changes, since they are not reprogrammable. Substitutes for antifuse-based FPGAs are needed in aerospace design; they should be both reprogrammable and highly resistant to Single Event Upset effects (SEUs). SRAM-based FPGAs are widely used in complex embedded digital systems in both industrial and commercial applications. They are reprogrammable and high in density because of their smaller SRAM cells and logic structures. However, SRAM-based FPGAs are especially sensitive to cosmic radiation because the configuration information is stored in SRAM memory. The ideal FPGA for aerospace use would be a high-density SRAM-based device that is also insensitive to cosmic-radiation-induced SEUs. Therefore, in order to enable the use of SRAM-based FPGAs in safety-critical applications, new techniques and strategies are essential to mitigate SEU errors in such devices. To improve the reliability of SRAM-based FPGAs, which are very sensitive to SEU errors, techniques such as reconfiguration and Triple Module Redundancy (TMR) are widely used in aerospace electronic systems to mitigate SEU and Single Event Functional Interrupt (SEFI) errors. Compared to reconfiguration and triplication, scrubbing and partial reconfiguration use fewer, or even no, internal resources of the FPGA. Moreover, the detection and repair process can detect and correct SEU errors in the configuration memories of the FPGA without affecting or interrupting the proper operation of the system, whereas full reconfiguration would terminate the operation of the FPGA.
This paper presents a payload system realized on Xilinx Virtex-4 FPGA which mitigates SEU effects in the

  3. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rijmen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and to be useful for a wide range of applications with varying throughput, area, power dissipation, and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications, including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space, and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research has been done on low-power or energy-efficient FPGA-based AES; in fact, it is rare for estimates of power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA-based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA-based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG), which is used in the proposed energy-efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA-based AES implementation that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher

  4. Rapid Onboard Data Product Generation with Multicore Processors and FPGA

    NASA Astrophysics Data System (ADS)

    Mandl, D.; Sohlberg, R. A.; Cappelaere, P. G.; Frye, S. W.; Ly, V.; Handy, M.; Ambrosia, V. G.; Sullivan, D. V.; Bland, G.; Pastor, E.; Crago, S.; Flatley, C.; Shah, N.; Bronston, J.; Creech, T.

    2012-12-01

    The Intelligent Payload Module (IPM) is an experimental testbed built around multicore processors and a Field Programmable Gate Array (FPGA). The effort is funded by the NASA Earth Science Technology Office under an Advanced Information Systems Technology (AIST) 2011 research grant to investigate the use of high-performance onboard processing to create an onboard pipeline that can rapidly process a subset of onboard imaging-spectrometer data through (1) radiance-to-reflectance conversion, (2) atmospheric correction, (3) geolocation and co-registration, and (4) level 2 data product generation. The requirements are driven by the mission concept for the HyspIRI NASA Decadal mission, although other NASA Decadal missions could use the same concept. The system is being set up to reuse the ground and flight software already used by other satellites at NASA/GSFC. Furthermore, a Web Coverage Processing Service (WCPS) is installed as part of the flight software, enabling a user on the ground to specify the algorithm to run onboard against the data in real time. Benchmark demonstrations are being run throughout the three-year effort on platforms including a helicopter and several airplanes, with a range of instruments, to demonstrate configurations compatible with the HyspIRI mission and other similar missions. This presentation lays out the demonstrations conducted to date, benchmark performance metrics, and future demonstration efforts and objectives.
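    The first pipeline stage above, radiance-to-reflectance conversion, is commonly computed with the standard top-of-atmosphere formula rho = pi * L * d^2 / (ESUN * cos(theta_s)). The sketch below uses illustrative constants, not HyspIRI calibration values:

```python
import math

def radiance_to_reflectance(radiance, esun, sun_elev_deg, earth_sun_dist_au=1.0):
    """Top-of-atmosphere reflectance from at-sensor spectral radiance.

    radiance: at-sensor radiance (W/m^2/sr/um); esun: band solar irradiance
    (W/m^2/um); sun_elev_deg: solar elevation angle in degrees.
    """
    solar_zenith = math.radians(90.0 - sun_elev_deg)
    return (math.pi * radiance * earth_sun_dist_au ** 2) / (esun * math.cos(solar_zenith))

# Hypothetical band: L = 80, ESUN = 1536, sun 45 degrees above the horizon.
rho = radiance_to_reflectance(radiance=80.0, esun=1536.0, sun_elev_deg=45.0)
```

    Because the conversion is a per-pixel multiply, it maps naturally onto the FPGA portion of the IPM pipeline.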

  5. REALIZATION OF A CUSTOM DESIGNED FPGA BASED EMBEDDED CONTROLLER.

    SciTech Connect

    SEVERINO,F.; HARVEY, M.; HAYES, T.; HOFF, L.; ODDO, P.; SMITH, K.S.

    2007-10-15

    As part of the Low Level RF (LLRF) upgrade project at Brookhaven National Laboratory's Collider-Accelerator Department (BNL C-AD), we have recently developed and tested a prototype high-performance embedded controller. The controller is a custom-designed PMC module employing a Xilinx V4FX60 FPGA with an embedded PowerPC405 processor and a wide variety of on-board peripherals (DDR2 SDRAM, FLASH, Ethernet, PCI, multi-gigabit serial transceivers, etc.). It can run either an embedded version of Linux or VxWorks, the standard operating system for RHIC front-end computers (FECs). We have successfully demonstrated the controller operating as a standard RHIC FEC and tested all on-board peripherals. We now have the ability to develop complex, custom digital controllers within the framework of the standard RHIC control system infrastructure. This paper describes various aspects of the development effort, including the basic hardware, functional capabilities, the development environment, kernel and system integration, and plans for further development.

  6. SRAM Based Re-programmable FPGA for Space Applications

    NASA Technical Reports Server (NTRS)

    Wang, J. J.; Sun, J. S.; Cronquist, B. E.; McCollum, J. L.; Speers, T. M.; Plants, W. C.; Katz, R. B.

    1999-01-01

    An SRAM (static random access memory)-based reprogrammable FPGA (field programmable gate array) is investigated for space applications. A new commercial prototype, the RS family, serves as the example for the investigation. The device is fabricated in a 0.25 µm CMOS technology. Its architecture is reviewed to provide a better understanding of the impact of single event upsets (SEUs) on the device during operation. The SEU response of each type of memory on the device is evaluated. Heavy-ion test data and SPICE simulations are used together to extract the threshold LET (linear energy transfer). Combined with the saturation cross-section estimated from the layout, a rate prediction is made for each memory type. SEU in the configuration SRAM is identified as the dominant failure mode and is discussed in detail. Single-event transient errors in combinational logic are also investigated and simulated with SPICE. SEU mitigation by hardening the memories and employing EDAC (error detection and correction) at the device level is presented. For the configuration SRAM (CSRAM) cell, the trade-off between resistor-decoupling and redundancy hardening techniques is investigated, with interesting results. Preliminary heavy-ion test data show no sign of SEL (single event latch-up). With regard to ionizing-radiation effects, the measured increase in static leakage current (static I(sub CC)) indicates a device tolerance of approximately 50 krad(Si).
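    Rate predictions of the kind described above typically start from a cross-section-versus-LET curve fitted through the measured threshold LET and saturation cross-section, often with a Weibull form. The parameter values below are invented for illustration, not the RS-family data:

```python
import math

def weibull_cross_section(let, let_threshold, sigma_sat, width, shape):
    """Per-bit SEU cross-section (cm^2/bit) at a given LET (MeV*cm^2/mg).

    Standard Weibull fit: zero below threshold, approaching sigma_sat at
    high LET. All parameters here are illustrative assumptions.
    """
    if let <= let_threshold:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-(((let - let_threshold) / width) ** shape)))

# Hypothetical cell: threshold LET 2, saturation cross-section 1e-8 cm^2/bit.
sigma = weibull_cross_section(let=20.0, let_threshold=2.0,
                              sigma_sat=1e-8, width=10.0, shape=1.5)
```

    Folding such a curve against an orbital LET spectrum then yields the per-bit upset rate for each memory type.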

  7. FPGA architecture for a videowall image processor

    NASA Astrophysics Data System (ADS)

    Skarabot, Alessandro; Ramponi, Giovanni; Buriola, Luigi

    2001-05-01

    This paper proposes an FPGA architecture for a videowall image processor. In a videowall, a set of high-resolution displays is arranged to present a single large image or multiple smaller images. An image processor is needed to perform the format conversion corresponding to the required output configuration and to enhance the image contrast. Input signals in either interlaced or progressive format must be handled. The proposed image processor is partitioned into two blocks: the first deinterlaces the YCbCr input video signal, converts the progressive YCbCr to the RGB data format, and performs optional contrast enhancement; the second performs the format conversion on the RGB data. Motion-adaptive vertico-temporal deinterlacing is used for the luminance signal Y; the color-difference signals Cb and Cr are instead processed by line-average deinterlacing. Image contrast enhancement is achieved via a modified Unsharp Masking technique and involves only the luminance Y. The format conversion algorithm is bilinear interpolation employing the Warped Distance approach, performed on the RGB data. Two subblocks are used in the system architecture, since the interpolation is performed column-wise and then row-wise.
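    As a baseline for the contrast-enhancement step, classical unsharp masking adds a scaled high-pass component of the luminance back to itself. This 1-D sketch shows the plain technique only; the paper's modified variant is not reproduced here:

```python
def unsharp_mask_1d(y, gain=0.5):
    """Classical unsharp masking on a 1-D luminance signal (0..255).

    Adds gain * (negated discrete Laplacian) to each interior sample,
    sharpening transitions; endpoints are passed through unchanged.
    """
    out = list(y)
    for i in range(1, len(y) - 1):
        highpass = 2 * y[i] - y[i - 1] - y[i + 1]   # negated discrete Laplacian
        out[i] = min(255, max(0, round(y[i] + gain * highpass)))
    return out

# A step edge gets over/undershoot, which the eye perceives as sharper:
enhanced = unsharp_mask_1d([100, 100, 100, 180, 180, 180])
```

    On FPGA hardware this maps to a short line buffer and a few adders per pixel, which is why it suits the luminance path of the processor.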

  8. Stego on FPGA: An IWT Approach

    PubMed Central

    Ramalingam, Balakrishnan

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar IWT was used to separate 8 × 8 pixel blocks into the LL, LH, HL, and HH subbands, and the encrypted secret data are hidden in the LH, HL, and HH blocks using Moore and Hilbert space-filling-curve (SFC) scan patterns. For each block, either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lower mean square error (MSE) and the higher peak signal-to-noise ratio (PSNR). The scan pattern chosen for each block is recorded, and this record serves as the secret key. Our system took 1.6 µs to embed the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794
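    The subband split above can be sketched with the standard one-level integer Haar lifting transform (integer-to-integer, hence exactly invertible); this is the generic Haar IWT, not the authors' full embedding pipeline:

```python
def haar_iwt_1d(row):
    """One Haar lifting step: differences (detail) and integer averages (approx)."""
    detail = [row[2 * i + 1] - row[2 * i] for i in range(len(row) // 2)]
    approx = [row[2 * i] + (detail[i] >> 1) for i in range(len(row) // 2)]
    return approx + detail

def haar_iwt_2d(block):
    """One-level 2-D Haar IWT: transform every row, then every column.

    For an even-sized square block the result splits into four quadrants;
    the top-left quadrant is LL and the others hold the detail subbands
    (the blocks used for embedding).
    """
    rows = [haar_iwt_1d(r) for r in block]
    cols = [haar_iwt_1d([rows[r][c] for r in range(len(rows))])
            for c in range(len(rows[0]))]
    # Transpose back to row-major order.
    return [[cols[c][r] for c in range(len(cols))] for r in range(len(rows))]

out = haar_iwt_2d([[10, 12], [14, 18]])  # tiny 2x2 demo; 8x8 works the same way
```

    Because every step is integer arithmetic (adds, subtracts, shifts), the transform is cheap in FPGA logic and loses no information on the round trip.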

  9. Radiation tests of CMS RPC muon trigger electronic components

    NASA Astrophysics Data System (ADS)

    Buńkowski, Karol; Kassamakov, Ivan; Królikowski, Jan; Kierzkowski, Krzysztof; Kudła, Maciej; Maenpaa, Teppo; Poźniak, Krzysztof; Rybka, Dominik; Tuominen, Eija; Ungaro, Donatella; Wrochna, Grzegorz; Zabołotny, Wojciech

    2005-02-01

    The results of proton irradiation tests of electronic devices selected for the RPC trigger electronics of the CMS detector are presented. For the Xilinx Spartan-IIE FPGA, the cross-section for Single Event Upsets (SEUs) in configuration bits was measured. Dynamic SEUs in flip-flops were also investigated but not observed. For the FLASH memories no single upsets were detected; permanent device damage appeared only after a very large dose. For Synchronous Dynamic Random Access Memory (SDRAM), the SEU cross-section was measured.
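    A measured SEU cross-section of the kind reported above is, at its simplest, the upset count divided by the beam fluence and the number of exposed bits. The numbers below are illustrative, not the CMS test results:

```python
def seu_cross_section_per_bit(upsets, fluence_p_cm2, n_bits):
    """Per-bit SEU cross-section (cm^2/bit) from a beam test.

    upsets: observed configuration-bit flips; fluence_p_cm2: integrated
    proton fluence (protons/cm^2); n_bits: bits exposed to the beam.
    """
    return upsets / (fluence_p_cm2 * n_bits)

# Hypothetical run: 250 upsets at 1e11 p/cm^2 over one million bits.
sigma_bit = seu_cross_section_per_bit(upsets=250, fluence_p_cm2=1e11,
                                      n_bits=1_000_000)  # 2.5e-15 cm^2/bit
```

    Multiplying such a per-bit cross-section by the expected particle flux at the detector and the device's bit count gives the upset rate the trigger system must tolerate or scrub.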

  10. Performance Evaluation of FPGA-Based Biological Applications

    SciTech Connect

    Storaasli, Olaf O; Yu, Weikuan; Strenski, Dave; Maltby, Jim

    2007-01-01

    On the forefront of recent HPC innovations are Field Programmable Gate Arrays (FPGAs), which promise to accelerate calculations by one or more orders of magnitude. The performance of two Cray XD1 systems with Virtex-II Pro 50 and Virtex-4 LX160 FPGAs was evaluated using a computational-biology human-genome comparison program. This paper describes scalable, parallel, FPGA-accelerated results for the FASTA application ssearch34, using the Smith-Waterman algorithm for DNA, RNA and protein sequencing, contained in the OpenFPGA benchmark suite. Results indicate typical Cray XD1 FPGA speedups of 50x (Virtex-II Pro 50) and 100x (Virtex-4 LX160) compared to a 2.2 GHz Opteron. Similar speedups are expected for the DRC RPU110-L200 modules (Virtex-4 LX200), which fit in an Opteron socket and were selected by Cray for its XT supercomputers. The FPGA programming challenges, human-genome benchmarking, and verification of results are discussed.
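    The dynamic-programming kernel that these FPGA designs accelerate is the textbook Smith-Waterman local-alignment recurrence, sketched here with a linear gap penalty and common default scores (not the scoring used by ssearch34):

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b.

    Classic Smith-Waterman recurrence: each cell takes the max of zero,
    the diagonal plus the substitution score, and the two gap moves.
    Only two rows are kept, so memory is O(len(b)).
    """
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            score = max(0,
                        prev[j - 1] + (match if ca == cb else mismatch),
                        prev[j] + gap,       # gap in b
                        curr[j - 1] + gap)   # gap in a
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

s = smith_waterman_score("ACACACTA", "AGCACACA")  # 12
```

    Each anti-diagonal of the score matrix can be computed in parallel, which is exactly the systolic-array structure FPGAs exploit to reach the reported speedups.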