Sample records for camac daq imprecise

  1. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasu, Y.; Fujii, H.; Nomachi, M.

    1994-12-31

    This paper describes the infrastructure the authors have constructed for a data acquisition system (DAQ). It reports recent developments concerning the HP VME board computer with LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of DAQBENCH, a Benchmark Suite for Data Acquisition that measures not only the performance of VME/CAMAC access but also that of context switching, inter-process communication and other system services, for various computers including workstation-based systems and VME board computers.
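
    The kinds of measurement DAQBENCH performs can be illustrated with a minimal, self-contained sketch (not code from the paper): timing the round-trip of a one-byte token between two processes over POSIX pipes, which bundles context-switch and IPC cost together exactly as such benchmarks do.

    ```cpp
    // Minimal DAQBENCH-style IPC microbenchmark (illustrative only):
    // times round-trips of a 1-byte token between parent and child over
    // two POSIX pipes, i.e. two context switches plus two pipe operations
    // per iteration.
    #include <cstdio>
    #include <time.h>
    #include <unistd.h>

    int main() {
        int up[2], down[2];
        if (pipe(up) != 0 || pipe(down) != 0) return 1;
        const int iters = 100000;
        if (fork() == 0) {                 // child: echo the token back
            char c;
            for (int i = 0; i < iters; ++i) {
                read(down[0], &c, 1);
                write(up[1], &c, 1);
            }
            _exit(0);
        }
        char c = 'x';
        timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; ++i) {  // parent: send, wait for echo
            write(down[1], &c, 1);
            read(up[0], &c, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("IPC round-trip: %.0f ns\n", ns / iters);
        return 0;
    }
    ```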

  2. Data Acquisition Software for Experiments at the MAMI-C Tagged Photon Facility

    NASA Astrophysics Data System (ADS)

    Oussena, Baya; Annand, John

    2013-10-01

    Tagged-photon experiments at Mainz use the electron beam of the MAMI (Mainzer MIcrotron) accelerator, in combination with the Glasgow Tagged Photon Spectrometer. The AcquDAQ DAQ system is implemented in the C++ language and makes use of CERN ROOT software libraries and tools. Electronic hardware is characterized in C++ classes, based on a general-purpose class TDAQmodule, and implementation in an object-oriented framework makes the system very flexible. The DAQ system provides slow control and event-by-event readout of the Photon Tagger, the Crystal Ball 4π electromagnetic calorimeter, the central MWPC tracker and plastic-scintillator particle-ID systems, and the TAPS forward-angle calorimeter. A variety of front-end controllers running Linux are supported, reading data from VMEbus, FASTBUS and CAMAC systems. More specialist hardware, based on optical communication systems and developed for the COMPASS experiment at CERN, is also supported. AcquDAQ also provides an interface to configure and control the Mainz programmable trigger system, which uses FPGA-based hardware developed at GSI. Currently the DAQ system runs at data rates of up to 3 MB/s and, with upgrades to both hardware and software later this year, we anticipate a doubling of that rate. This work was supported in part by the U.S. DOE Grant No. DE-FG02-99ER41110.
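
    The abstract names the general-purpose base class TDAQmodule; the interface below is a hedged guess at what such a hardware class hierarchy looks like, not the actual AcquDAQ API.

    ```cpp
    // Sketch of the object-oriented hardware abstraction described above.
    // TDAQmodule is named in the abstract, but this interface is an
    // assumption for illustration, not AcquDAQ's real class definition.
    #include <cstdint>
    #include <string>
    #include <vector>

    class TDAQmodule {                       // general-purpose base class
    public:
        explicit TDAQmodule(std::string name) : fName(std::move(name)) {}
        virtual ~TDAQmodule() = default;
        virtual void PostInit() = 0;         // program the hardware
        virtual void ReadIRQ(std::vector<uint32_t>& out) = 0;  // readout
        const std::string& Name() const { return fName; }
    private:
        std::string fName;
    };

    // A bus-specific module only overrides the virtuals, so VMEbus,
    // FASTBUS and CAMAC hardware can share one generic readout loop.
    class CamacADC : public TDAQmodule {
    public:
        CamacADC() : TDAQmodule("CamacADC") {}
        void PostInit() override { /* e.g. clear module, enable LAM */ }
        void ReadIRQ(std::vector<uint32_t>& out) override {
            out.push_back(0);                // placeholder for a CAMAC read
        }
    };
    ```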

  3. PCDAQ, A Windows Based DAQ System

    NASA Astrophysics Data System (ADS)

    Hogan, Gary

    1998-10-01

    PCDAQ is a Windows NT-based general DAQ/Analysis/Monte Carlo shell developed as part of the Proton Radiography project at LANL (Los Alamos National Laboratory). It has been adopted by experiments outside of the Proton Radiography project at Brookhaven National Laboratory (BNL) and at LANL. The program provides DAQ, Monte Carlo, and replay (disk file input) modes. Data can be read from hardware (CAMAC) or from other programs (ActiveX servers). Future versions will read VME. User-supplied data analysis routines can be written in Fortran, C++, or Visual Basic. Histogramming, testing, and plotting packages are provided. Histogram data can be exported to spreadsheets or analyzed in user-supplied programs. Plots can be copied and pasted as bitmap objects into other Windows programs, or printed. A text database keyed by the run number is provided. Extensive software control flags are provided so that the user can control the flow of data through the program. Control flags can be set either in script command files or interactively. The program can be remotely controlled, and its data accessed over the Internet, through its ActiveX DCOM interface.

  4. A Compton suppressed detector multiplicity trigger based digital DAQ for gamma-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Das, S.; Samanta, S.; Banik, R.; Bhattacharjee, R.; Basu, K.; Raut, R.; Ghugre, S. S.; Sinha, A. K.; Bhattacharya, S.; Imran, S.; Mukherjee, G.; Bhattacharyya, S.; Goswami, A.; Palit, R.; Tan, H.

    2018-06-01

    The development of a digitizer-based pulse processing and data acquisition system for γ-ray spectroscopy with large detector arrays is presented. The system is based on 250 MHz 12-bit digitizers, and is triggered by a user-chosen multiplicity of Compton-suppressed detectors. The logic for trigger generation is similar to that practised in analog (NIM/CAMAC) pulse-processing electronics, while retaining the fast processing merits of the digitizer system. Codes for reduction of the data acquired from the system have also been developed. The system has been tested in offline studies using radioactive sources as well as in in-beam experiments with an array of Compton-suppressed Clover detectors. The results obtained therefrom validate its use in spectroscopic efforts for nuclear structure investigations.
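
    A minimal sketch of the multiplicity-trigger logic described above (the detector fields and threshold are illustrative assumptions, not the authors' implementation):

    ```cpp
    // Count how many Compton-suppressed detectors fired in a coincidence
    // window and assert a trigger when the count reaches a user-chosen
    // multiplicity, mirroring the analog NIM/CAMAC trigger logic.
    #include <cstdio>
    #include <vector>

    struct DetectorHit {
        bool cloverFired;   // Clover germanium signal above threshold
        bool shieldFired;   // BGO suppression-shield veto
    };

    bool multiplicityTrigger(const std::vector<DetectorHit>& hits,
                             int requiredMultiplicity) {
        int suppressedCount = 0;
        for (const auto& h : hits)
            if (h.cloverFired && !h.shieldFired)  // Compton-suppressed hit
                ++suppressedCount;
        return suppressedCount >= requiredMultiplicity;
    }

    int main() {
        std::vector<DetectorHit> event = {{true, false}, {true, true},
                                          {true, false}, {false, false}};
        // gamma-gamma coincidence: demand >= 2 clean (suppressed) hits
        printf("trigger: %s\n",
               multiplicityTrigger(event, 2) ? "yes" : "no");
        return 0;
    }
    ```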

  5. Access to CAMAC from VxWorks and UNIX in DART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streets, J.; Meadows, J.; Moore, C.

    1996-02-01

    All High Energy Physics experiments at Fermilab include CAMAC modules which need to be read out for each triggered event. There is also a need to access CAMAC modules for control and monitoring of the experiment. As part of the DART Project the authors have developed a package of software for CAMAC access from UNIX and VxWorks platforms, with support for several hardware interfaces. The authors report on developments for the CES CBD8210 VME to parallel CAMAC, the Hytec VSD2992 VME to serial CAMAC and Jorway 411S SCSI to parallel and serial CAMAC branch drivers, and give a summary of the timings obtained.

  6. Access to CAMAC from VxWorks and UNIX in DART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streets, J.; Meadows, J.; Moore, C.

    1995-05-01

    As part of the DART Project the authors have developed a package of software for CAMAC access from UNIX and VxWorks platforms, with support for several hardware interfaces. They report on developments for the CES CBD8210 VME to parallel CAMAC, the Hytec VSD2992 VME to serial CAMAC and Jorway 411S SCSI to parallel and serial CAMAC branch drivers, and give a summary of the timings obtained.

  7. Access to CAMAC from VxWorks and UNIX in DART

    NASA Astrophysics Data System (ADS)

    Streets, J.; Meadows, J.; Moore, C.; Pordes, R.; Slimmer, D.; Vittone, M.; Stern, E.

    1996-02-01

    As part of the DART Project [Data acquisition for the next Generation Fermilab Fixed Target Experiments] we have developed a package of software for CAMAC access from UNIX and VxWorks platforms, with support for several hardware interfaces. We report on developments for the CES CBD8210 VME to parallel CAMAC, the Hytec VSD2992 VME to serial CAMAC and Jorway 411S SCSI to parallel and serial CAMAC branch drivers, and give a summary of the timings obtained.
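
    Packages like this typically expose the ESONE-standard CAMAC routines cdreg/cfsa; the sketch below shows the call pattern with stub implementations standing in for a real branch driver, since exact signatures and the path encoding vary between implementations.

    ```cpp
    // ESONE-style CAMAC access sketch. cdreg() encodes a branch/crate/
    // station/subaddress path and cfsa() performs one 24-bit CAMAC cycle.
    // These stubs are placeholders: in a real package the routines are
    // supplied by the driver library (CBD8210, VSD2992, Jorway 411S...).
    #include <cstdio>

    static void cdreg(int* ext, int branch, int crate, int station, int sub) {
        *ext = (branch << 16) | (crate << 9) | (station << 4) | sub;
    }
    static void cfsa(int f, int ext, int* data, int* q) {
        (void)f; (void)ext;
        *data = 0;          // a real driver would run the CAMAC cycle here
        *q = 1;             // Q response from the module
    }

    int main() {
        int ext, data, q;
        cdreg(&ext, 0, 1, 5, 0);  // branch 0, crate 1, station 5, subaddr 0
        cfsa(0, ext, &data, &q);  // F(0): read group-1 register
        printf("data=%d Q=%d\n", data, q);
        return 0;
    }
    ```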

  8. SLAC-standard CAMAC branch terminator (Engineering Materials)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-04-04

    The drawings listed on the drawing list provide the data and specifications for constructing a Branch Terminator for the SLAC standard CAMAC units. This is a device for matching the cables and other branch lines in the system. This unit is designed for a certain group of SLAC CAMAC units which are referred to as SLAC-Standard CAMAC Units.

  9. TFTR CAMAC systems and components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauch, W.A.; Bergin, W.; Sichta, P.

    1987-08-01

    Princeton's Tokamak Fusion Test Reactor (TFTR) utilizes Computer Automated Measurement and Control (CAMAC) to provide instrumentation for real-time and quasi-real-time control, monitoring, and data acquisition systems. This paper describes and discusses the complement of CAMAC hardware systems and components that comprise the interface for tokamak control and measurement instrumentation, and for communication with the central instrumentation control and data acquisition (CICADA) system. It also discusses CAMAC reliability and calibration, the types of modules used, a summary of data acquisition and control points, and various diagnostic maintenance tools used to support and troubleshoot typical CAMAC systems on TFTR.

  10. CAMAC and NIM systems in the space program. [Computer-Aided Measurement And Control and Nuclear Instrumentation Modules]

    NASA Technical Reports Server (NTRS)

    Trainor, J. H.; Ehrmann, C. H.; Kaminski, T. J.

    1975-01-01

    The CAMAC and NIM instrumentation systems were developed originally to serve the needs of nuclear research institutions in Europe and North America. CAMAC and NIM are currently considered in several studies at the systems level conducted by NASA and ESRO groups. NIM and CAMAC studies for applications related to the space shuttle are discussed along with the advantages provided by aspects of modularization and standardization, a use of NIM and CAMAC equipment in connection with a group of astrophysics experiments, and questions of cost effectiveness.

  11. Development of a Unix/VME data acquisition system

    NASA Astrophysics Data System (ADS)

    Miller, M. C.; Ahern, S.; Clark, S. M.

    1992-01-01

    The current status of a Unix-based VME data acquisition development project is described. It is planned to use existing Fortran data collection software to drive the existing CAMAC electronics via a VME CAMAC branch driver card and associated Daresbury Unix driving software. The first usable Unix driver has been written and produces single-action CAMAC cycles from test software. The data acquisition code has been implemented in test mode under Unix with few problems and effort is now being directed toward finalizing calls to the CAMAC-driving software and ultimate evaluation of the complete system.

  12. Controlling CAMAC instrumentation through the USB port

    NASA Astrophysics Data System (ADS)

    Ribas, R. V.

    2012-02-01

    This article describes a programmable device that interfaces CAMAC instrumentation to the USB port of a computer, without the need for heavy, noisy and expensive CAMAC crates. Up to four single-width modules can be used. All software necessary for a multi-parametric data acquisition system was also developed. A standard crate controller based on the same project is being designed.

  13. A CAMAC based real-time noise analysis system for nuclear reactors

    NASA Astrophysics Data System (ADS)

    Ciftcioglu, Özer

    1987-05-01

    A CAMAC based real-time noise analysis system was designed for the TRIGA MARK II nuclear reactor at the Institute for Nuclear Energy, Istanbul. The input analog signals obtained from the radiation detectors are introduced to the system through a CAMAC interface. The signals, converted into digital form, are processed by a PDP-11 computer. The fast data processing, based on auto/cross power spectral density computations, is carried out in real time by means of assembly-written FFT algorithms, and the spectra obtained are displayed on a CAMAC driven display system as an additional monitoring device. The system has the advantage of being software programmable and controlled by a CAMAC system, so that it is operated under program control for reactor surveillance, anomaly detection and diagnosis. The system can also be used for the identification of nonstationary operational characteristics of the reactor in the long term, by comparing the noise power spectra with corresponding reference noise patterns prepared in advance.
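
    At the core of such noise analysis is the auto-power spectral density estimate; the sketch below computes a plain-DFT periodogram to make the math concrete (the real system used hand-optimized assembly FFTs on a PDP-11).

    ```cpp
    // Auto-power spectral density as a simple periodogram |X(k)|^2 / N,
    // computed with a naive DFT for clarity rather than speed.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    std::vector<double> autoPowerSpectrum(const std::vector<double>& x) {
        const size_t N = x.size();
        std::vector<double> psd(N / 2);
        for (size_t k = 0; k < N / 2; ++k) {
            double re = 0.0, im = 0.0;
            for (size_t n = 0; n < N; ++n) {
                double w = 2.0 * kPi * k * n / N;
                re += x[n] * std::cos(w);
                im -= x[n] * std::sin(w);
            }
            psd[k] = (re * re + im * im) / N;  // power in frequency bin k
        }
        return psd;
    }

    int main() {
        std::vector<double> sig(256);
        for (size_t n = 0; n < sig.size(); ++n)   // 16 cycles per window
            sig[n] = std::sin(2.0 * kPi * 16.0 * n / sig.size());
        auto psd = autoPowerSpectrum(sig);
        printf("bin 16: %.1f, bin 20: %.3f\n", psd[16], psd[20]);
        return 0;
    }
    ```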

  14. Future of DAQ Frameworks and Approaches, and Their Evolution towards the Internet of Things

    NASA Astrophysics Data System (ADS)

    Neufeld, Niko

    2015-12-01

    Nowadays, a DAQ system is a complex network of processors, sensors and many other active devices. Historically, providing a framework for DAQ has been an important role of the host institutes of experiments. Reviewing the evolution of such DAQ frameworks is a very interesting subject for the conference. “Internet of Things” is a recent buzzword, but a DAQ framework could be a good example of IoT.

  15. A flexible CAMAC based data system for Space Shuttle scientific instruments

    NASA Technical Reports Server (NTRS)

    Ehrmann, C. H.; Baker, R. G.; Smith, R. L.; Kaminski, T. J.

    1979-01-01

    An effort has been made within NASA to produce a low-cost modular system for implementation of Shuttle payloads based on the CAMAC standards for packaging and data transfer. A key element of such a modular system is a means for controlling the data system, collecting and processing the data for transmission to the ground, and issuing commands to the instrument either from the ground or based on the data collected. A description is presented of such a means based on a network of digital processors and CAMAC crate controllers, which allows for the implementation of instruments ranging from those requiring only a single CAMAC crate of functional modules and no data processing to ones requiring multiple crates and multiple data processors.

  16. A cost and utility analysis of NIM/CAMAC standards and equipment for shuttle payload data acquisition and control systems. Volume 3: Tasks 3 and 4

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The modifications that would be required to permit the use of Nuclear Instrumentation Modular (NIM) and Computer Automated Measurement and Control (CAMAC) equipment, designed for ground-based laboratory use, in the Spacelab environment were determined. The cost of these modifications was estimated and the most cost-effective approach to implementing them was identified. A shared-equipment implementation, in which the various Spacelab users draw their required complement of standard NIM and CAMAC equipment for a given flight from a common equipment pool, was considered. The alternative approach studied was a dedicated-equipment implementation in which each user is responsible for procuring either their own NIM/CAMAC equipment or its custom-built equivalent.

  17. CAMAC: A Unique Application with a Pocket Terminal.

    DTIC Science & Technology

    1982-09-16

    Only fragments of this record survive OCR of the DTIC report form. Recoverable information: the author is A.D. Elmond; the report describes a hand-held terminal (HHTT) that plugs into the port of any CAMAC crate. In addition to being a maintenance device, the HHTT is a "smart" device that can control operations in a CAMAC crate. It communicates with the system's LSI 11/23 microprocessor through an Asynchronous Serial Port (ASP) interface module, which includes a crystal clock and a MIK-Bus interface.

  18. Using a graphical programming language to write CAMAC/GPIB instrument drivers

    NASA Technical Reports Server (NTRS)

    Zambrana, Horacio; Johanson, William

    1991-01-01

    To reduce the complexities of conventional programming, graphical software was used in the development of instrumentation drivers. The graphical software provides a standard set of tools (graphical subroutines) which are sufficient to program the most sophisticated CAMAC/GPIB drivers. Using these tools, instrumentation drivers were successfully developed for operating CAMAC/GPIB hardware from two different manufacturers: LeCroy and DSP. The use of these tools is presented for programming a LeCroy A/D Waveform Analyzer.

  19. Feasibility study of common electronic equipment for shuttle sortie experiment payloads

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A study was conducted to determine the feasibility of using standardized electronic equipment on the space shuttle vehicle in an effort to reduce costs. The standards for Nuclear Instrument Module (NIM) and CAMAC electronic equipment are presented and described. It was determined that the CAMAC electronic equipment was more suitable for use with the space shuttle systems. Specific applications of the CAMAC equipment are analyzed. Illustrations of the equipment and circuit diagrams of the subsystems are provided.

  20. A cost and utility analysis of NIM/CAMAC standards and equipment for shuttle payload data acquisition and control systems. Volume 2: Tasks 1 and 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A representative set of payloads for both science and applications disciplines was selected to ensure a realistic and statistically significant estimate of equipment utilization. The selected payloads were analyzed to determine the applicability of Nuclear Instrumentation Modular (NIM)/Computer Automated Measurement and Control (CAMAC) equipment in satisfying their data acquisition and control requirements. The analysis results were combined with comparable results from related studies to arrive at an overall assessment of the applicability and commonality of NIM/CAMAC equipment usage across the spectrum of payloads.

  1. A CAMAC display module for fast bit-mapped graphics

    NASA Astrophysics Data System (ADS)

    Abdel-Aal, R. E.

    1992-10-01

    In many data acquisition and analysis facilities for nuclear physics research, utilities for the display of two-dimensional (2D) images and spectra on graphics terminals suffer from low speed, poor resolution, and limited accuracy. Development of CAMAC bit-mapped graphics modules for this purpose has been discouraged in the past by the large device count needed and the long times required to load the image data from the host computer into the CAMAC hardware, particularly since many such facilities have been designed to support fast DMA block transfers only for data acquisition into the host. This paper describes the design and implementation of a prototype CAMAC graphics display module with a resolution of 256×256 pixels at eight colours, for which all components can be easily accommodated in a single-width package. A hardware technique is employed which reduces the number of programmed CAMAC data transfer operations needed for writing 2D images into the display memory by approximately an order of magnitude, with attendant improvements in display speed and CPU time consumption. Hardware and software details are given together with sample results. Information on the performance of the module in a typical VAX/MBD data acquisition environment is presented, including data on the mutual effects of simultaneous data acquisition traffic. Suggestions are made for further improvements in performance.

  2. DAQ application of PC oscilloscope for chaos fiber-optic fence system based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Lu, Manman; Fang, Nian; Wang, Lutang; Huang, Zhaoming; Sun, Xiaofei

    2011-12-01

    In order to obtain simultaneously a high sample rate and a large buffer in data acquisition (DAQ) for a chaos fiber-optic fence system, we developed a double-channel high-speed DAQ application for a PicoScope 5203 digital oscilloscope based on LabVIEW. We accomplished this by creating Call Library Function (CLF) nodes to call the DAQ functions in the two dynamic link libraries (DLLs), PS5000.dll and PS5000wrap.dll, provided by Pico Technology. The maximum real-time sample rate of the DAQ application can reach 1 GS/s. We can control the sample-time and amplitude resolutions of the application by changing their units in the block diagram, and also control the start and end times of the sampling operations. The experimental results show that the application has a sample rate high enough and a buffer large enough to meet the demanding DAQ requirements of the chaos fiber-optic fence system.

  3. QMODULE: CAMAC modules recognized by the QAL compiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kellogg, M.; Minor, M.M.; Shlaer, S.

    1977-10-01

    The compiler for the Q Analyzer Language, QAL, recognizes a certain set of CAMAC modules as having known characteristics. The conventions and procedures used to describe these modules are discussed as well as the tools available to the user for extending this set as required.

  4. The DISTO data acquisition system at SATURNE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balestra, F.; Bedfer, Y.; Bertini, R.

    1998-06-01

    The DISTO collaboration has built a large-acceptance magnetic spectrometer designed to provide broad kinematic coverage of multiparticle final states produced in pp scattering. The spectrometer has been installed in the polarized proton beam of the Saturne accelerator in Saclay to study polarization observables in the p↑p → pK⁺Y↑ (Y = Λ, Σ⁰ or Y*) reaction and vector meson production (φ, ω and ρ) in pp collisions. The data acquisition system is based on a VME 68030 CPU running the OS/9 operating system, housed in a single VME crate together with the CAMAC interface, the triple-port ECL memories, and four RISC R3000 CPUs. The digitization of signals from the detectors is performed by PCOS III and FERA front-end electronics. Data from several events belonging to a single Saturne extraction are stored in the VME triple-port ECL memories using a hardwired fast sequencer. The buffer, optionally filtered by the RISC R3000 CPUs, is recorded on a DLT cassette by the DAQ CPU using the on-board SCSI interface during the acceleration cycle. Two UNIX workstations are connected to the VME CPUs through a fast parallel bus and the Local Area Network. They analyze a subset of events for on-line monitoring. The data acquisition system is able to read and record 3,500 events/burst in the present configuration with a dead time of 15%.

  5. Object library for a new generation of experiment-controlling applications under the UNIX operating system.

    PubMed

    Gaponov, Y A; Ito, K; Amemiya, Y

    1998-05-01

    The Interface Object Library based on the Motif extension of the X Windows system and on the ESONE SVIC-VCC Library is presented. Some features of the applications for controlling a synchrotron radiation experiment are discussed. The Interface Object Library is written in the object-oriented C++ language. The library class-hierarchy structure is presented and discussed. Several interfaces were realized in the Interface Object Library: the Windows interface, the CAMAC interface and the interface for supporting the experiment. The behaviour of the objects describing the CAMAC crate and CAMAC block is discussed. The application of these protocols for controlling the fast one-coordinate position-sensitive X-ray detector OD3 is presented.

  6. DAQ for commissioning and calibration of a multichannel analyzer of scintillation counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tortorici, F.; Jones, M.; Bellini, V.

    We report the status of the Data Acquisition (DAQ) system for the Coordinate Detector (CDET) module of the Super Bigbite Spectrometer facility at Hall A of the Thomas Jefferson National Accelerator Facility. Presently, the DAQ is fully assembled and has been tested with one CDET module. The commissioning of the CDET module, which is the goal of the tests presented here, consists essentially of measuring the amplitude and time-over-threshold of signals from cosmic rays. Hardware checks and the development of DAQ control and off-line analysis software are ongoing; the module currently appears to work roughly according to expectations. Data presented in this note are still preliminary.

  7. Level 1 Daq System for Kloe

    NASA Astrophysics Data System (ADS)

    Aloisio, A.; Cavaliere, S.; Cevenini, F.; Della Volpe, D.; Merola, L.; Anastasio, A.; Fiore, D. J.

    KLOE is a general purpose detector optimized to observe CP violation in K0 decays. This detector will be installed at the DAΦNE Φ-factory, in Frascati (Italy) and it is expected to run at the end of 1997. The KLOE DAQ system can be divided mainly into the front-end fast readout section (the Level 1 DAQ), the FDDI Switch and the processor farm. The total bandwidth requirement is estimated to be of the order of 50 Mbyte/s. In this paper, we describe the Level 1 DAQ section, which is based on custom protocols and hardware controllers, developed to achieve high data transfer rates and event building capabilities without software overhead.

  8. High speed acquisition of multiparameter data using a Macintosh IIcx

    NASA Astrophysics Data System (ADS)

    Berno, Anthony; Vogel, John S.; Caffee, Marc

    1991-05-01

    Accelerator mass spectrometry systems based on >3 MV tandem accelerators often use multianode ionization detectors and/or time-of-flight detectors to identify individual isotopes through multiparameter analysis. A Macintosh IIcx has been programmed to collect AMS data from a CAMAC-implemented analyzer and to display the histogrammed individual parameters and a double-parameter array. The computer-CAMAC connection is through a NuBus to CAMAC dataway interface which allows direct addressing of all functions and registers in the crate. Asynchronous data from the rare isotope are sorted into a CAMAC memory module by a list sequence controller. Isotope switching is controlled by a one-cycle timing generator. A rate-dependent amount of time is used to transfer the data from the memory module at the end of each timing cycle; the present configuration uses 10-75 ms for rates of 500-10000 cps. Parameter analysis occurs during the rest of the 520 ms data collection cycle. Completed measurements of the isotope concentrations of each sample are written to files which are compatible with standard Macintosh databases or other processing programs. The system is inexpensive and operates at speeds comparable to those obtainable using larger computers.

  9. A data acquisition system for coincidence imaging using a conventional dual head gamma camera

    NASA Astrophysics Data System (ADS)

    Lewellen, T. K.; Miyaoka, R. S.; Jansen, F.; Kaplan, M. S.

    1997-06-01

    A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running Labview controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz.
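
    Assuming a simple non-paralyzable model, the quoted numbers imply an effective per-event processing time of about 25 μs, since the dead-time fraction is approximately rate × τ; a trivial check:

    ```cpp
    // Back-of-envelope check of the quoted dead time, assuming a simple
    // non-paralyzable model where fraction = rate * tau.
    #include <cstdio>

    int main() {
        double rate = 4000.0;        // coincidences per second (4.0 kHz)
        double deadFraction = 0.10;  // 10%, as stated above
        double tauSeconds = deadFraction / rate;
        printf("effective per-event dead time: %.1f us\n",
               tauSeconds * 1e6);    // prints 25.0 us
        return 0;
    }
    ```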

  10. The development and psychometric properties of a measure of clinicians' attitudes to depression: the revised Depression Attitude Questionnaire (R-DAQ).

    PubMed

    Haddad, Mark; Menchetti, Marco; McKeown, Eamonn; Tylee, André; Mann, Anthony

    2015-02-05

    Depression is a common mental disorder associated with substantial disability. It is inadequately recognised and managed, and clinicians' attitudes to this condition and its treatment may play a part in this. Most research in this area has used the Depression Attitude Questionnaire (DAQ), but analyses have shown this measure to exhibit problems in psychometric properties and suitability for the health professionals and settings where depression recognition may occur. We revised the DAQ using a pooled review of findings from studies using this measure, together with a Delphi study which sought the opinions of a panel of relevant experts based in the UK, USA, Australia, and European countries (n = 24) using 3 rounds of questioning to consider attitude dimensions, content, and item wording. After item generation, revision and consensus (agreement >70%) using the Delphi panel, the revised DAQ (R-DAQ) was tested with 1193 health care providers to determine its psychometric properties. Finally the test-retest reliability of the R-DAQ was examined with 38 participants. The 22-item R-DAQ scale showed good internal consistency: Cronbach's alpha coefficient was 0.84; and satisfactory test-retest reliability: intraclass correlation coefficient was 0.62 (95% C.I. 0.37 to 0.78). Exploratory factor analysis favoured a three-factor structure (professional confidence, therapeutic optimism/pessimism, and a generalist perspective), which accounted for 45.3% of the variance. The R-DAQ provides a revised tool for examining clinicians' views and understanding of depression. It addresses important weaknesses in the original measure whilst retaining items and dimensions that appeared valid. This revised scale is likely to be useful in examining attitudes across the health professional workforce and beyond the confines of the UK, and may be valuable for the purpose of evaluating training that aims to address clinicians' attitudes to depression. It incorporates key dimensions of attitudes with a modest number of items making it applicable to use in busy clinical settings.

  11. Performance Evaluation of a Prototype Underwater Short-Range Acoustic Telemetry Modem

    DTIC Science & Technology

    2010-09-01

    Only equipment-list fragments of this record survive OCR of the DTIC report. Recoverable information: the measurement setup included two B&K 8103 hydrophones, two Stanford Research Systems SR560 low-noise preamplifiers, an IOtech Personal DAQ/3000 Series data acquisition box (DAQ), a Hewlett Packard (HP) 3314A function generator, a Philips PM 3384 oscilloscope, and a Dell Latitude D620 notebook.

  12. Inexpensive DAQ based physics labs

    NASA Astrophysics Data System (ADS)

    Lewis, Benjamin; Clark, Shane

    2015-11-01

    Quality data acquisition (DAQ) based physics labs can be designed using microcontrollers and very low cost sensors, with minimal lab equipment. A prototype device with several sensors, together with documentation for a number of DAQ-based labs, is showcased. The device connects to a computer through Bluetooth and uses a simple interface to control the DAQ and display real-time graphs, storing the data in .txt and .xls formats. A full device, including a larger number of sensors combined with a software interface and detailed documentation, would provide a high-quality physics lab education at minimal cost, for instance in high schools lacking lab equipment or for students taking online classes. An entire semester's lab course could be conducted using a single device with a manufacturing cost of under $20.
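
    As a hedged illustration of such a device's firmware (generic Arduino-style C++, not the authors' code; the pin and sample rate are assumptions), sampling one sensor and streaming tab-separated values over a Bluetooth serial link:

    ```cpp
    // Illustrative Arduino-style microcontroller DAQ firmware: sample an
    // analog sensor at a fixed rate and stream timestamped values over a
    // serial link (HC-05-style Bluetooth modules appear as a serial port).
    const int kSensorPin = A0;           // e.g. photogate or force sensor
    const unsigned long kPeriodMs = 10;  // 100 samples per second
    unsigned long nextSample = 0;

    void setup() {
        Serial.begin(9600);              // Bluetooth-serial link to host
    }

    void loop() {
        unsigned long now = millis();
        if (now >= nextSample) {
            nextSample = now + kPeriodMs;
            int raw = analogRead(kSensorPin);  // 10-bit ADC reading
            Serial.print(now);                 // timestamp in ms
            Serial.print('\t');
            Serial.println(raw);               // tab-separated -> .txt
        }
    }
    ```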

  13. Development of the New DAQ System for the SD Array of TA×4 and TALE

    NASA Astrophysics Data System (ADS)

    Takahashi, Yuichi; Sahara, Ryosuke; Konishi, Shogo; Goto, Takashi; Ogio, Shoichi

    The data acquisition (DAQ) system for the surface detector (SD) arrays of TA×4 and TALE is presented. Each SD records digital signals with 50 MHz FADCs and sends the data to a central communication center (the "communication tower") via a wireless network. The technique employed here is based on the currently running DAQ system of the Telescope Array, with some improvements, including i) replacement of a wireless LAN module using a custom protocol with one using TCP/IP, and ii) replacement of the "tower electronics" with a generic Linux single-board computer (a Raspberry Pi 2 Model B). The details and performance of the new DAQ system are described.

  14. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ Software handles Camera data acquisition, storage and display during Camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype, which will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software adopting an iterative and incremental model in order to maximize software reuse and to implement a system which is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.

  15. H4DAQ: a modern and versatile data-acquisition package for calorimeter prototypes test-beams

    NASA Astrophysics Data System (ADS)

    Marini, A. C.

    2018-02-01

    The upgrade of the particle detectors for the HL-LHC or for future colliders requires an extensive program of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system, H4DAQ, was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and it has since been adopted in various applications for the CMS experiment and AIDA project. Several calorimeter prototypes and precision timing detectors have used our system from 2014 to 2017. H4DAQ has proven to be a versatile application and has been ported to many other beam test environments. H4DAQ is fast, simple, modular and can be configured to support various kinds of setup. The functionalities of the DAQ core software are split into three configurable finite state machines: data readout, run control, and event builder. The distribution of information and data between the various computers is performed using ZEROMQ (0MQ) sockets. Plugins are available to read different types of hardware, including VME crates with many types of boards, PADE boards, custom front-end boards and beam instrumentation devices. The raw data are saved as ROOT files, using the CERN C++ ROOT libraries. A Graphical User Interface, based on the python gtk libraries, is used to operate the H4DAQ and an integrated data quality monitoring (DQM), written in C++, allows for fast processing of the events for quick feedback to the user. As the 0MQ libraries are also available for the National Instruments LabVIEW program, this environment can easily be integrated within H4DAQ applications.
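
    The 0MQ transport H4DAQ relies on can be illustrated with a minimal PUSH/PULL pipeline in C++ using the plain zmq C API; the socket roles and endpoint here are assumptions for illustration, not H4DAQ's actual configuration:

    ```cpp
    // Minimal 0MQ pipeline sketch: a readout process PUSHes an event
    // fragment and an event-builder process PULLs it. Both ends live in
    // one process here only to keep the example self-contained.
    #include <cstdio>
    #include <zmq.h>

    int main() {
        void* ctx = zmq_ctx_new();
        void* out = zmq_socket(ctx, ZMQ_PUSH);        // readout side
        zmq_bind(out, "tcp://*:5555");

        void* in = zmq_socket(ctx, ZMQ_PULL);         // event-builder side
        zmq_connect(in, "tcp://localhost:5555");

        const char fragment[] = "spill 42, board 3, 1024 samples";
        zmq_send(out, fragment, sizeof(fragment), 0); // ship one fragment

        char buf[64];
        int n = zmq_recv(in, buf, sizeof(buf), 0);    // builder receives
        if (n > 0) printf("received %d bytes: %s\n", n, buf);

        zmq_close(in);
        zmq_close(out);
        zmq_ctx_destroy(ctx);
        return 0;
    }
    ```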

  16. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Technology developed during a joint research program with Langley and Kinetic Systems Corporation led to Kinetic Systems' production of a high speed Computer Automated Measurement and Control (CAMAC) data acquisition system. The study, which involved the use of CAMAC equipment applied to flight simulation, significantly improved the company's technical capability and produced new applications. With Digital Equipment Corporation, Kinetic Systems is marketing the system to government and private companies for flight simulation, fusion research, turbine testing, steelmaking, etc.

  17. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging in SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. In agreement with results reported previously, we observe that in our system the real-world 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit image data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range, which can manifest itself as an increase in image artifacts due to strong Fresnel reflection.
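
    The bit-depth gap has a simple back-of-envelope form: an ideal N-bit ADC has a quantization-limited dynamic range of roughly 6.02N + 1.76 dB, so 8 and 14 bits differ by about 36 dB before real noise floors are considered:

    ```cpp
    // Ideal quantization-limited dynamic range of an N-bit ADC,
    // from the standard SNR formula 6.02*N + 1.76 dB.
    #include <cstdio>

    int main() {
        int depths[] = {8, 14};
        for (int bits : depths)
            printf("%2d-bit ADC: ~%.1f dB ideal dynamic range\n",
                   bits, 6.02 * bits + 1.76);
        return 0;  // ~49.9 dB vs ~86.0 dB
    }
    ```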

  18. DEAP-3600 Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Lindner, Thomas

    2015-12-01

    DEAP-3600 is a dark matter experiment using liquid argon to detect Weakly Interacting Massive Particles (WIMPs). The DEAP-3600 Data Acquisition (DAQ) has been built using a combination of commercial and custom electronics, organized using the MIDAS framework. The DAQ system needs to suppress a high rate of background events from 39Ar beta decays. This suppression is implemented using a combination of online firmware and software-based event filtering. We will report on progress commissioning the DAQ system, as well as the development of the web-based user interface.

  19. Data acquisition and readout system for the LUX dark matter experiment

    DOE PAGES

    Akerib, D. S.; Bai, X.; Bedikian, S.; ...

    2011-11-28

    LUX is a two-phase (liquid/gas) xenon time projection chamber designed to detect nuclear recoils from interactions with dark matter particles. Signals from the LUX detector are processed by custom-built analog electronics which provide properly shaped signals for the trigger and data acquisition (DAQ) systems. The DAQ is comprised of commercial digitizers with firmware customized for the LUX experiment. Data acquisition systems in rare-event searches must accommodate high rate and large dynamic range during precision calibrations involving radioactive sources, while also delivering low threshold for maximum sensitivity. The LUX DAQ meets these challenges using real-time baseline suppression that allows for a maximum event acquisition rate in excess of 1.5 kHz with virtually no deadtime. This work describes the LUX DAQ and the novel acquisition techniques employed in the LUX experiment.
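
    A minimal sketch of the baseline (zero) suppression idea, assuming a flat baseline and a fixed threshold (the thresholds and output format are illustrative, not the LUX firmware):

    ```cpp
    // Baseline suppression sketch: keep only ADC samples that rise above
    // baseline + threshold, storing each excursion with its start index,
    // so near-baseline samples never reach the event stream.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Pulse { size_t start; std::vector<uint16_t> samples; };

    std::vector<Pulse> suppressBaseline(const std::vector<uint16_t>& wave,
                                        uint16_t baseline,
                                        uint16_t threshold) {
        std::vector<Pulse> pulses;
        for (size_t i = 0; i < wave.size(); ++i) {
            if (wave[i] > baseline + threshold) {      // excursion begins
                Pulse p{i, {}};
                while (i < wave.size() && wave[i] > baseline + threshold)
                    p.samples.push_back(wave[i++]);
                pulses.push_back(std::move(p));
            }
        }
        return pulses;
    }

    int main() {
        std::vector<uint16_t> wave(1000, 100);          // baseline at 100
        for (size_t i = 500; i < 510; ++i) wave[i] = 400;  // one pulse
        auto pulses = suppressBaseline(wave, 100, 20);
        printf("kept %zu pulse(s); stored %zu of %zu samples\n",
               pulses.size(), pulses[0].samples.size(), wave.size());
        return 0;
    }
    ```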

  20. A CAMAC-VME-Macintosh data acquisition system for nuclear experiments

    NASA Astrophysics Data System (ADS)

    Anzalone, A.; Giustolisi, F.

    1989-10-01

    A multiprocessor system for data acquisition and analysis in low-energy nuclear physics has been realized. The system is built around CAMAC, the VMEbus, and the Macintosh PC. Multiprocessor software has been developed, using RTF, MACsys, and CERN cross-software. The execution of several programs that run on several VME CPUs and on an external PC is coordinated by a mailbox protocol. No operating system is used on the VME CPUs. The hardware, software, and system performance are described.

  1. Applications of AN OO Methodology and Case to a Daq System

    NASA Astrophysics Data System (ADS)

    Bee, C. P.; Eshghi, S.; Jones, R.; Kolos, S.; Magherini, C.; Maidantchik, C.; Mapelli, L.; Mornacchi, G.; Niculescu, M.; Patel, A.; Prigent, D.; Spiwoks, R.; Soloviev, I.; Caprini, M.; Duval, P. Y.; Etienne, F.; Ferrato, D.; Le van Suu, A.; Qian, Z.; Gaponenko, I.; Merzliakov, Y.; Ambrosini, G.; Ferrari, R.; Fumagalli, G.; Polesello, G.

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool and the DAQ components which have been developed, and we relate our experiences with the method and tool, its integration into our development environment and the spiral life-cycle it supports.

  2. Dopamine quinones activate microglia and induce a neurotoxic gene expression profile: relationship to methamphetamine-induced nerve ending damage.

    PubMed

    Kuhn, Donald M; Francescutti-Verbeem, Dina M; Thomas, David M

    2006-08-01

    Methamphetamine (METH) intoxication leads to persistent damage of dopamine (DA) nerve endings of the striatum. Recently, we and others have suggested that the neurotoxicity associated with METH is mediated by extensive microglial activation. DA itself has been shown to play an obligatory role in METH neurotoxicity, possibly through the formation of quinone species. We show presently that DA-quinones (DAQ) cause a time-dependent activation of cultured microglial cells. Microarray analysis of the effects of DAQ on microglial gene expression revealed that 101 genes were significantly changed in expression, with 73 genes increasing and 28 genes decreasing in expression. Among those genes differentially regulated by DAQ were those often associated with neurotoxic conditions including inflammation, cytokines, chemokines, and prostaglandins. In addition, microglial genes associated with a neuronally protective phenotype were among those that were downregulated by DAQ. These results implicate DAQ as one species that could cause early activation of microglial cells in METH intoxication, manifested as an alteration in the expression of a broad biomarker panel of genes. These results also link oxidative stress, chemical alterations in DA to its quinone, and microglial activation as part of a cascade of glial-neuronal crosstalk that can amplify METH-induced neurotoxicity.

  3. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    NASA Astrophysics Data System (ADS)

    Vanderlaan, J. F.; Cummings, J. W.

    1993-10-01

    The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software are chronicled, including observations of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and higher computer I/O speeds are expected to further increase data rates.

  4. Data acquisition system issues for large experiments

    NASA Astrophysics Data System (ADS)

    Siskind, E. J.

    2007-09-01

    This talk consists of personal observations on two classes of data acquisition ("DAQ") systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic "lessons learned" recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of "system-on-a-chip" ("SOC") or "platform" field-programmable gate arrays ("FPGAs") in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed.

  5. Asymmetric Data Acquisition System for an Endoscopic PET-US Detector

    NASA Astrophysics Data System (ADS)

    Zorraquino, Carlos; Bugalho, Ricardo; Rolo, Manuel; Silva, Jose C.; Vecklans, Viesturs; Silva, Rui; Ortigão, Catarina; Neves, Jorge A.; Tavernier, Stefaan; Guerra, Pedro; Santos, Andres; Varela, João

    2016-02-01

    According to current prognosis studies of pancreatic cancer, the survival rate is still as low as 6%, mainly due to late detection. Taking into account the location of the disease within the body, and making use of the level of miniaturization that can be achieved in radiation detectors at the present time, the EndoTOFPET-US collaboration aims at the development of a multimodal imaging technique for endoscopic pancreas exams that combines the benefits of high-resolution metabolic information from time-of-flight (TOF) positron emission tomography (PET) with anatomical information from ultrasound (US). A system with such capabilities calls for an application-specific high-performance data acquisition (DAQ) system able to control and read out data from different detectors. The system is composed of two novel detectors: a PET head extension for a commercial US endoscope placed internally close to the region-of-interest (ROI), and a PET plate placed over the patient's abdomen in coincidence with the PET head. These two detectors send asymmetric data streams that need to be handled by the DAQ system. The approach chosen to cope with these needs is a DAQ capable of performing multi-level triggering, distributed across the two on-detector electronics and the off-detector electronics placed inside the reconstruction workstation. This manuscript provides an overview of the design of this innovative DAQ system and, based on results obtained with final prototypes of the two detectors and the DAQ, we conclude that a distributed multi-level triggering DAQ system is suitable for endoscopic PET detectors and shows potential for application in different scenarios with asymmetric sources of data.

  6. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

    Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption due to various energetic particles such as alpha and beta particles or neutrons. Hence, a major challenge in the development of DAQ in HEP experiments is to establish an error-resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-bit CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ, utilizing an optical link with channel coding and efficient memory management modules, can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
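
    The CRC element of this scheme is standard; below is a hedged sketch using the common reflected CRC-32 polynomial 0xEDB88320 (the paper does not specify its polynomial or framing):

    ```cpp
    // Bitwise CRC-32 over a data packet: the sender appends the CRC and
    // the receiver recomputes it, so a radiation-induced bit flip in
    // transit is detected as a mismatch.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    uint32_t crc32(const std::vector<uint8_t>& data) {
        uint32_t crc = 0xFFFFFFFFu;
        for (uint8_t byte : data) {
            crc ^= byte;
            for (int bit = 0; bit < 8; ++bit)
                crc = (crc >> 1) ^ (0xEDB88320u & (-(crc & 1u)));
        }
        return ~crc;
    }

    int main() {
        std::vector<uint8_t> packet = {'D', 'A', 'Q', 0x01, 0x02};
        uint32_t tx = crc32(packet);       // CRC appended by the sender
        packet[3] ^= 0x10;                 // simulate a radiation upset
        bool corrupted = (crc32(packet) != tx);
        printf("CRC mismatch detected: %s\n", corrupted ? "yes" : "no");
        return 0;
    }
    ```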

  7. Development of MATLAB software to control data acquisition from a multichannel systems multi-electrode array.

    PubMed

    Messier, Erik

    2016-08-01

    A Multichannel Systems (MCS) microelectrode array data acquisition (DAQ) unit is used to collect multichannel electrograms (EGM) from a Langendorff-perfused rabbit heart system to study sudden cardiac death (SCD). MCS provides software through which data being processed by the DAQ unit can be displayed and saved, but this software's combined utility with MATLAB is not very effective. MCS's software stores recorded EGM data in a MathCad (MCD) format, which is then converted to a text file format. These text files are very large, and it is therefore very time consuming to import the EGM data into MATLAB for real-time analysis. Therefore, customized MATLAB software was developed to control the acquisition of data from the MCS DAQ unit and provide specific laboratory accommodations for this study of SCD. The developed DAQ unit control software is able to accurately provide real-time display of EGM signals; record and save EGM signals in MATLAB in a desired format; and produce real-time analysis of the EGM signals; all through an intuitive GUI.

  8. Orthos, an alarm system for the ALICE DAQ operations

    NASA Astrophysics Data System (ADS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow-up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issues tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  9. Research of Fast DAQ system in KSTAR Thomson scattering diagnostic

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Kim, H. J.; Yamada, I.; Funaba, H.; Kim, Y. G.; Kim, D. Y.

    2017-12-01

    The Thomson scattering diagnostic is one of the most important diagnostic systems in fusion plasma research. It provides reliable electron temperature and density profiles in magnetically confined plasma. A Q-switched Nd:YAG Thomson system was installed several years ago in the KSTAR tokamak to measure electron temperature and density profiles. For the KSTAR Thomson scattering system, a charge-to-digital conversion (QDC) type data acquisition system was used to measure the pulsed Thomson signal. Recently, however, an error was found in the Te, ne calculation, because the QDC system had integrated the pulsed Thomson signal together with a stray-light-like signal. To overcome such errors, we introduce a fast data acquisition (F-DAQ) system. To test this, we use a CAEN V1742, a Versa Module Eurocard bus (VMEbus) type 12-bit, 32-channel switched-capacitor digitizer sampling at 5 GS/s. In this experiment, we compare the calculated Te results from Thomson scattering data measured simultaneously with the QDC and F-DAQ systems. In the F-DAQ system, the shape of the pulse is restored by fitting.
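
    What the waveform digitizer buys over a gated QDC can be sketched in a few lines: estimate the pedestal (including any flat stray-light-like component) from pre-trigger samples and integrate only the baseline-subtracted pulse; the window positions below are assumptions:

    ```cpp
    // Pedestal-subtracted charge integration from a digitized waveform,
    // in contrast to a QDC that integrates everything inside its gate.
    #include <cstdio>
    #include <vector>

    double integratePulse(const std::vector<double>& wave,
                          size_t preSamples,     // baseline-only region
                          size_t pulseStart, size_t pulseEnd) {
        double baseline = 0.0;
        for (size_t i = 0; i < preSamples; ++i) baseline += wave[i];
        baseline /= preSamples;
        double charge = 0.0;
        for (size_t i = pulseStart; i < pulseEnd; ++i)
            charge += wave[i] - baseline;        // pedestal-free integral
        return charge;
    }

    int main() {
        std::vector<double> wave(200, 5.0);      // stray-light pedestal = 5
        for (size_t i = 100; i < 110; ++i) wave[i] = 50.0;  // Thomson pulse
        printf("charge: %.1f (a QDC gating the whole window would also "
               "integrate the pedestal)\n",
               integratePulse(wave, 50, 100, 110));
        return 0;
    }
    ```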

  10. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.

  11. Data Acquisition (DAQ) system dedicated for remote sensing applications on Unmanned Aerial Vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Keleshis, C.; Ioannou, S.; Vrekoussis, M.; Levin, Z.; Lange, M. A.

    2014-08-01

    Continuous advances in unmanned aerial vehicles (UAV) and the increased complexity of their applications raise the demand for improved data acquisition systems (DAQ). These improvements may comprise low power consumption, low volume and weight, robustness, modularity and the capability to interface with various sensors and peripherals while maintaining high sampling rates and processing speeds. Such a system has been designed and developed and is currently integrated on the Autonomous Flying Platforms for Atmospheric and Earth Surface Observations (APAESO/NEA-YΠOΔOMH/NEKΠ/0308/09); however, it can be easily adapted to any UAV or any other mobile vehicle. The system consists of a single-board computer with a dual-core processor, rugged surface-mount memory and storage device, analog and digital input-output ports and many other peripherals that enhance its connectivity with various sensors, imagers and on-board devices. The system is powered by a high-efficiency power supply board. Additional boards such as frame-grabbers, differential global positioning system (DGPS) satellite receivers, and general packet radio service (3G-4G-GPRS) modems for communication redundancy have been interfaced to the core system and are used whenever there is a mission need. The onboard DAQ system can be preprogrammed for automatic data acquisition, or it can be remotely operated during the flight from the ground control station (GCS) using a graphical user interface (GUI) which has been developed and will also be presented in this paper. The unique design of the GUI and the DAQ system enables the synchronized acquisition of a variety of scientific and UAV flight data in a single core location. The new DAQ system and the GUI have been successfully utilized in several scientific UAV missions. In conclusion, the novel DAQ system provides the UAV and remote-sensing community with a new tool capable of reliably acquiring, processing, storing and transmitting data from any sensor integrated on a UAV.

  12. SM-4 computer and CAMAC equipment in automating a ROMS-2A mass spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bulgakov, R.Sh.; Gafurov, R.A.; Safin, D.N.

    1988-05-01

    A data acquisition system was developed that measures mass peaks on three parameters as well as working parameters. The computer memory stores 40 kHz signals, with an ADC cycle time of 10 μs. A structural diagram is given, together with upgrades made to the CAMAC apparatus to improve its speed and extend its applications. Working parameters such as the pressures in pipes and the temperature in the test zone can be monitored by splitting the measurements into two alternating cycles with a controlled frequency: gas composition and technological parameter measurement.

  13. A new data acquisition system for the CMS Phase 1 pixel detector

    NASA Astrophysics Data System (ADS)

    Kornmayer, A.

    2016-12-01

    A new pixel detector will be installed in the CMS experiment during the extended technical stop of the LHC at the beginning of 2017. The new pixel detector, built from four layers in the barrel region and three layers on each end of the forward region, is equipped with upgraded front-end readout electronics, specifically designed to handle the high particle hit rates created in the LHC environment. The DAQ back-end was entirely redesigned to handle the increased number of readout channels, the higher data rates per channel and the new digital data format. Based entirely on the microTCA standard, new front-end controller (FEC) and front-end driver (FED) cards have been developed, prototyped and produced, with custom optical-link mezzanines mounted on the FC7 AMC and custom firmware. While the new detector is being assembled, the DAQ system is being set up, and its integration into the CMS central DAQ system is being tested by running the pilot blade detector already installed in CMS. This work describes the DAQ system and the integration tests, and outlines the activities leading up to the commissioning of the final system at CMS in 2017.

  14. Web-based DAQ systems: connecting the user and electronics front-ends

    NASA Astrophysics Data System (ADS)

    Lenzi, Thomas

    2016-12-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
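
    A minimal sketch of the server side of such a web-based DAQ, assuming the third-party Python "websockets" package rather than the authors' FPGA/Microblaze firmware: the server pushes readout samples to any connected browser over a WebSocket. The port and message format are made up for illustration.

      import asyncio
      import json
      import random

      import websockets  # third-party; recent versions pass only the connection object

      async def stream(ws):
          # In the real system this would read the front-end chip; here we fake it.
          while True:
              await ws.send(json.dumps({"channel": 3, "adc": random.randint(0, 1023)}))
              await asyncio.sleep(0.1)      # ~10 samples per second

      async def main():
          async with websockets.serve(stream, "0.0.0.0", 8765):
              await asyncio.Future()        # serve forever

      if __name__ == "__main__":
          asyncio.run(main())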

  15. Data Acquisition System of Nobeyama MKID Camera

    NASA Astrophysics Data System (ADS)

    Nagai, M.; Hisamatsu, S.; Zhai, G.; Nitta, T.; Nakai, N.; Kuno, N.; Murayama, Y.; Hattori, S.; Mandal, P.; Sekimoto, Y.; Kiuchi, H.; Noguchi, T.; Matsuo, H.; Dominjon, A.; Sekiguchi, S.; Naruse, M.; Maekawa, J.; Minamidani, T.; Saito, M.

    2018-05-01

    We are developing a superconducting camera based on microwave kinetic inductance detectors (MKIDs) to observe 100-GHz continuum with the Nobeyama 45-m telescope. A data acquisition (DAQ) system for the camera has been designed to operate the MKIDs with the telescope. This system is required to connect the telescope control system (COSMOS) to the readout system of the MKIDs (MKID DAQ), which employs the frequency-sweeping probe scheme. The DAQ system is also required to record the reference signal of the beam switching, used by the analysis pipeline for demodulation in order to suppress the sky fluctuation. The system has to be able to merge and save all data acquired by both the camera and the telescope, including the cryostat temperature and pressure and the telescope pointing. A collection of software implementing these functions, running as a TCP/IP server on a workstation, was developed. The server accepts commands and observation scripts from COSMOS and then issues commands to MKID DAQ to configure and start data acquisition. We commissioned the MKID camera on the Nobeyama 45-m telescope and obtained successful scan signals from the atmosphere and from the Moon.

  16. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high-speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high-performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program, where Kinetic Systems equipment allows tokamak data to be acquired 4 to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  17. A memory module for experimental data handling

    NASA Astrophysics Data System (ADS)

    De Blois, J.

    1985-02-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need for direct memory access in computer-controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the space available for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment, a list and a fifo mode. This is achieved by connecting the main parts, namely the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory size is variable from 1 to 64K (24-bit) words, depending on the type of memory chips used. The application part, which forms 1/3 of the module, is specially designed for each application and is added to the memory via an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.
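
    The three modes can be illustrated with a small software model (purely illustrative, not the CAMAC hardware itself), assuming a 64K-word, 24-bit memory:

      from collections import deque

      WORD_MASK = 0xFFFFFF  # 24-bit data words

      class MemoryModule:
          def __init__(self, size=65536):
              self.mem = [0] * size
              self.fifo = deque()

          def increment(self, address):
              # increment mode: each measured value addresses a cell whose
              # content is incremented, building a histogram in the module
              self.mem[address] = (self.mem[address] + 1) & WORD_MASK

          def list_store(self, address, value):
              # list mode: successive events are written to successive cells
              self.mem[address] = value & WORD_MASK

          def fifo_push(self, value):
              # fifo mode: temporary buffering that decouples the measurement
              # side from the (slower) CAMAC readout cycles
              self.fifo.append(value & WORD_MASK)

          def fifo_pop(self):
              return self.fifo.popleft()

      m = MemoryModule()
      m.increment(0x1F)         # histogram a measured value
      m.fifo_push(0xABCDEF)
      print(m.mem[0x1F], hex(m.fifo_pop()))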

  18. Evaluation of Discrimination Technologies and Classification Results Live Site Demonstration: Former Waikoloa Maneuver Area

    DTIC Science & Technology

    2015-06-01

    Report fragments: "National Instruments. The National Instruments DAQ is a full-featured PC running Windows 7. The DAQ, electromagnetic transmitter, and batteries for the ..."; glossary fragments: "... electromagnetic induction; Environet, Environet, Inc.; ESTCP, Environmental Security Technology Certification Program; ftp, file transfer protocol; FUDS, formerly used ..."; "... capabilities of a currently available advanced electromagnetic induction sensor developed specifically for discrimination on real sites under operational ..."

  19. In vitro and in vivo effects of 2,4 diaminoquinazoline inhibitors of the decapping scavenger enzyme DcpS: Context-specific modulation of SMN transcript levels

    PubMed Central

    Androphy, Elliot J.; Calo, Alessandro; Potter, Kyle; Custer, Sara K.; Du, Sarah; Foley, Timothy L.; Gopalsamy, Ariamala; Reedich, Emily J.; Gordo, Susana M.; Gordon, William; Hosea, Natalie; Jones, Lyn H.; Krizay, Daniel K.; LaRosa, Gregory; Li, Hongxia; Mathur, Sachin; Menard, Carol A.; Patel, Paraj; Ramos-Zayas, Rebeca; Rietz, Anne; Rong, Haojing; Zhang, Baohong; Tones, Michael A.

    2017-01-01

    C5-substituted 2,4-diaminoquinazoline inhibitors of the decapping scavenger enzyme DcpS (DAQ-DcpSi) have been developed for the treatment of spinal muscular atrophy (SMA), which is caused by genetic deficiency in the Survival Motor Neuron (SMN) protein. These compounds are claimed to act as SMN2 transcriptional activators, but the data underlying that claim are equivocal. In addition, it is unclear whether the claimed effects on SMN2 are a direct consequence of DcpS inhibition or might be a consequence of lysosomotropism, which is known to be neuroprotective. DAQ-DcpSi effects were characterized in cells in vitro utilizing DcpS knockdown and 7-methyl analogues as probes for DcpS- vs non-DcpS-mediated effects. We also performed analyses of Smn transcript levels, RNA-Seq analysis of the transcriptome and SMN protein in order to identify affected pathways underlying the therapeutic effect, and studied lysosomotropic and non-lysosomotropic DAQ-DcpSi effects in 2B/- SMA mice. Treatment of cells caused modest and transient SMN2 mRNA increases with either no change or a decrease in SMNΔ7, and no change in SMN1 transcripts or SMN protein. RNA-Seq analysis of DAQ-DcpSi-treated N2a cells revealed significant changes in expression (both up and down) of approximately 2,000 genes across a broad range of pathways. Treatment of 2B/- SMA mice with both lysosomotropic and non-lysosomotropic DAQ-DcpSi compounds had similar effects on the disease phenotype, indicating that the therapeutic mechanism of action is not a consequence of lysosomotropism. In striking contrast to the findings in vitro, Smn transcripts were robustly changed in tissues, but there was no increase in SMN protein levels in spinal cord. We conclude that DAQ-DcpSi compounds have reproducible benefit in SMA mice and a broad spectrum of biological effects in vitro and in vivo, but these are complex, context specific, and not the result of simple SMN2 transcriptional activation. PMID:28945765

  20. Development of a cost-effective and flexible vibration DAQ system for long-term continuous structural health monitoring

    NASA Astrophysics Data System (ADS)

    Nguyen, Theanh; Chan, Tommy H. T.; Thambiratnam, David P.; King, Les

    2015-12-01

    In the structural health monitoring (SHM) field, long-term continuous vibration-based monitoring is becoming increasingly popular, as it can keep track of the health status of structures during their service lives. However, implementing such a system is not always feasible due to the ongoing conflict between budget constraints and the need for sophisticated systems to monitor real-world structures under their demanding in-service conditions. To address this problem, this paper presents a comprehensive development of a cost-effective and flexible vibration DAQ system for long-term continuous SHM of a newly constructed institutional complex, with a special focus on the main building. First, the selection of sensor type and sensor positions is scrutinized to overcome adversities such as low-frequency and low-level vibration measurements. To tackle the sparse measurement problem economically, a cost-optimized Ethernet-based peripheral DAQ model is adopted to form the system skeleton. A combination of a high-resolution timing coordination method, based on the TCP/IP command communication medium, and a periodic system resynchronization strategy is then proposed to synchronize data from multiple distributed DAQ units. The results of both experimental evaluations and experimental-numerical verifications show that the proposed DAQ system in general, and the data synchronization solution in particular, work well and can provide a promising cost-effective and flexible alternative for use in real-world SHM projects. Finally, the paper demonstrates simple but effective ways to make use of the developed monitoring system for long-term continuous structural health evaluation, as well as to use the instrumented building as a multi-purpose benchmark structure for studying not only practical SHM problems but also synchronization-related issues.
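
    The timing-coordination idea can be sketched as an NTP-style offset estimate over a TCP command channel (an illustration only; the host name, port and one-line "TIME?" protocol are assumptions, not the authors' protocol):

      import socket
      import time

      def estimate_offset(host="daq-unit-1.local", port=5555):
          """Estimate a remote DAQ unit's clock offset from one round trip."""
          with socket.create_connection((host, port), timeout=2.0) as s:
              t0 = time.time()                       # local send time
              s.sendall(b"TIME?\n")                  # hypothetical query command
              t_remote = float(s.recv(64).decode())  # remote timestamp, seconds
              t1 = time.time()                       # local receive time
          rtt = t1 - t0
          # Assume a symmetric network delay; accuracy is limited by rtt/2,
          # which is why periodic resynchronization is still needed.
          return t_remote - (t0 + rtt / 2.0), rtt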

  1. artdaq: DAQ software development made simple

    NASA Astrophysics Data System (ADS)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This last feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. Since art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support has been added for an alternate mode of running whereby data from some subdetector components are only streamed if requested; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI through which users can configure details of their DAQ system has been implemented, increasing the system's ease of use. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment come new ideas for how artdaq can be made more flexible and powerful. The above improvements are described, along with potential ideas for the future.

  2. Online polarimetry of the Nuclotron internal deuteron and proton beams

    NASA Astrophysics Data System (ADS)

    Isupov, A. Yu

    2017-12-01

    The spin studies at the Nuclotron require fast and precise determination of the deuteron and proton beam polarization. For these purposes a new, powerful VME-based data acquisition (DAQ) system has been designed for the Deuteron Spin Structure setup placed at the Nuclotron Internal Target Station. The DAQ system is built using the netgraph-based data acquisition and processing framework ngdp. The software dealing with the VME hardware is a set of netgraph nodes in the form of loadable kernel modules, and therefore runs in the operating-system kernel context. The implementation-specific nodes and the user-context utilities are described. The online representation of events by ROOT classes allows us to generalize the code for histogram filling and polarization calculations. The DAQ system was successfully used during the 53rd and 54th Nuclotron runs, and its suitability for online polarimetry has been demonstrated.

  3. Antioxidant intake and bone status in a cross-sectional study of Brazilian women with osteoporosis.

    PubMed

    De França, Natasha A G; Camargo, Marilia B R; Lazaretti-Castro, Marise; Martini, Lígia Araújo

    2013-04-01

    This study aimed to investigate the association between antioxidant intake and bone mineral density (BMD) in postmenopausal women with osteoporosis. We conducted a cross-sectional study with 150 women, mean age 68.7 (SD 9.1) years. BMD and body composition were obtained using dual-energy X-ray absorptiometry (DXA). We assessed anthropometric measures and dietary intake and applied an adapted Dietary Antioxidant Quality Score (a-DAQS) to evaluate antioxidant consumption. Of the women, 65.3% had higher scores on the a-DAQS. We found no relationship between the a-DAQS and BMD; however, we observed an inverse correlation between vitamin A and lumbar spine (LS) BMD in g/cm(2) (r = -0.201; p = 0.013). An analysis of variance (ANOVA) test also showed that vitamin A was negatively associated with LS BMD (F = 6.143; p = 0.013), although the association lost significance when a multivariate analysis was applied. The a-DAQS showed no association with BMD; vitamin A showed a negative correlation with BMD, but this association disappeared when the other antioxidants were taken together. Our findings encourage an antioxidant-based dietary approach to osteoporosis prevention and treatment, since the negative effect of vitamin A was neutralized by the intake of such nutrients. © The Author(s) 2015.

  4. On the Sensitivity of the HAWC Observatory to Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Hays, E.; McEnery, Julie E.

    2011-01-01

    We present the sensitivity of HAWC to Gamma-Ray Bursts (GRBs). HAWC is a very high-energy gamma-ray observatory currently under construction in Mexico at an altitude of 4100 m. It will observe atmospheric air showers via the water Cherenkov method. HAWC will consist of 300 large water tanks, each instrumented with 4 photomultipliers. HAWC has two data acquisition (DAQ) systems. The main DAQ system reads out coincident signals in the tanks and reconstructs the direction and energy of individual atmospheric showers. The scaler DAQ counts the hits in each photomultiplier tube (PMT) in the detector and searches for a statistical excess over the noise of all PMTs. We show that HAWC has a realistic opportunity to observe the high-energy power-law components of GRBs that extend at least up to 30 GeV, as has been observed by Fermi LAT. The two DAQ systems have an energy threshold that is low enough to observe events similar to GRB 090510 and GRB 090902b with the characteristics observed by Fermi LAT. HAWC will provide information about the high-energy spectra of GRBs, which in turn will lead to a better understanding of e-pair attenuation in GRB jets and extragalactic background light absorption, as well as establishing the highest energy to which GRBs accelerate particles.
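
    The scaler-DAQ search can be illustrated with a toy significance estimate, assuming simple Poisson counting statistics (the real analysis is more involved, and the numbers below are invented):

      import math

      def excess_significance(n_on, n_expected):
          # For large counts, Poisson fluctuations are ~sqrt(expected),
          # so a hit excess over noise can be scored with a simple z-value.
          return (n_on - n_expected) / math.sqrt(n_expected)

      print(excess_significance(n_on=10_450, n_expected=10_000))  # ~4.5 sigma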

  5. A Low-Cost Modular Platform for Heterogeneous Data Acquisition with Accurate Interchannel Synchronization

    PubMed Central

    Blanco-Claraco, José Luis; López-Martínez, Javier; Torres-Moreno, José Luis; Giménez-Fernández, Antonio

    2015-01-01

    Most experimental fields of science and engineering require the use of data acquisition systems (DAQ), devices in charge of sampling and converting electrical signals into digital data and, typically, performing all of the required signal preconditioning. Since commercial DAQ systems are normally focused on specific types of sensors and actuators, systems engineers may need to employ mutually-incompatible hardware from different manufacturers in applications demanding heterogeneous inputs and outputs, such as small-signal analog inputs, differential quadrature rotatory encoders or variable current outputs. A common undesirable side effect of heterogeneous DAQ hardware is the lack of an accurate synchronization between samples captured by each device. To solve such a problem with low-cost hardware, we present a novel modular DAQ architecture comprising a base board and a set of interchangeable modules. Our main design goal is the ability to sample all sources at predictable, fixed sampling frequencies, with a reduced synchronization mismatch (<1 μs) between heterogeneous signal sources. We present experiments in the field of mechanical engineering, illustrating vibration spectrum analyses from piezoelectric accelerometers and, as a novelty in these kinds of experiments, the spectrum of quadrature encoder signals. Part of the design and software will be publicly released online. PMID:26516865

  6. Use of statecharts in the modelling of dynamic behaviour in the ATLAS DAQ prototype-1

    NASA Astrophysics Data System (ADS)

    Croll, P.; Duval, P.-Y.; Jones, R.; Kolos, S.; Sari, R. F.; Wheeler, S.

    1998-08-01

    Many applications within the ATLAS DAQ prototype-1 system have complicated dynamic behaviour which can be successfully modelled in terms of states and transitions between states. Previously, state diagrams implemented as finite-state machines have been used. Although effective, they become ungainly as system size increases. Harel statecharts address this problem by implementing additional features such as hierarchy and concurrency. The CHSM object-oriented language system is freeware which implements Harel statecharts as concurrent, hierarchical, finite-state machines (CHSMs). An evaluation of this language system by the ATLAS DAQ group has shown it to be suitable for describing the dynamic behaviour of typical DAQ applications. The language is currently being used to model the dynamic behaviour of the prototype-1 run-control system. The design is specified by means of a CHSM description file, and C++ code is obtained by running the CHSM compiler on the file. In parallel with the modelling work, a code generator has been developed which translates statecharts, drawn using the StP CASE tool, into the CHSM language. C++ code, describing the dynamic behaviour of the run-control system, has been successfully generated directly from StP statecharts using the CHSM generator and compiler. The validity of the design was tested using the simulation features of the Statemate CASE tool.
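
    The state/transition modelling idea can be sketched in plain Python (this is not CHSM syntax; the state and command names below are invented for illustration):

      class RunControl:
          TRANSITIONS = {
              ("INITIAL",    "boot"):      "LOADED",
              ("LOADED",     "configure"): "CONFIGURED",
              ("CONFIGURED", "start"):     "RUNNING",
              ("RUNNING",    "stop"):      "CONFIGURED",
              ("CONFIGURED", "unload"):    "INITIAL",
          }

          def __init__(self):
              self.state = "INITIAL"

          def handle(self, command):
              # Reject any command that is not a legal transition from here.
              try:
                  self.state = self.TRANSITIONS[(self.state, command)]
              except KeyError:
                  raise ValueError(f"'{command}' not allowed in state {self.state}")
              return self.state

      rc = RunControl()
      for cmd in ("boot", "configure", "start", "stop"):
          print(cmd, "->", rc.handle(cmd))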

  7. Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector

    NASA Astrophysics Data System (ADS)

    Mayer, Joseph A., II

    The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) follows a schedule of long physics runs, followed by periods of inactivity known as Long Shutdowns (LS). During these LS phases both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC, these upgrades improve its ability to deliver data to physicists; the more data the LHC can produce, the more opportunities there are for rare events of interest to appear. The experiments upgrade so that they can record the data and ensure such events are not missed. Currently the LHC is in Run 2, having completed the first of three Long Shutdowns. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems that span three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented: starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of the LS1 upgrade, to upgrading the old RODs from Run 1 to help them run more efficiently in Run 2. It also includes the research and development of FPGA-based DAQs and integrated-circuit emulators for the ITk upgrade, which will occur during LS3 in 2025.

  8. TRAMP; The next generation data acquisition for RTP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Haren, P.C.; Wijnoltz, F.

    1992-04-01

    The Rijnhuizen Tokamak Project RTP is a medium-sized tokamak experiment which, due to its pulsed nature, requires a very reliable data-acquisition system. Analysis of the limitations of an existing CAMAC-based data-acquisition system showed that a substantial increase in performance and flexibility could best be obtained by constructing an entirely new system. This paper discusses this system, called TRAMP (Transient Recorder and Amoeba Multi Processor), based on tailor-made transient recorders with a multiprocessor computer system in VME running Amoeba. The performance of TRAMP exceeds that of the CAMAC system by a factor of four. Plans to increase the flexibility and to further increase the performance are presented.

  9. A High-Temperature Combinatorial Technique for the Thermal Analysis of Materials

    DTIC Science & Technology

    2008-07-14

    Report fragments: "... the calorimetric cell. The power dissipated in the thermistor is determined experimentally from the current supplied to the thermistor and the ..."; "... electronics unit operates as a power supply for the PnSC sensors and as a data acquisition (DAQ) system for the input/output signals from each sensor. Both the power supply and DAQ operations are galvanically isolated to ensure a maximum signal-to-noise ratio for the acquired signals. The control ..."

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Gerry; et al.

    The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For loss-less data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified, unidirectional, but RFC 793-compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
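
    The receiving side of such a scheme can be sketched as a standard Linux TCP socket that accepts the FPGA's unidirectional stream and unpacks length-prefixed event fragments. The 4-byte big-endian length header and the port number are assumptions for illustration, not the documented CMS FED format:

      import socket
      import struct

      def recv_exact(conn, n):
          # TCP delivers a byte stream, so keep reading until n bytes arrive.
          buf = b""
          while len(buf) < n:
              chunk = conn.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("stream closed mid-fragment")
              buf += chunk
          return buf

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.bind(("0.0.0.0", 10000))
      srv.listen(1)
      conn, addr = srv.accept()     # the FPGA sender connects and only ever sends
      while True:
          (length,) = struct.unpack(">I", recv_exact(conn, 4))
          fragment = recv_exact(conn, length)
          print("fragment of", length, "bytes from", addr)  # hand off to event builder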

  11. Reviews Toy: Air swimmers Book: Their Arrows will Darken the Sun: The Evolution and Science of Ballistics Book: Physics Experiments for your Bag Book: Quantum Physics for Poets Equipment: SEP colour wheel kit Equipment: SEP colour mixing kit Software: USB DrDAQ App: iHandy Level Equipment: Photonics Explorer kit Web Watch

    NASA Astrophysics Data System (ADS)

    2012-01-01

    WE RECOMMEND Air swimmers Helium balloon swims like a fish Their Arrows will Darken the Sun: The Evolution and Science of Ballistics Ballistics book hits the spot Physics Experiments for your Bag Handy experiments for your lessons Quantum Physics for Poets Book shows the economic importance of physics SEP colour wheel kit Wheels investigate colour theory SEP colour mixing kit Cheap colour mixing kit uses red, green and blue LEDs iHandy Level iPhone app superbly measures angles Photonics Explorer kit Free optics kit given to schools WORTH A LOOK DrDAQ DrDAQ software gets an upgrade WEB WATCH Websites show range of physics

  12. A New Event Builder for CMS Run II

    NASA Astrophysics Data System (ADS)

    Albertsson, K.; Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.

  13. A new event builder for CMS Run II

    DOE PAGES

    Albertsson, K.; Andre, J-M; Andronidis, A.; ...

    2015-12-23

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.

  14. The psychometric characteristics of the revised depression attitude questionnaire (R-DAQ) in Pakistani medical practitioners: a cross-sectional study of doctors in Lahore.

    PubMed

    Haddad, Mark; Waqas, Ahmed; Sukhera, Ahmed Bashir; Tarar, Asad Zaman

    2017-07-27

    Depression is a common mental health problem and a leading contributor to the global burden of disease. The attitudes and beliefs of the public and of health professionals influence social acceptance and affect the esteem and help-seeking of people experiencing mental health problems. The attitudes of clinicians are particularly relevant to their role in accurately recognising depression and providing appropriate support and management. This study examines the characteristics of the revised depression attitude questionnaire (R-DAQ) with doctors working in healthcare settings in Lahore, Pakistan. A cross-sectional survey was conducted in 2015 using the R-DAQ. A convenience sample of 700 medical practitioners based in six hospitals in Lahore was approached to participate in the survey. The R-DAQ structure was examined using parallel analysis from polychoric correlations. Unweighted least squares analysis (ULSA) was used for factor extraction. Model fit was estimated using goodness-of-fit indices and the root mean square of standardized residuals (RMSR), and internal consistency reliability for the overall scale and subscales was assessed using reliability estimates based on Mislevy and Bock (BILOG 3 item analysis and test scoring with binary logistic models. Mooresville: Scientific Software, 55) and McDonald's omega statistic. Findings using this approach were compared with principal axis factor analysis based on a Pearson correlation matrix. Of the doctors approached, 601 (86%) consented to participate in the study. Exploratory factor analysis of R-DAQ scale responses demonstrated the same 3-factor structure as in the UK development study, though the analyses indicated removal of 7 of the 22 items because of weak loading or poor model fit. The 3-factor solution accounted for 49.8% of the common variance. Scale reliability and internal consistency were adequate: the total scale standardised alpha was 0.694; subscale reliability was 0.732 for professional confidence, 0.638 for therapeutic optimism/pessimism, and 0.769 for generalist perspective. The R-DAQ was developed with a predominantly UK-based sample of health professionals. This study indicates that the scale functions adequately and provides a valid measure of depression attitudes for medical practitioners in Pakistan, with the same factor structure as in the scale development sample. However, optimal scale function necessitated the removal of several items, with a 15-item scale enabling the most parsimonious factor solution for this population.

  15. Readout and DAQ for Pixel Detectors

    NASA Astrophysics Data System (ADS)

    Platkevic, Michal

    2010-01-01

    Data readout and acquisition control of pixel detectors demand the transfer of large amounts of data between the detector and the computer. For this purpose, dedicated interfaces are used that are designed with a focus on features such as speed, small dimensions and flexibility of use, built around digital signal processors, field-programmable gate arrays (FPGAs) and USB communication ports. This work summarizes the readout and DAQ system built for state-of-the-art pixel detectors of the Medipix family.

  16. A faster and more reliable data acquisition system for the full performance of the SciCRT

    DOE PAGES

    Sasai, Y.; Matsubara, Y.; Itow, Y.; ...

    2017-01-03

    The SciBar Cosmic Ray Telescope (SciCRT) is a massive scintillator tracker for observing cosmic rays in a very high-altitude environment in Mexico. The fully active tracker is based on the Scintillator Bar (SciBar) detector developed as a near detector for the KEK-to-Kamioka long-baseline neutrino oscillation experiment (K2K) in Japan. Since the original data acquisition (DAQ) system was developed for the accelerator experiment, we decided to develop a new, robust DAQ system optimized for the needs of our cosmic-ray experiment at the top of Mt. Sierra Negra (4600 m). One of our special requirements is to achieve a 10 times faster readout rate. We started by developing a new fast readout back-end board (BEB) based on 100 Mbps SiTCP, a hardware network processor developed for DAQ systems in high-energy physics experiments. The new BEB is potentially 20 times faster than the current one in the case of observing neutrons. Finally, we installed the new DAQ system, including the new BEBs, on part of the SciCRT in July 2015. The system has been operating since then. In this article, we describe the development, the basic performance of the new BEB, the status after the installation in the SciCRT, and the expected future performance.

  17. Event Recording Data Acquisition System and Experiment Data Management System for Neutron Experiments at MLF, J-PARC

    NASA Astrophysics Data System (ADS)

    Nakatani, T.; Inamura, Y.; Moriyama, K.; Ito, T.; Muto, S.; Otomo, T.

    Neutron scattering can be a powerful probe in the investigation of many phenomena in the materials and life sciences. The Materials and Life Science Experimental Facility (MLF) at the Japan Proton Accelerator Research Complex (J-PARC) is a leading center of experimental neutron science and boasts one of the most intense pulsed neutron sources in the world. The MLF currently has 18 experimental instruments in operation that support a wide variety of users from across a range of research fields. The instruments include optical elements, sample environment apparatus and detector systems that are controlled and monitored electronically throughout an experiment. Signals from these components and those from the neutron source are converted into a digital format by the data acquisition (DAQ) electronics and recorded as time-tagged event data in the DAQ computers using "DAQ-Middleware". Operating in event mode, the DAQ system produces extremely large data files (~GB) under various measurement conditions. Simultaneously, the measurement meta-data indicating each measurement condition is recorded in XML format by the MLF control software framework "IROHA". These measurement event data and meta-data are collected in the MLF common storage and cataloged by the MLF Experimental Database (MLF EXP-DB) based on a commercial XML database. The system provides a web interface for users to manage and remotely analyze experimental data.

  18. The new Langley Research Center advanced real-time simulation (ARTS) system

    NASA Technical Reports Server (NTRS)

    Crawford, D. J.; Cleveland, J. I., II

    1986-01-01

    Based on a survey of current local area network technology, with special attention paid to high-bandwidth and very low transport-delay requirements, NASA's Langley Research Center designed a new simulation subsystem using the computer automated measurement and control (CAMAC) network. This required significant modifications to the standard CAMAC system and the development of a network switch, a clocking system, new conversion equipment, new consoles, supporting software, etc. This system is referred to as the advanced real-time simulation (ARTS) system. It is presently being built at LaRC. This paper provides a functional and physical description of the hardware and a functional description of the software. The requirements which drove the design are presented, as well as current performance figures and status.

  19. The new CMS DAQ system for run-2 of the LHC

    DOE PAGES

    Bawej, Tomasz; Behrens, Ulf; Branson, James; ...

    2015-05-21

    The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts; secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder, with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and the HLT software to be used in the same way as for offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT-accepted events and monitoring meta-data are stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.

  20. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    NASA Astrophysics Data System (ADS)

    Quinto, M.; Cafagna, F.; Fiergolski, A.; Radicioni, E.

    2013-11-01

    The main goals of the TOTEM experiment at the LHC are the measurement of the elastic and total p-p cross sections and the study of diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, the data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the prospects of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

  1. Inexpensive Data Acquisition with a Sound Card

    NASA Astrophysics Data System (ADS)

    Hassan, Umer; Pervaiz, Saad; Anwar, Muhammad Sabieh

    2011-12-01

    Signal generators, oscilloscopes, and data acquisition (DAQ) systems are standard components of the modern experimental physics laboratory. The sound card, a built-in component in the ubiquitous personal computer, can be utilized for all three of these tasks [1,2] and offers an attractive option for labs in developing countries such as ours—Pakistan—where affordability is always of prime concern. In this paper, we describe in a recipe fashion how the sound card is used for DAQ and signal generation.
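
    In the same recipe spirit, a minimal sketch using the third-party Python packages "sounddevice" and "numpy" (an assumed software stack; the paper itself does not prescribe one) plays a test tone on the sound card output while recording it back on the input:

      import numpy as np
      import sounddevice as sd  # third-party package

      fs = 44100                                   # samples per second
      t = np.arange(fs) / fs                       # one second of sample times
      tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # 440 Hz sine, half amplitude

      # Play the tone and record one input channel simultaneously.
      recording = sd.playrec(tone.astype(np.float32), samplerate=fs, channels=1)
      sd.wait()                                    # block until playback/record end
      print("captured", len(recording), "samples; peak =", float(abs(recording).max()))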

  2. Performance of a segmented HPGe detector at KRISS.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Lee, S H; Park, Tae Soon; Oh, J S

    2018-04-01

    A 24-segment HPGe coaxial detector was set up with a digital data acquisition (DAQ) system. The DAQ was composed of a digitizer (5 × 10^7 samples/s), a Field-Programmable Gate Array (FPGA), and a real-time operating system. The Full Width at Half Maximum (FWHM), rise time, signal characteristics, and spectra of a 137Cs source were evaluated. The data were processed using an in-house developed gamma-ray tracking system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Developments of a new data acquisition system at ANNRI

    NASA Astrophysics Data System (ADS)

    Nakao, T.; Terada, K.; Kimura, A.; Nakamura, S.; Iwamoto, O.; Harada, H.; Katabuchi, T.; Igashira, M.; Hori, J.

    2017-09-01

    A new data acquisition system (DAQ system) has been developed at the Accurate Neutron-Nucleus Reaction Measurement Instrument (ANNRI) facility in the Japan Proton Accelerator Research Complex, Materials and Life Science Experimental Facility (J-PARC/MLF). DAQ systems for both the Ge detector system and the Li-glass detector system were tested by using a gold sample. The applicability of the time-of-flight method was checked. System performance was evaluated on the basis of digital conversion nonlinearity, energy resolution, multi-channel coincidence and dead time.

  4. Coincidence and covariance data acquisition in photoelectron and -ion spectroscopy. II. Analysis and applications

    NASA Astrophysics Data System (ADS)

    Mikosch, Jochen; Patchkovskii, Serguei

    2013-10-01

    We use an analytical theory of noisy Poisson processes, developed in the preceding companion publication, to compare coincidence and covariance measurement approaches in photoelectron and -ion spectroscopy. For non-unit detection efficiencies, coincidence data acquisition (DAQ) suffers from false coincidences. The rate of false coincidences grows quadratically with the rate of elementary ionization events. To minimize false coincidences for rare event outcomes, very low event rates may hence be required. Coincidence measurements exhibit high tolerance to noise introduced by unstable experimental conditions. Covariance DAQ on the other hand is free of systematic errors as long as stable experimental conditions are maintained. In the presence of noise, all channels in a covariance measurement become correlated. Under favourable conditions, covariance DAQ may allow orders of magnitude reduction in measurement times. Finally, we use experimental data for strong-field ionization of 1,3-butadiene to illustrate how fluctuations in experimental conditions can contaminate a covariance measurement, and how such contamination can be detected.
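
    The quadratic scaling of false coincidences can be made concrete with a simple Poisson estimate (an illustration only, not the authors' full theory of noisy Poisson processes):

      % With elementary ionization events Poisson-distributed at mean
      % \mu = \lambda T per laser shot (event rate \lambda, shot window T),
      % the probability of two or more events in one shot is
      \[
        P(k \ge 2) \;=\; 1 - e^{-\mu}(1 + \mu) \;\approx\; \tfrac{1}{2}\mu^{2}
        \qquad (\mu \ll 1),
      \]
      % so the false-coincidence rate grows as \lambda^{2} while the
      % true-coincidence rate grows only linearly in \lambda, which is why
      % studies of rare event outcomes must run at very low event rates.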

  5. The electronics and data acquisition system for the DarkSide-50 veto detectors

    NASA Astrophysics Data System (ADS)

    Agnes, P.; Agostino, L.; Albuquerque, I. F. M.; Alexander, T.; Alton, A. K.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Bottino, B.; Brigatti, A.; Brodsky, J.; Budano, F.; Bussino, S.; Cadeddu, M.; Cadoni, M.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Carlini, M.; Catalanotti, S.; Cavalcante, P.; Chepurnov, A.; Cocco, A. G.; Covone, G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Cecco, S.; De Deo, M.; De Vincenzi, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Foster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Giganti, C.; Goretti, A. M.; Granato, F.; Grandi, L.; Gromov, M.; Guan, M.; Guardincerri, Y.; Hackett, B. R.; Herner, K. R.; Hungerford, E. V.; Ianni, Aldo; Ianni, Andrea; James, I.; Jollet, C.; Keeter, K.; Kendziora, C. L.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kubankin, A.; Li, X.; Lissia, M.; Lombardi, P.; Luitz, S.; Ma, Y.; Machulin, I. N.; Mandarano, A.; Mari, S. M.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meyers, P. D.; Miletic, T.; Milincic, R.; Montanari, D.; Monte, A.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B. J.; Muratova, V. N.; Musico, P.; Napolitano, J.; Nelson, A.; Odrowski, S.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Parmeggiano, S.; Pelczar, K.; Pelliccia, N.; Pocar, A.; Pordes, S.; Pugachev, D. A.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A. L.; Riffard, Q.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Savarese, C.; Segreto, E.; Semenov, D. A.; Shields, E.; Singh, P. N.; Skorokhvatov, M. D.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Trinchese, P.; Unzhakov, E. V.; Vishneva, A.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A. W.; Westerdale, S.; Wilhelmi, J.; Wojcik, M. M.; Xiang, X.; Xu, J.; Yang, C.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhong, W.; Zhu, C.; Zuzel, G.

    2016-12-01

    DarkSide-50 is a detector for dark matter candidates in the form of weakly interacting massive particles. It utilizes a liquid argon time projection chamber for the inner main detector, surrounded by a liquid scintillator veto (LSV) and a water Cherenkov veto detector (WCV). The LSV and WCV act as the neutron and cosmogenic muon veto detectors for DarkSide-50. This paper describes the electronics and data acquisition system used for these two detectors. The system is made of a custom built front end electronics and commercial National Instruments high speed digitizers. The front end electronics, the DAQ, and the trigger system have been used to acquire data in the form of zero-suppressed waveform samples from the 110 PMTs of the LSV and the 80 PMTs of the WCV. The veto DAQ system has proven its performance and reliability. This electronics and DAQ system can be scaled and used as it is for the veto of the next generation DarkSide-20k detector.
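
    Zero suppression of the kind described can be sketched as follows (illustrative only; the threshold and pre/post window sizes are invented, not the DarkSide settings):

      def zero_suppress(waveform, threshold=10, pre=2, post=4):
          # Keep only samples near threshold crossings, plus a few pre- and
          # post-samples so the baseline around each pulse survives.
          keep = set()
          for i, sample in enumerate(waveform):
              if sample > threshold:
                  keep.update(range(max(0, i - pre), min(len(waveform), i + post + 1)))
          # Return (index, sample) pairs so the sparse record stays decodable.
          return [(i, waveform[i]) for i in sorted(keep)]

      print(zero_suppress([0, 1, 0, 2, 40, 80, 30, 3, 1, 0, 0, 0]))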

  6. The NIFFTE Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Qu, Hai; Niffte Collaboration

    2011-10-01

    The Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) will employ a novel, high granularity, pressurized Time Projection Chamber to measure fission cross-sections of the major actinides to high precision over a wide incident neutron energy range. These results will improve nuclear data accuracy and benefit the fuel cycle in the future. The NIFFTE data acquisition system (DAQ) has been designed and implemented on the prototype TPC. Lessons learned from engineering runs have been incorporated into some design changes that are being implemented before the next run cycle. A fully instrumented sextant of EtherDAQ cards (16 sectors, 496 channels) will be used for the next run cycle. The Maximum Integrated Data Acquisition System (MIDAS) has been chosen and customized to configure and run the experiment. It also meets the requirement for remote control and monitoring of the system. The integration of the MIDAS online database with the persistent PostgreSQL database has been implemented for experiment usage. The detailed design and current status of the DAQ system will be presented.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnes, P.; Agostino, L.; Albuquerque, I. F. M.

    DarkSide-50 is a detector for dark matter candidates in the form of weakly interacting massive particles. It utilizes a liquid argon time projection chamber for the inner main detector, surrounded by a liquid scintillator veto (LSV) and a water Cherenkov veto detector (WCV). The LSV and WCV act as the neutron and cosmogenic muon veto detectors for DarkSide-50. This paper describes the electronics and data acquisition system used for these two detectors. The system is made of a custom built front end electronics and commercial National Instruments high speed digitizers. The front end electronics, the DAQ, and the trigger system have been used to acquire data in the form of zero-suppressed waveform samples from the 110 PMTs of the LSV and the 80 PMTs of the WCV. The veto DAQ system has proven its performance and reliability. This electronics and DAQ system can be scaled and used as it is for the veto of the next generation DarkSide-20k detector.

  8. A USB-2 based portable data acquisition system for detector development and nuclear research

    NASA Astrophysics Data System (ADS)

    Jiang, Hao; Ojaruega, M.; Becchetti, F. D.; Griffin, H. C.; Torres-Isea, R. O.

    2011-10-01

    A highly portable high-speed CAMAC data acquisition system has been developed using Kmax software (Sparrow, Inc.) for Macintosh laptop and tower computers. It uses a USB-2 interface to the CAMAC crate controller with custom-written software drivers. Kmax permits 2D parameter gating and specific algorithms have been developed to facilitate the rapid evaluation of various multi-element nuclear detectors for energy and time-of-flight measurements. This includes tests using neutrons from 252Cf and a 2.5 MeV neutron generator as well as standard gamma calibration sources such as 60Co and 137Cs. In addition, the system has been used to measure gamma-gamma coincidences over extended time periods using radioactive sources (e.g., Ra-228, Pa-233, Np-237, and Am-243).

  9. The ALICE DAQ infoLogger

    NASA Astrophysics Data System (ADS)

    Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Grigore, A.; Ionita, C.; Delort, C.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Von Haller, B.; Alice Collaboration

    2014-04-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the detectors through 500 dedicated optical links at an aggregated and sustained rate of up to 10 gigabytes per second and stores them at up to 2.5 gigabytes per second. The infoLogger is the log system which centrally collects the messages issued by the thousands of processes running on the DAQ machines. It makes it possible to report errors on the fly and to keep a trace of runtime execution for later investigation. More than 500,000 messages are stored every day in a MySQL database, in a structured table keeping track of 16 indexing fields for each message (e.g. time, host, user, ...). The total amount of logs for 2012 exceeds 75 GB of data and 150 million rows. We present in this paper the architecture and implementation of this distributed logging system, consisting of a client programming API, local data-collector processes, a central server, and interactive human interfaces. We review the operational experience during the 2012 run, in particular the actions taken to ensure that shifters receive manageable and relevant content from the main log stream. Finally, we present the performance of this log system and its future evolution.
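
    A toy sketch of the client side of such a centralized log system (not the actual ALICE infoLogger API): each message carries structured indexing fields and is shipped to a collector over UDP. The field names and the collector address are assumptions:

      import json
      import socket
      import time

      COLLECTOR = ("127.0.0.1", 6006)  # hypothetical local collector process
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

      def log(severity, facility, message):
          # Structured fields make the central database indexable per message.
          record = {
              "timestamp": time.time(),
              "host": socket.gethostname(),
              "severity": severity,      # e.g. "Info", "Error"
              "facility": facility,      # e.g. the reporting DAQ process name
              "message": message,
          }
          sock.sendto(json.dumps(record).encode(), COLLECTOR)

      log("Error", "eventBuilder", "link 117 reported a CRC error")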

  10. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
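
    One plausible way to write the combined error model implied above (a hedged restatement; the ~4-5% figure is quoted from the exemplary data set, and the combination in quadrature is an assumption):

      % Variance of an integrated peak signal n: the Poisson counting term
      % plus a constant-relative-imprecision term that dominates at high signal,
      \[
        \sigma_{n}^{2} \;\approx\; n \;+\; (\alpha n)^{2},
        \qquad \alpha \approx 0.04\text{--}0.05,
      \]
      % so \sigma_n / n tends to \alpha at large n instead of falling off as
      % n^{-1/2}, which is why a Poisson-only model underestimates the error.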

  11. An OS9-UNIX data acquisition system with ECL readout

    NASA Astrophysics Data System (ADS)

    Ziem, P.; Beschorner, C.; Bohne, W.; Drescher, B.; Friese, T.; Kiehne, T.; Kluge, Ch.

    1996-02-01

    A new data acquisition system has been developed at the Hahn-Meitner-Institut to handle almost 550 parameters of nuclear physics experiments. The system combines a UNIX host running a portable data buffer router and a VME front-end based on the OS9 real time operating system. Different kinds of pulse analyzers are located in several CAMAC crates which are controlled by the VME system via a VICbus connection. Data readout is performed by means of an ECL daisy chain. Besides controlling CAMAC the main purpose of the VME front-end is event data formatting and histogramming. Using TCP/IP services, the UNIX host receives formatted data packages for data storage and display. During a beam time at the antiproton accelerator LEAR/CERN, the PS208 experiment has accumulated about 100 Gbyte of event data.

  12. An OS9-UNIX data acquisition system with ECL readout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziem, P.; Beschorner, C.; Bohne, W.

    1996-02-01

    A new data acquisition system has been developed at the Hahn-Meitner-Institut to handle almost 550 parameters of nuclear physics experiments. The system combines a UNIX host running a portable data buffer router and a VME front-end based on the OS9 real time operating system. Different kinds of pulse analyzers are located in several CAMAC crates which are controlled by the VME system via a VICbus connection. Data readout is performed by means of an ECL daisy chain. Besides controlling CAMAC the main purpose of the VME front-end is event data formatting and histogramming. Using TCP/IP services, the UNIX host receives formatted data packages for data storage and display. During a beam time at the antiproton accelerator LEAR/CERN, the PS208 experiment has accumulated about 100 Gbyte of event data.

  13. Development of a portable electrical impedance tomography data acquisition system for near-real-time spatial sensing

    NASA Astrophysics Data System (ADS)

    Huang, Shieh-Kung; Loh, Kenneth J.

    2015-04-01

    The main goal of this study was to develop and validate the performance of a miniature and portable data acquisition (DAQ) system designed for interrogating carbon nanotube (CNT)-based thin films for real-time spatial structural sensing and damage detection. Previous research demonstrated that the electrical properties of CNT-based thin film strain sensors were linearly correlated with applied strains. When coupled with an electrical impedance tomography (EIT) algorithm, the detection and localization of damage was possible. In short, EIT required that the film or "sensing skin" be interrogated along its boundaries. Electrical current was injected across a pair of boundary electrodes, and voltage was simultaneously recorded along the remaining electrode pairs. This was performed multiple times to obtain a large dataset needed for solving the EIT spatial conductivity mapping inverse problem. However, one of the main limitations of this technique was the large amount of time required for data acquisition. In order to facilitate the adoption of this technology and for field implementation purposes, a miniature DAQ that could interrogate these CNT-based sensing skins at high sampling rates was designed and tested. The prototype DAQ featured a Howland current source that could generate stable and controlled direct current. Measurement of boundary electrode voltages and the switching of the input, output, and measurement channels were achieved using multiplexer units. The DAQ prototype was fabricated on a two-layer printed circuit board, and it was designed for integration with a prototype wireless sensing system, which is the next phase of this research.
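
    A sketch of the boundary-interrogation sequence the abstract outlines, assuming the common adjacent-pair EIT protocol and a hypothetical 16-electrode sensing skin (the abstract does not state the electrode count or drive pattern):

        def eit_adjacent_patterns(n_electrodes=16):
            # One EIT "frame": inject current across each adjacent boundary
            # electrode pair and, for each drive pair, record the voltage on
            # every remaining adjacent pair that shares no electrode with it.
            patterns = []
            for d in range(n_electrodes):
                drive = (d, (d + 1) % n_electrodes)
                for s in range(n_electrodes):
                    sense = (s, (s + 1) % n_electrodes)
                    if set(sense) & set(drive):
                        continue  # skip pairs overlapping the drive pair
                    patterns.append((*drive, *sense))
            return patterns

        print(len(eit_adjacent_patterns()))  # 16 x 13 = 208 voltage readings per frame

    The size of this measurement set is exactly the "large dataset" the abstract refers to, and it is why a DAQ with fast multiplexer switching is the bottleneck for near-real-time imaging.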

  14. Intelligent FPGA Data Acquisition Framework

    NASA Astrophysics Data System (ADS)

    Bai, Yunpeng; Gaisbauer, Dominic; Huber, Stefan; Konorov, Igor; Levit, Dmytro; Steffen, Dominik; Paul, Stephan

    2017-06-01

    In this paper, we present the field programmable gate arrays (FPGA)-based framework intelligent FPGA data acquisition (IFDAQ), which is used for the development of DAQ systems for detectors in high-energy physics. The framework supports Xilinx FPGA and provides a collection of IP cores written in very high speed integrated circuit hardware description language, which use the common interconnect interface. The IP core library offers functionality required for the development of the full DAQ chain. The library consists of Serializer/Deserializer (SERDES)-based time-to-digital conversion channels, an interface to a multichannel 80-MS/s 10-b analog-digital conversion, data transmission, and synchronization protocol between FPGAs, event builder, and slow control. The functionality is distributed among FPGA modules built in the AMC form factor: front end and data concentrator. This modular design also helps to scale and adapt the DAQ system to the needs of the particular experiment. The first application of the IFDAQ framework is the upgrade of the read-out electronics for the drift chambers and the electromagnetic calorimeters (ECALs) of the COMPASS experiment at CERN. The framework will be presented and discussed in the context of this paper.

  15. Evaluation of a digital data acquisition system and optimization of n-γ discrimination for a compact neutron spectrometer.

    PubMed

    Giacomelli, L; Zimbal, A; Reginatto, M; Tittelmeier, K

    2011-01-01

    A compact NE213 liquid scintillation neutron spectrometer with a new digital data acquisition (DAQ) system is now in operation at the Physikalisch-Technische Bundesanstalt (PTB). With the DAQ system, developed by ENEA Frascati, neutron spectrometry with high count rates in the order of 5×10(5) s(-1) is possible, roughly an order of magnitude higher than with an analog acquisition system. To validate the DAQ system, a new data analysis code was developed and tests were done using measurements with 14-MeV neutrons made at the PTB accelerator. Additional analysis was carried out to optimize the two-gate method used for neutron and gamma (n-γ) discrimination. The best results were obtained with gates of 35 ns and 80 ns. This indicates that the fast and medium decay time components of the NE213 light emission are the ones that are relevant for n-γ discrimination with the digital acquisition system. This differs from what is normally implemented in the analog pulse shape discrimination modules, namely, the fast and long decay emissions of the scintillating light.
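
    A host-side sketch of the two-gate charge-comparison method with the 35 ns and 80 ns gates reported as optimal; the gate placement at the pulse onset and the toy pulse shapes below are illustrative assumptions:

        import numpy as np

        def two_gate_ratio(pulse, i0, dt_ns, short_ns=35.0, long_ns=80.0):
            # Integrate the digitized, baseline-subtracted pulse over a short
            # and a long gate, both opening at the pulse onset i0.  NE213
            # neutron pulses carry more delayed light, so their short/long
            # charge ratio is smaller than that of gamma pulses.
            q_short = pulse[i0:i0 + int(round(short_ns / dt_ns))].sum()
            q_long = pulse[i0:i0 + int(round(long_ns / dt_ns))].sum()
            return q_short / q_long

        # Toy pulses sampled at 1 GS/s: fast (gamma-like) vs slower (neutron-like) decay.
        t = np.arange(200)  # ns
        gamma_like = np.exp(-t / 5.0)
        neutron_like = 0.7 * np.exp(-t / 5.0) + 0.3 * np.exp(-t / 50.0)
        print(two_gate_ratio(gamma_like, 0, 1.0), two_gate_ratio(neutron_like, 0, 1.0))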

  16. Impulsiveness and trait displaced aggression among drug using female sex traders

    PubMed Central

    Clingan, Sarah E.; Fisher, Dennis G.; Pedersen, William C.; Reynolds, Grace L.; Xandre, Pamela

    2016-01-01

    Objective This study compared women who trade sex for drugs, money, or both with women who do not sex trade, and introduced the concept of trait displaced aggression to the literature on sex trading. Methods Female participants (n = 1055) were recruited from a low-income area of southern California. Measures included the Risk Behavior Assessment (RBA), Barratt Impulsivity Scale (BIS), Eysenck Impulsiveness Scale (EIS), and the Displaced Aggression Questionnaire (DAQ). Results Women who traded sex for both drugs and money used crack cocaine, powder cocaine, and alcohol significantly more, scored higher on the BIS and the EIS, and were significantly older. Those who sex traded only for drugs used more amphetamine and heroin and injected drugs on more days; they also scored higher on the DAQ and all of its subscales. Those who traded only for money used marijuana more and were more likely to use marijuana before sex. Conclusions This study may help address issues unique to those who sex trade for different commodities, in that both the drugs used and the underlying personality characteristics differ. PMID:27082265

  17. Upgraded photon calorimeter with integrating readout for Hall A Compton Polarimeter at Jefferson Lab

    DOE PAGES

    Friend, M.; Parno, D.; Benmokhtar, F.; ...

    2012-06-01

    The photon arm of the Compton polarimeter in Hall A of Jefferson Lab has been upgraded to allow for electron beam polarization measurements with better than 1% accuracy. The data acquisition system (DAQ) now includes an integrating mode, which eliminates several systematic uncertainties inherent in the original counting-DAQ setup. The photon calorimeter has been replaced with a Ce-doped Gd2SiO5 crystal, which has a bright output and fast response, and works well for measurements using the new integrating method at electron beam energies from 1 to 6 GeV.

  18. Does Introducing Imprecision around Probabilities for Benefit and Harm Influence the Way People Value Treatments?

    PubMed

    Bansback, Nick; Harrison, Mark; Marra, Carlo

    2016-05-01

    Imprecision in estimates of benefits and harms around treatment choices is rarely described to patients. Variation in sampling error between treatment alternatives (e.g., treatments have similar average risks, but one treatment has a larger confidence interval) can result in patients failing to choose the option that is best for them. The aim of this study is to use a discrete choice experiment to describe how 2 methods for conveying imprecision in risk influence people's treatment decisions. We randomized a representative sample of the Canadian general population to 1 of 3 surveys that sought choices between hypothetical treatments for rheumatoid arthritis based on different levels of 7 attributes: route and frequency of administration, chance of benefit, serious and minor side effects and life expectancy, and imprecision in benefit and side-effect estimates. The surveys differed in the way imprecision was described: 1) no imprecision, 2) quantitative description based on a range with a visual graphic, and 3) qualitative description simply describing the confidence in the evidence. The analyzed data were from 2663 respondents. Results suggested that more people understood imprecision when it was described qualitatively (88%) versus quantitatively (68%). Respondents who appeared to understand imprecision descriptions placed high value on increased precision regarding the actual benefits and harms of treatment, equivalent to the value placed on the information about the probability of serious side effects. Both qualitative and quantitative methods led to small but significant increases in decision uncertainty for choosing any treatment. Limitations included some issues in defining understanding of imprecision and the use of an internet survey of panel members. These findings provide insight into how conveying imprecision information influences patient treatment choices. © The Author(s) 2015.

  19. A system for the automated data-acquisition of fast transient signals in excitable membranes.

    PubMed

    Bustamante, J O

    1988-01-01

    This paper provides a description of a system for the acquisition of fast transient currents flowing across excitable membranes. The front end of the system consists of a CAMAC crate with plug-in modules. The modules provide control of CAMAC operations, analog to digital conversion, electronic memory storage and timing of events. The signals are transferred under direct memory access to an IBM PC microcomputer through a special-purpose interface. Voltage levels from a digital to analog board in the microcomputer are passed through multiplexers to produce the desired voltage pulse patterns to elicit the transmembrane currents. The dead time between consecutive excitatory voltage pulses is limited only by the computer data bus and the software characteristics. The dead time between data transfers can be reduced to the order of milliseconds, which is sufficient for most experiments with transmembrane ionic currents.

  20. Development of a data acquisition system using a RISC/UNIX workstation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Y.; Tanimori, T.; Yasu, Y.

    1993-05-01

    We have developed a compact data acquisition system on RISC/UNIX workstations. A SUN SPARCstation IPC was used, in which an extension bus "SBus" was linked to a VMEbus. The transfer rate achieved was better than 7 Mbyte/s between the VMEbus and the SUN. A device driver for CAMAC was developed in order to realize an interruptive feature in UNIX. In addition, list processing has been incorporated in order to keep the high priority of the data handling process in UNIX. The successful development of both the device driver and list processing has made it possible to realize good real-time behavior on the RISC/UNIX system. Based on this architecture, a portable and versatile data taking system has been developed, which consists of a graphical user interface, I/O handler, user analysis process, process manager and a CAMAC device driver.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. E. Lawson, R. Marsala, S. Ramakrishnan, X. Zhao, P. Sichta

    In order to provide improved and expanded experimental capabilities, the existing Transrex power supplies at PPPL are to be upgraded and modernized. Each of the 39 power supplies consists of two six pulse silicon controlled rectifier sections forming a twelve pulse power supply. The first modification is to split each supply into two independent six pulse supplies by replacing the existing obsolete twelve pulse firing generator with two commercially available six pulse firing generators. The second change replaces the existing control link with a faster system, with greater capacity, which will allow for independent control of all 78 power supply sections. The third change replaces the existing Computer Automated Measurement and Control (CAMAC) based fault detector with an Experimental Physics and Industrial Control System (EPICS) compatible unit, eliminating the obsolete CAMAC modules. Finally, the remaining relay logic and interfaces to the "Hardwired Control System" will be replaced with a Programmable Logic Controller (PLC).

  2. Multi-channel pre-beamformed data acquisition system for research on advanced ultrasound imaging methods.

    PubMed

    Cheung, Chris C P; Yu, Alfred C H; Salimi, Nazila; Yiu, Billy Y S; Tsang, Ivan K H; Kerby, Benjamin; Azar, Reza Zahiri; Dickie, Kris

    2012-02-01

    The lack of open access to the pre-beamformed data of an ultrasound scanner has limited the research of novel imaging methods to a few privileged laboratories. To address this need, we have developed a pre-beamformed data acquisition (DAQ) system that can collect data over 128 array elements in parallel from the Ultrasonix series of research-purpose ultrasound scanners. Our DAQ system comprises three system-level blocks: 1) a connector board that interfaces with the array probe and the scanner through a probe connector port; 2) a main board that triggers DAQ and controls data transfer to a computer; and 3) four receiver boards that are each responsible for acquiring 32 channels of digitized raw data and storing them to the on-board memory. This system can acquire pre-beamformed data with 12-bit resolution when using a 40-MHz sampling rate. It houses a 16 GB RAM buffer that is sufficient to store 128 channels of pre-beamformed data for 8000 to 25 000 transmit firings, depending on imaging depth, corresponding to nearly a 2-s period in typical imaging setups. Following the acquisition, the data can be transferred through a USB 2.0 link to a computer for offline processing and analysis. To evaluate the feasibility of using the DAQ system for advanced imaging research, two proof-of-concept investigations have been conducted on beamforming and plane-wave B-flow imaging. Results show that adaptive beamforming algorithms such as the minimum variance approach can generate sharper images of a wire cross-section whose diameter is equal to the imaging wavelength (150 μm in our example). Also, plane-wave B-flow imaging can provide more consistent visualization of blood speckle movement given the higher temporal resolution of this imaging approach (2500 fps in our example).
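
    A back-of-the-envelope check, under stated assumptions, that a 16 GB buffer holding 128 channels of 2-byte samples at 40 MHz reproduces the quoted 8000 to 25 000 firings over typical imaging depths:

        # Rough capacity check for the 16 GB acquisition buffer described above.
        # Assumptions not stated verbatim in the abstract: each 12-bit sample
        # occupies 2 bytes, all 128 channels are stored for every firing.
        GiB = 2 ** 30  # binary gigabytes assumed; the quoted "16 GB" may be decimal
        buffer_bytes = 16 * GiB
        channels, bytes_per_sample, fs_hz = 128, 2, 40e6
        sound_speed = 1540.0  # m/s, soft-tissue assumption

        for depth_cm in (5, 15):
            t_rx = 2 * (depth_cm / 100) / sound_speed   # round-trip receive window
            samples = int(t_rx * fs_hz)                 # samples per channel per firing
            bytes_per_firing = channels * bytes_per_sample * samples
            print(f"{depth_cm:2d} cm depth: {samples} samples/ch, "
                  f"{buffer_bytes // bytes_per_firing} firings fit")

    With these assumptions a 5 cm depth gives roughly 26 000 firings and a 15 cm depth roughly 8600, bracketing the range stated in the abstract.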

  3. Digital Electronics for Nuclear Physics Experiments

    NASA Astrophysics Data System (ADS)

    Skulski, Wojtek; Hunter, David; Druszkiewicz, Eryk; Khaitan, Dev Ashish; Yin, Jun; Wolfs, Frank; SkuTek Instrumentation Team; Department of Physics; Astronomy, University of Rochester Team

    2015-10-01

    Future detectors in nuclear physics will use signal sampling as one of the primary techniques of data acquisition. Using the digitized waveforms, the electronics can select events based on pulse shape, total energy, multiplicity, and the hit pattern. The DAQ for the LZ Dark Matter detector, now under development in Rochester, is a good example of the power of digital signal processing. This system, designed around 32-channel, FPGA-based digital signal processors, collects data from more than one thousand channels. The solutions developed for this DAQ can be applied to nuclear physics experiments. Supported by the Department of Energy Office of Science under Grant DE-SC0009543.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of O(100 GB/s) to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.

  5. Data Acquisition System for Silicon Ultra Fast Cameras for Electron and Gamma Sources in Medical Applications (sucima Imager)

    NASA Astrophysics Data System (ADS)

    Czermak, A.; Zalewska, A.; Dulny, B.; Sowicki, B.; Jastrząb, M.; Nowak, L.

    2004-07-01

    The need for real-time monitoring of the hadrontherapy beam intensity and profile, as well as the requirements of fast dosimetry using Monolithic Active Pixel Sensors (MAPS), led the SUCIMA collaboration to design a dedicated Data Acquisition System (DAQ SUCIMA Imager). The DAQ system has been developed on one of the most advanced XILINX Field Programmable Gate Array chips, the Virtex-II. A dedicated multifunctional electronic board for capturing the detector's analogue signals, processing them digitally in parallel, and performing final data compression and transmission through the high-speed USB 2.0 port has been prototyped and tested.

  6. Software engineering techniques and CASE tools in RD13

    NASA Astrophysics Data System (ADS)

    Buono, S.; Gaponenko, I.; Jones, R.; Khodabandeh, A.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Skiadelli, M.; Toppers, A.; Duval, P. Y.; Ferrato, D.; Le Van Suu, A.; Qian, Z.; Rondot, C.; Ambrosini, G.; Fumagalli, G.; Polesello, G.; Aguer, M.; Huet, M.

    1994-12-01

    The RD13 project was approved in April 1991 for the development of a scalable data-taking system suitable for hosting various LHC studies. One of its goals is the exploitation of software engineering techniques, in order to indicate their overall suitability for data acquisition (DAQ), software design and implementation. This paper describes how such techniques have been applied to the development of components of the RD13 DAQ used in test-beam runs at CERN. We describe our experience with the Artifex CASE tool and its associated methodology. The issues raised when code generated by a CASE tool has to be integrated into an existing environment are also discussed.

  7. Commissioning and initial experience with the ALICE on-line

    NASA Astrophysics Data System (ADS)

    Altini, V.; Anticic, T.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Kiss, T.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soós, C.; Vande Vyvre, P.; von Haller, B.; ALICE Collaboration

    2010-04-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to accommodate very different requirements originating from the 18 sub-detectors. This paper will present the large scale tests conducted to assess the standalone DAQ performance, the interfaces with the other online systems, and the extensive commissioning performed in order to be fully prepared for physics data taking. It will review the experience accumulated since May 2007 during the standalone commissioning of the main detectors and the global cosmic runs, and the lessons learned from this exposure on the "battle field". It will also discuss the test protocol followed to integrate and validate each sub-detector with the online systems, and it will conclude with the first results of the LHC injection tests and the startup in September 2008. Several papers of the same conference present in more detail some elements of the ALICE DAQ system.

  8. The LUX experiment - trigger and data acquisition systems

    NASA Astrophysics Data System (ADS)

    Druszkiewicz, Eryk

    2013-04-01

    The Large Underground Xenon (LUX) detector is a two-phase xenon time projection chamber designed to detect interactions of dark matter particles with the xenon nuclei. Signals from the detector PMTs are processed by custom-built analog electronics which provide properly shaped signals for the trigger and data acquisition (DAQ) systems. During calibrations, both systems must be able to handle high rates and have large dynamic ranges; during dark matter searches, maximum sensitivity requires low thresholds. The trigger system uses eight-channel 64-MHz digitizers (DDC-8) connected to a Trigger Builder (TB). The FPGA cores on the digitizers perform real-time pulse identification (discriminating between S1- and S2-like signals) and event localization. The TB uses hit patterns, hit maps, and maximum response detection to make trigger decisions, which are reached within a few microseconds of the occurrence of an event of interest. The DAQ system is comprised of commercial digitizers with customized firmware. Its real-time baseline suppression allows for a maximum event acquisition rate in excess of 1.5 kHz, which results in virtually no deadtime. The performance of the trigger and DAQ systems during the commissioning runs of LUX will be discussed.
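
    A much-simplified sketch of the S1/S2 distinction the digitizer FPGAs make in real time. The 10% threshold, the 200 ns boundary and the width heuristic are illustrative assumptions, not the LUX firmware logic:

        import numpy as np

        def s1_s2_like(waveform, dt_ns=15.625):
            # Crude width-based classification (dt_ns is one 64 MHz DDC-8
            # sample).  S1 prompt scintillation is tens of ns wide, while S2
            # electroluminescence lasts microseconds, so pulse width alone
            # already separates the two classes in this toy picture.
            w = np.asarray(waveform, dtype=float)
            w = w - np.median(w)                          # baseline subtraction
            width_ns = np.count_nonzero(w > 0.1 * w.max()) * dt_ns
            return "S1-like" if width_ns < 200 else "S2-like"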

  9. USB 3.0 readout and time-walk correction method for Timepix3 detector

    NASA Astrophysics Data System (ADS)

    Turecek, D.; Jakubek, J.; Soukup, P.

    2016-12-01

    The hybrid particle counting pixel detectors of the Medipix family are well known. In this contribution we present the new USB 3.0-based AdvaDAQ interface for the Timepix3 detector. The AdvaDAQ interface is designed with maximal emphasis on flexibility. It is the successor of the FitPIX interface developed at IEAP CTU in Prague. Its modular architecture supports all Medipix/Timepix chips and all their different readout modes: Medipix2, Timepix (serial and parallel), Medipix3 and Timepix3. The high bandwidth of USB 3.0 permits readout of 1700 full frames per second with Timepix, or 8-channel data acquisition from Timepix3 at a frequency of 320 MHz. Control and data acquisition are integrated in the multiplatform PiXet software (MS Windows, Mac OS, Linux). In the second part of the publication, a new method for correction of the time-walk effect in Timepix3 is described. Moreover, fully spectroscopic X-ray imaging with the Timepix3 detector operated in ToT (Time-over-Threshold) mode is presented. It is shown that AdvaDAQ's readout speed is sufficient to perform spectroscopic measurements at the full intensity of radiographic setups equipped with nano- or micro-focus X-ray tubes.

  10. Workshop on data acquisition and trigger system simulations for high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A Data Acquisition System Simulation Tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & the Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- a Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: an Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; and A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  11. Imprecise results: Utilizing partial computations in real-time systems

    NASA Technical Reports Server (NTRS)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
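
    A minimal sketch of the milestone approach described above, assuming a hypothetical stage-by-stage refinement interface: a result is recorded after each stage, and the last recorded (imprecise) result is returned when the deadline arrives instead of failing outright.

        import time

        def milestone_run(stages, deadline_s):
            # `stages` is an iterable of zero-argument callables, each
            # returning a progressively better approximation.  On deadline,
            # fall back to the most recent milestone.
            start = time.monotonic()
            last_result = None
            for stage in stages:
                if time.monotonic() - start > deadline_s:
                    break  # deadline reached: return the latest milestone
                last_result = stage()
            return last_result

        # Example: successive partial sums of a slowly converging series for pi.
        def make_stages(chunks=50, terms_per_chunk=10_000):
            total, k = 0.0, 0
            def stage():
                nonlocal total, k
                for _ in range(terms_per_chunk):
                    total += (-1) ** k / (2 * k + 1)
                    k += 1
                return 4 * total
            return [stage] * chunks

        print(milestone_run(make_stages(), deadline_s=0.05))

    The sieve approach would instead mark refinement sections as skippable inside one computation; the milestone version above trades that fine granularity for a simpler checkpointing discipline.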

  12. Automatically Assessing Graph-Based Diagrams

    ERIC Educational Resources Information Center

    Thomas, Pete; Smith, Neil; Waugh, Kevin

    2008-01-01

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been…

  13. Performance of the CMS Event Builder

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. F.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Zejdl, P.

    2017-10-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of O(100 GB/s) to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.

  14. A programmable ISA to USB interface

    NASA Astrophysics Data System (ADS)

    Ribas, R. V.

    2013-05-01

    A programmable device that accesses and controls ISA-standard CAMAC instrumentation and interfaces it to the USB port of a computer is described in this article. With local processing capabilities and event buffering before sending data to the computer, the new acquisition system becomes much more efficient.

  15. Multidetector system for nanosecond tagged neutron technology based on hardware selection of events

    NASA Astrophysics Data System (ADS)

    Karetnikov, M. D.; Korotkov, S. A.; Khasaev, T. O.

    2016-09-01

    In the T(d,n)4He reaction, a neutron is accompanied by an associated alpha particle emitted in the opposite direction. The time and direction of the neutron escape can be determined by measuring the time and coordinates of the alpha particle at the position-sensitive alpha detector. The nanosecond tagged neutron technology (NTNT) based on this principle has great potential for various applications, e.g., remote detection of explosives. The spectrum of gamma rays emitted at the interaction of tagged neutrons with nuclei of chemical elements makes it possible to identify the chemical composition of an irradiated object. For practical realization of NTNT, the time resolution of recording the alpha-gamma coincidences should be close to 1 ns. The total intensity of signals can exceed 1 × 10^6 s^-1 from all gamma detectors and 7 × 10^6 s^-1 from the alpha detector. Processing such a stream of data without losses or distortion of information is one of the challenging problems of NTNT. Several models of analog DAQ system based on hardware selection of events were devised and their characteristics are examined. Comparison with digital DAQ systems demonstrated that the analog DAQ provides better timing parameters, lower power consumption, and a higher maximum rate of useful events.
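
    A software sketch of the alpha-gamma coincidence matching that NTNT relies on. The systems above do this selection in analog hardware; the two-pointer scan and the ±1 ns window here are only illustrative:

        def tag_coincidences(t_alpha_ns, t_gamma_ns, window_ns=1.0):
            # Match alpha-detector hits with gamma-detector hits falling inside
            # a +/- window_ns coincidence window (the abstract quotes a
            # required resolution close to 1 ns).  Inputs are sorted hit times
            # in ns; returns (i_alpha, i_gamma) index pairs.
            pairs, j = [], 0
            for i, ta in enumerate(t_alpha_ns):
                while j < len(t_gamma_ns) and t_gamma_ns[j] < ta - window_ns:
                    j += 1
                k = j
                while k < len(t_gamma_ns) and t_gamma_ns[k] <= ta + window_ns:
                    pairs.append((i, k))
                    k += 1
            return pairs

        print(tag_coincidences([10.0, 50.0], [9.4, 10.8, 49.2, 60.0]))
        # [(0, 0), (0, 1), (1, 2)]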

  16. Timing and Pulse Shape Discrimination Comparison Against Legacy TDC & QDC and the JLab F250 FADC

    NASA Astrophysics Data System (ADS)

    Milkeris-Zellar, Tyler; Sawatzky, Brad

    2017-09-01

    The F250 Flash Analog-to-Digital Converter (FADC) is a relatively new module used in data acquisition systems (DAQ) at Jefferson Lab. The FADC will replace or supplement older DAQ modules such as Time-to-Digital Converters (TDCs) and Charge-to-Digital Converters (QDCs). The TDC has a known timing resolution, and the QDC can integrate a pulse's charge, a feature that can also be used for particle identification between photons and neutrons using pulse shape discrimination (PSD). The focus of this project is developing a test stand to study the timing and PSD performance of the legacy TDC and QDC modules and the new F250 FADC. A cosmic telescope was used to extract timing resolution from the TDC and FADC. Through PSD with the QDC and FADC, using a liquid scintillator, we plan to identify photons and neutrons from an americium-beryllium (AmBe) source. We found that the FADC allows for more flexible data analysis than the QDC. The results indicate that the TDC provides a more accurate measurement of timing resolution than the FADC, giving a clear indication of which module to use when measurement precision matters in a DAQ for a cosmic ray telescope. Supported by the NSF.

  17. A Distributed Data Acquisition System for the Sensor Network of the TAWARA_RTM Project

    NASA Astrophysics Data System (ADS)

    Fontana, Cristiano Lino; Donati, Massimiliano; Cester, Davide; Fanucci, Luca; Iovene, Alessandro; Swiderski, Lukasz; Moretto, Sandra; Moszynski, Marek; Olejnik, Anna; Ruiu, Alessio; Stevanato, Luca; Batsch, Tadeusz; Tintori, Carlo; Lunardon, Marcello

    This paper describes a distributed Data Acquisition System (DAQ) developed for the TAWARA_RTM project (TAp WAter RAdioactivity Real Time Monitor). The aim is to detect the presence of radioactive contaminants in drinking water, in order to prevent deliberate or accidental threats. Employing a set of detectors, it is possible to detect alpha, beta and gamma radiation from emitters dissolved in water. The Sensor Network (SN) consists of several heterogeneous nodes controlled by a centralized server. The SN cyber-security is guaranteed in order to protect it from external intrusions and malicious acts. The nodes were installed in different locations, along the water treatment processes, in the waterworks plant supplying the aqueduct of Warsaw, Poland. Embedded computers control the simpler nodes, and are directly connected to the SN. Local PCs (LPCs) control the more complex nodes, which consist of signal digitizers acquiring data from several detectors. The DAQ in the LPC is split into several processes communicating via sockets on a local sub-network. Each process is dedicated to a very simple task (e.g. data acquisition, data analysis, hydraulics management) in order to have a flexible and fault-tolerant system. The main SN and the local DAQ networks are separated by data routers to ensure cyber-security.

  18. Performance analysis of OOK-based FSO systems in Gamma-Gamma turbulence with imprecise channel models

    NASA Astrophysics Data System (ADS)

    Feng, Jianfeng; Zhao, Xiaohui

    2017-11-01

    For an FSO communication system with an imprecise channel model, we investigate the system performance in terms of outage probability, average BEP and ergodic capacity. The exact FSO links are modeled as a Gamma-Gamma fading channel in consideration of both atmospheric turbulence and pointing errors, and the imprecise channel model is treated as the superposition of the exact channel gain and a Gaussian random variable. After deriving the PDF, CDF and nth moment of the imprecise channel gain, we obtain, based on these statistics, expressions for the outage probability, the average BEP and the ergodic capacity in terms of Meijer's G-functions. Both numerical and analytical results are presented. The simulation results show that the communication performance deteriorates under the imprecise channel model and approaches the exact performance curves as the channel model becomes accurate.
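
    A Monte Carlo sketch of the channel-imprecision model described above: the exact Gamma-Gamma gain plus a zero-mean Gaussian error. The turbulence parameters, error levels and SNR mapping below are assumed values for illustration, not the paper's:

        import numpy as np

        rng = np.random.default_rng(1)

        def gamma_gamma_gain(alpha, beta, n):
            # Gamma-Gamma turbulence: product of two independent unit-mean
            # Gamma variates with shape parameters alpha and beta.
            return rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

        def outage_prob(alpha, beta, snr0_db, thresh_db, sigma_err=0.0, n=200_000):
            h = gamma_gamma_gain(alpha, beta, n)
            # Imprecise estimate = exact gain + Gaussian error (as in the abstract).
            h_est = np.clip(h + rng.normal(0, sigma_err, n), 1e-12, None)
            # Square-law detection assumed: electrical SNR scales with gain squared.
            snr_db = snr0_db + 20 * np.log10(h_est)
            return np.mean(snr_db < thresh_db)

        for s in (0.0, 0.1, 0.3):
            print(f"sigma_err = {s}: P_out = {outage_prob(4.0, 1.9, 20, 10, s):.4f}")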

  19. Developments and applications of DAQ framework DABC v2

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, J.; Kurz, N.; Linev, S.

    2015-12-01

    The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013, Version 2 of DABC was released with several improvements. For monitoring and control, an HTTP web server and a proprietary command channel socket have been provided. Web browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via this HTTP server. Several specific plug-ins, for example interfacing PEXOR/KINPEX optical readout PCIe boards, or HADES trbnet input and hld file output, have been further developed. In 2014, DABC v2 was used for production data taking during the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event builder software and added new features for online monitoring.

  20. A FADC-Based Data Acquisition System for the KASCADE-Grande Experiment

    NASA Astrophysics Data System (ADS)

    Walkowiak, W.; Antoni, T.; Apel, W. D.; Badea, F.; Bekk, K.; Bercuci, A.; Bertaina, M.; Blumer, H.; Bozdog, H.; Brancus, I. M.; Bruggemann, M.; Buchholz, P.; Buttner, C.; Chiavassa, A.; Daumiller, K.; Dipierro, F.; Doll, P.; Engel, R.; Engler, J.; Febler, F.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Haungs, A.; Heck, D.; Horandel, J. R.; Kampert, K.-H.; Klages, H. O.; Kolotaev, Y.; Maier, G.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Muller, M.; Navarra, G.; Obenland, R.; Oehlschlager, J.; Ostapchenko, S.; Over, S.; Petcu, M.; Plewnia, S.; Rebel, H.; Risse, A.; Roth, M.; Schieler, H.; Scholz, J.; Stumpert, M.; Thouw, T.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Valchierotti, S.; Vanburen, J.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zagromski, S.; Zimmermann, D.

    2006-02-01

    We present the design and first test results of a new FADC-based data acquisition (DAQ) system for the Grande array of the KASCADE-Grande experiment. The original KASCADE experiment at the Forschungszentrum Karlsruhe, Germany, has been extended by 37 detector stations of the former EAS-TOP experiment (Grande array) to provide sensitivity to energies of primary cosmic-ray particles of up to 10^18 eV. The new FADC-based DAQ system will improve the quality of the data taken by the Grande array by digitizing the scintillator signals with a 250 MHz sampling rate. Two Grande stations have been equipped with the FADC-based data acquisition system, and first data are shown.

  1. A status report of a FASTBUS at KEK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arai, Y.; Endo, I.; Inoue

    1983-02-01

    Some FASTBUS modules have been produced and successfully tested at KEK. The test system consisted of a single backplane segment equipped with ancillary logic, two masters driven by the MC68000 microprocessor and two slaves which have several read/write registers. A simple FASTBUS-CAMAC interface is also described.

  2. A potent approach for the development of FPGA based DAQ system for HEP experiments

    NASA Astrophysics Data System (ADS)

    Khan, Shuaib Ahmad; Mitra, Jubin; David, Erno; Kiss, Tivadar; Nayak, Tapan Kumar

    2017-10-01

    With ever-increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there is a constant demand for robust Data Acquisition (DAQ) schemes that perform in harsh radiation environments and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping cost in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on commercially available state-of-the-art FPGAs with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics, situated in the harsh radiation environment, and the back-end Data Processing Unit (DPU) placed in a low-radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high-speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of measurements of resource utilisation, critical path delays, signal integrity, eye diagrams and Bit Error Rate (BER) are presented, which are indicators of efficient system performance.

  3. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    NASA Astrophysics Data System (ADS)

    Claus, R.; ATLAS Collaboration

    2016-07-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with an ARM processor embedded in FPGA fabric and high speed I/O resources, together with auxiliary memories, to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  4. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    NASA Astrophysics Data System (ADS)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  5. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    DOE PAGES

    Bartoldus, R.; Claus, R.; Garelli, N.; ...

    2016-01-25

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all of these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  6. The Front-End System For MARE In Milano

    NASA Astrophysics Data System (ADS)

    Arnaboldi, Claudio; Pessina, Gianluigi

    2009-12-01

    The first phase of MARE consists of 72 μ-bolometers, each composed of an AgReO4 crystal read out by Si thermistors. The spread in thermistor characteristics and bolometer thermal couplings leads to different energy conversion gains and optimum operating points of the detectors. Detector biasing levels and voltage gains are completely remote-adjustable by the front-end system developed, the subject of this paper, achieving the same signal range at the input of the DAQ system. The front end consists of a cold buffer stage, a second pseudo-differential stage followed by a gain stage, an antialiasing filter, and a battery-powered detector biasing setup. The DAQ system can be used to set all necessary parameters of the electronics remotely, by writing to a μ-controller located on each board. Fiber optics are used for the serial communication between the DAQ and the front end. To suppress interference noise during normal operation, the clocked devices of the front end are kept in sleep mode, except during the set-up phase of the experiment. An automatic DC detector characterization procedure is used to establish the optimum operating point of every detector in the array. A very low noise level has been achieved: about 3 nV/√Hz at 1 Hz, and 1 nV/√Hz for the white component at high frequencies.

  7. Evaluation of the Technicon Axon analyser.

    PubMed

    Martínez, C; Márquez, M; Cortés, M; Mercé, J; Rodriguez, J; González, F

    1990-01-01

    An evaluation of the Technicon Axon analyser was carried out following the guidelines of the 'Sociedad Española de Química Clínica' and the European Committee for Clinical Laboratory Standards. A photometric study revealed acceptable results at both 340 nm and 404 nm. Inaccuracy and imprecision were lower at 404 nm than at 340 nm, although poor dispersion was found at both wavelengths, even at low absorbances. Drift was negligible, the imprecision of the sample pipette delivery system was greater for small sample volumes, the reagent pipette delivery system imprecision was acceptable, and the sample diluting system study showed good precision and accuracy. Twelve analytes were studied for evaluation of the analyser under routine working conditions. Satisfactory results were obtained for within-run imprecision, while coefficients of variation for between-run imprecision were much greater than expected. Neither specimen-related nor specimen-independent contamination was found in the carry-over study. For all analytes assayed, when comparing patient sample results with those obtained in a Hitachi 737 analyser, acceptable relative inaccuracy was observed.

  8. A direct-to-drive neural data acquisition system.

    PubMed

    Kinney, Justin P; Bernstein, Jacob G; Meyer, Andrew J; Barber, Jessica B; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T; Kopell, Nancy J; Boyden, Edward S

    2015-01-01

    Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future.
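
    The paper's architecture removes the personal computer from the acquisition path entirely; the host-side Python sketch below only illustrates the underlying streaming discipline, in which acquisition never blocks on storage as long as a pool of buffers is drained straight to disk (buffer sizes and counts are arbitrary):

        import queue, threading

        def writer_thread(buf_queue, path):
            # Drain filled buffers straight to a file on disk; acquisition
            # keeps up as long as the drive's sustained write rate does.
            with open(path, "wb") as f:
                while True:
                    buf = buf_queue.get()
                    if buf is None:
                        break  # acquisition finished
                    f.write(buf)

        buffers = queue.Queue(maxsize=64)        # ~64 in-flight DMA-sized blocks
        t = threading.Thread(target=writer_thread, args=(buffers, "raw.dat"))
        t.start()
        for _ in range(256):                     # stand-in for the ADC transfer engine
            buffers.put(bytes(1 << 20))          # one 1 MiB block per "transfer"
        buffers.put(None)
        t.join()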

  9. A direct-to-drive neural data acquisition system

    PubMed Central

    Kinney, Justin P.; Bernstein, Jacob G.; Meyer, Andrew J.; Barber, Jessica B.; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T.; Kopell, Nancy J.; Boyden, Edward S.

    2015-01-01

    Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future. PMID:26388740

  10. Characterization of a 16-Bit Digitizer for Lidar Data Acquisition

    NASA Technical Reports Server (NTRS)

    Williamson, Cynthia K.; DeYoung, Russell J.

    2000-01-01

    A 6-MHz 16-bit waveform digitizer was evaluated for use in atmospheric differential absorption lidar (DIAL) measurements of ozone. The digitizer noise characteristics were evaluated, and actual ozone DIAL atmospheric returns were digitized. This digitizer could replace computer-automated measurement and control (CAMAC)-based commercial digitizers and improve voltage accuracy.

  11. Multinode data acquisition and control system for the 4-element TACTIC telescope array

    NASA Astrophysics Data System (ADS)

    Yadav, K. K.; Chouhan, N.; Kaul, S. R.; Koul, R.

    2002-03-01

    An interrupt driven multinode data acquisition and control system has been developed for the 4-element gamma-ray telescope array, TACTIC. Computer networking technology and the CAMAC bus have been integrated to develop this icon-based, user-friendly, failsafe system. The paper describes the salient features of the system.

  12. A distributed control system for the lower-hybrid current drive system on the Tokamak de Varennes

    NASA Astrophysics Data System (ADS)

    Bagdoo, J.; Guay, J. M.; Chaudron, G.-A.; Decoste, R.; Demers, Y.; Hubbard, A.

    1990-08-01

    An rf current drive system with an output power of 1 MW at 3.7 GHz is under development for the Tokamak de Varennes. The control system is based on an Ethernet local-area network of programmable logic controllers as front end, personal computers as consoles, and CAMAC-based DSP processors. The DSP processors ensure the PID control of the phase and rf power of each klystron, and the fast protection of high-power rf hardware, all within a 40 μs loop. Slower control and protection, event sequencing and the run-time database are provided by the programmable logic controllers, which communicate, via the LAN, with the consoles. The latter run commercial process-control console software. The LAN protocol respects the first four layers of the ISO/OSI 802.3 standard. Synchronization with the tokamak control system is provided by commercially available CAMAC timing modules which trigger shot-related events and reference waveform generators. A detailed description of each subsystem and a performance evaluation of the system will be presented.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKisson, John

    The source code for the Java Data Acquisition suite provides interfaces to the JLab-built USB FPGA ADC across a LAN. Each jDaq node provides ListMode data from JLab-built detector systems and readouts.

  14. Imprecise Probability Methods for Weapons UQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Vander Wiel, Scott Alan

    Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.

  15. An imprecise probability approach for squeal instability analysis based on evidence theory

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-01-01

    An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. All these uncertain parameters are usually involved with imprecise data such as incomplete information and conflicting information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and a surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples, and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to squeal problems that do not involve too many investigated parameters. It can be considered as a potential method for squeal instability analysis, which will act as the first step to reduce squeal noise of uncertain brakes with imprecise information.
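
    A minimal sketch of the belief and plausibility measures used above to evaluate squeal instability, under standard Dempster-Shafer evidence theory; the focal elements and masses in the example are hypothetical:

        def belief_plausibility(focal_masses, query):
            # `focal_masses` maps frozenset focal elements to basic probability
            # masses summing to 1.  Belief sums masses of focal elements fully
            # inside the query set; plausibility sums those that intersect it.
            query = frozenset(query)
            bel = sum(m for A, m in focal_masses.items() if A <= query)
            pl = sum(m for A, m in focal_masses.items() if A & query)
            return bel, pl

        # Toy example (hypothetical masses): 's' = stable, 'u' = unstable.
        m = {frozenset('u'): 0.5, frozenset('s'): 0.2, frozenset('su'): 0.3}
        print(belief_plausibility(m, 'u'))  # (0.5, 0.8): [Bel, Pl] brackets P(unstable)

    The gap between belief and plausibility is exactly the imprecision carried by the conflicting or incomplete evidence; a precise probability model would collapse it to a single number.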

  16. The Argonne CDF Group

    Science.gov Websites

    Calorimeter, Shower Max, Preshower, and Crack Chambers (1979-present). Run II upgrade: front-end electronics (QIE), Preshower electronics and DAQ; support for Level-2 electron and photon triggers (RECES and ISO).

  17. Attention Deficit Hyperactivity Disorder, Aggression, and Illicit Stimulant Use: Is This Self-Medication?

    PubMed

    Odell, Annie P; Reynolds, Grace L; Fisher, Dennis G; Huckabay, Loucine M; Pedersen, William C; Xandre, Pamela; Miočević, Milica

    2017-05-01

    This study compares adults with and without attention deficit hyperactivity disorder (ADHD) on measures of direct and displaced aggression and illicit drug use. Three hundred ninety-six adults were administered the Wender Utah Rating Scale, the Risk Behavior Assessment, the Aggression Questionnaire (AQ), and the Displaced Aggression Questionnaire (DAQ). Those with ADHD were higher on all scales of the AQ and DAQ, were younger at first use of amphetamines, and were more likely to have ever used crack and amphetamines. A Structural Equation Model found a significant interaction: for those with medium and high levels of verbal aggression, ADHD predicts crack and amphetamine use. Follow-up logistic regression models suggest that blacks self-medicate with crack and whites and Hispanics self-medicate with amphetamine when they have ADHD and verbal aggression.

  18. The design and application of virtual ion meter based on LABVIEW 8.0.

    PubMed

    Meng, Hu; Li, Jiangyuan; Tang, Yonghuai

    2009-08-01

    The virtual ion meter is developed based on LABVIEW 8.0 from a homemade signal-conditioning circuit, a data acquisition (DAQ) board, and a computer. This note provides details of the structure of the testing system and a flow chart of the DAQ program. This virtual instrument system is applied to multitask testing such as determining the rate constant of a second-order reaction by pX, pX potentiometric titration, and monitoring an oscillating reaction by potential. The results of its application indicate that this test system not only provides real-time data acquisition, display, and storage, but also supports remote monitoring and control of test points over the internet, automatic analysis and processing of data, and reporting of results according to the testing task; moreover, the accuracy and repeatability of the data processing results are higher than those of manual data processing.

  19. The ATLAS Data Acquisition System: from Run 1 to Run 2

    NASA Astrophysics Data System (ADS)

    Panduro Vazquez, William; ATLAS Collaboration

    2016-04-01

    The experience gained during the first period of very successful data taking of the ATLAS experiment (Run 1) has inspired a number of ideas for improvement of the Data Acquisition (DAQ) system that are being put in place during the so-called Long Shutdown 1 of the Large Hadron Collider (LHC), in 2013/14. We have updated the data-flow architecture, rewritten an important fraction of the software and replaced hardware, profiting from state-of-the-art technologies. This paper summarizes the main changes that have been applied to the ATLAS DAQ system and highlights the expected performance and functional improvements that will be available for LHC Run 2. Particular emphasis will be put on explaining the reasons for our architectural and technical choices, as well as on the simulation and testing approach used to validate this system.

  20. High-Speed Data Acquisition and Digital Signal Processing System for PET Imaging Techniques Applied to Mammography

    NASA Astrophysics Data System (ADS)

    Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.

    2004-06-01

This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma-ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma-ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must achieve event count rates up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with extensive optimization of the signal processing algorithms.

  1. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
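Where the abstract's Method 2 leans on Excel's NORMINV, the same computation is available in any statistics library (scipy's norm.ppf is the direct analogue). Below is a minimal sketch, assuming normalized units (biological SD = 1) and conventional 95% reference limits at ±1.96, of how a given bias/imprecision combination translates into the fraction of results falling outside common reference limits; the combined-SD model and the 1.96 limits are illustrative assumptions, not the paper's exact derivation.

```python
# A minimal sketch of the Method 2 idea: with analytical bias b and
# analytical imprecision s (both normalized to the biological SD of the
# reference population), compute the fraction of results outside the
# conventional 95% reference limits. Assumptions: limits at +/-1.96 and
# analytical SD adding in quadrature; not the paper's exact derivation.
from scipy.stats import norm

def fraction_outside(bias, imprecision):
    total_sd = (1.0 + imprecision**2) ** 0.5   # combined SD (assumption)
    lower = norm.cdf(-1.96, loc=bias, scale=total_sd)
    upper = norm.sf(1.96, loc=bias, scale=total_sd)
    return lower + upper

# Excel's NORMINV(p, mean, sd) corresponds to scipy's norm.ppf:
limit = norm.ppf(0.025, loc=0.0, scale=1.0)    # -1.96, the lower limit

print(f"limit {limit:.2f}; {fraction_outside(0.2, 0.3):.2%} outside")
```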

  2. Personnel

    Science.gov Websites

…486-7162, 70A-2255A, 70A2255 — manager for DAQ Hardware. Gerald Przybylski, Electronics Engineer, 510-486-…, — …subsystem. Thorsten Stezelberger, Electronics Engineer, 510-495-2489, 50A-6141A, 50R5008 — designing and fixing …

  3. The hybrid UNIX controller for real-time data acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huesman, R.H.; Klein, G.J.; Fleming, T.K.

    1996-06-01

The authors describe a hybrid data acquisition architecture integrating a conventional UNIX workstation with CAMAC-based real-time hardware. The system combines the high-level programming simplicity and user interface of a UNIX workstation with the low-level timing control available from conventional real-time hardware. They detail this architecture as it has been implemented for control of the Donner 600-Crystal Positron Tomograph (PET600). Low-level data acquisition is carried out in this system using eight LeCroy 3588 histogrammers, which together, after derandomization, acquire events at rates up to 4 MHz, and two dedicated Motorola 6809 microprocessors, which arbitrate fine timing control during acquisition. A SUN Microsystems UNIX workstation is used for high-level control, allowing an easily extensible user interface in an X-Windows environment, as well as real-time communications to the low-level acquisition units. Communication between the high- and low-level units is carried out via a Jorway 73A SCSI-CAMAC crate controller and a serial interface. For this application, the hybrid configuration separates low-level from high-level control for ease of maintenance and provides a low-cost upgrade from dated high-level control hardware.

  4. DIII-D Neutron Measurement: Status and Plan for Simplification and Upgrade

    NASA Astrophysics Data System (ADS)

    Zhu, Y. B.; Heidbrink, W. W.; Taylor, P. L.; Finkenthal, D.

    2017-10-01

Neutron diagnostics play essential roles on DIII-D. Historically, an 18-channel 2.45 MeV D-D neutron measurement system based on 3He and BF3 proportional counters was inherited from Doublet III, including the associated electronics and CAMAC data acquisition. Three fission chambers and two neutron scintillators were added in the 1980s and mid-1990s, respectively. For tritium burn-up studies, two 14 MeV D-T neutron measurement systems were installed in 2009 and 2010. Operation and maintenance experience has led to a plan to simplify and upgrade these aging systems to provide a more economical and reliable solution for future DIII-D experiments. As part of the simplification, most of the expensive conventional NIM and CAMAC modules will be removed. Advanced technologies such as ultra-fast data acquisition and software-based pulse identification have been successfully tested. Significant data reduction and efficiency improvement will be achieved by real-time digital pulse identification with a field-programmable gate array. The partly renewed system will consist of 4 neutron counters for absolute calibration and 4 relatively calibrated neutron scintillators covering a wide measurement range. Work supported by US DOE under DE-FC02-04ER54698.

  5. A Compact, Flexible, High Channel Count DAQ Built From Off-the-Shelf Components

    DOE PAGES

    Heffner, M.; Riot, V.; Fabris, L.

    2013-06-01

Medium to large channel count detectors are usually faced with a few unattractive options for data acquisition (DAQ). Small to medium sized TPC experiments, for example, can be too small to justify the high expense and long development time of application-specific integrated circuit (ASIC) development. In some cases an experiment can piggy-back on a larger experiment and the associated ASIC development, but this puts the timeline of development out of the hands of the smaller experiment. Another option is to run perhaps thousands of cables to rack-mounted equipment, which is clearly undesirable. The development of commercial high-speed, high-density FPGAs and ADCs, combined with small discrete components and robotic assembly, opens a new option built from off-the-shelf components that scales to tens of thousands of channels and is only slightly larger than ASICs.

  6. High-speed zero-copy data transfer for DAQ applications

    NASA Astrophysics Data System (ADS)

    Pisani, Flavio; Cámpora Pérez, Daniel Hugo; Neufeld, Niko

    2015-05-01

The LHCb Data Acquisition (DAQ) will be upgraded in 2020 to a trigger-free readout. In order to achieve this goal we will need to connect around 500 nodes with a total network capacity of 32 Tb/s. To reach such a high network capacity we are testing zero-copy technology in order to maximize the theoretical link throughput without adding excessive CPU and memory-bandwidth overhead, leaving resources free for data processing and thus reducing the power, space, and money needed for the same result. We developed a modular test application which can be used with different transport layers. For the zero-copy implementation we chose the OFED IBVerbs API because it provides low-level access and high throughput. We present throughput and CPU usage measurements of 40 GbE solutions using Remote Direct Memory Access (RDMA) for several network configurations, to test the scalability of the system.
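A short sketch of the modular test-application idea: the benchmark logic is written against an abstract transport so different layers can be swapped in. Plain TCP stands in here for a pluggable transport (the paper's zero-copy path used the OFED IBVerbs API); the class and function names are illustrative, not LHCb code, and a receiver is assumed to be listening at the given address.

```python
# A minimal sketch of a transport-agnostic throughput benchmark.
# TcpTransport is one pluggable transport; an RDMA/IBVerbs transport
# would expose the same send() interface. Names are illustrative.
import socket, time

class TcpTransport:
    def __init__(self, host, port):
        # assumes a discard-style server is already listening there
        self.sock = socket.create_connection((host, port))
    def send(self, buf):
        self.sock.sendall(buf)

def measure_throughput(transport, payload=b"\0" * (1 << 20), seconds=5.0):
    sent, t0 = 0, time.monotonic()
    while time.monotonic() - t0 < seconds:
        transport.send(payload)      # 1 MiB per call
        sent += len(payload)
    return 8 * sent / (time.monotonic() - t0) / 1e9   # Gb/s
```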

  7. Hardware/Software Data Acquisition System for Real Time Cell Temperature Monitoring in Air-Cooled Polymer Electrolyte Fuel Cells

    PubMed Central

    Bartolucci, Veronica

    2017-01-01

This work presents a hardware/software data acquisition system developed for real-time monitoring of the cell temperatures in Air-Cooled Polymer Electrolyte Fuel Cells (AC-PEFC). These fuel cells are of great interest because they can carry out, in a single operation, the processes of oxidation and refrigeration. This allows reduction of the weight, volume, cost, and complexity of the control system in the AC-PEFC. In this type of PEFC (and in general in any PEFC), reliable monitoring of the temperature along the entire surface of the stack is fundamental, since a suitable temperature and a regular distribution thereof are key to better stack performance and a longer lifetime under the best operating conditions. The developed data acquisition (DAQ) system can perform non-intrusive temperature measurements of each individual cell of an AC-PEFC stack of any power (from watts to kilowatts). The stack power is related to the temperature gradient; i.e., a higher power corresponds to a larger stack surface and consequently a higher temperature difference between the coldest and the hottest points. The developed DAQ system has been implemented on the low-cost open-source Arduino platform and is completed with a modular virtual instrument developed in NI LabVIEW. The temperature-versus-time evolution of all the cells of an AC-PEFC, both together and individually, can be registered and supervised. The paper comprehensively explains the developed DAQ system, together with experimental results that demonstrate its suitability. PMID:28698497

  8. A micromachined silicon parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT)

    NASA Astrophysics Data System (ADS)

    Cho, Young Y.; Chang, Cheng-Chung; Wang, Lihong V.; Zou, Jun

    2015-03-01

To achieve real-time photoacoustic tomography (PAT), massive transducer arrays and data acquisition (DAQ) electronics are needed to receive the PA signals simultaneously, which results in complex and high-cost ultrasound receiver systems. To address this issue, we have developed a new PA data acquisition approach using acoustic time delay. Optical fibers were used as parallel acoustic delay lines (PADLs) to create different time delays in multiple channels of PA signals. This makes the PA signals reach a single-element transducer at different times, so they can be properly received by single-channel DAQ electronics. However, due to their small diameter and fragility, using optical fibers as acoustic delay lines poses a number of challenges in the design, construction, and packaging of the PADLs, thereby limiting their performance and use in real imaging applications. In this paper, we report the development of new silicon PADLs, which are made directly from silicon wafers using advanced micromachining technologies. The silicon PADLs have very low acoustic attenuation and distortion. A linear array of 16 silicon PADLs was assembled into a handheld package with one common input port and one common output port. To demonstrate its real-time PAT capability, the silicon PADL array (with its output port interfaced with a single-element transducer) was used to receive 16 channels of PA signals simultaneously from a tissue-mimicking optical phantom sample. The reconstructed PA image matches the imaging target well. Therefore, the silicon PADL array can provide a 16× reduction in ultrasound DAQ channels for real-time PAT.
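Once the 16 delayed channels arrive at one transducer, recovering them is a matter of slicing the single digitized record at the known per-line delays. A minimal sketch follows, with an assumed sampling rate, listening window, and uniform delay spacing (none of these values are taken from the paper).

```python
# A minimal sketch of delay-line demultiplexing: 16 PA channels arrive
# staggered by known per-line delays, so one digitized record can be
# sliced back into per-channel traces. Parameters are illustrative.
import numpy as np

fs = 50e6                          # sampling rate (Hz), assumed
window = 2e-6                      # listening window per channel (s), assumed
delays = np.arange(16) * window    # each PADL adds one extra window of delay

def demultiplex(record):
    # record must be long enough to cover all 16 delay windows
    n = int(window * fs)
    channels = [record[int(d * fs): int(d * fs) + n] for d in delays]
    return np.stack(channels)      # shape (16, n): one PA trace per line
```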

  9. Software interface for high-speed readout of particle detectors based on the CoaXPress communication standard

    NASA Astrophysics Data System (ADS)

    Hejtmánek, M.; Neue, G.; Voleš, P.

    2015-06-01

This article is devoted to the software design and development of a high-speed readout application used for interfacing particle detectors via the CoaXPress communication standard. CoaXPress provides an asymmetric high-speed serial connection over a single coaxial cable. It uses the widely available 75 Ω BNC standard and can operate in various modes with a data throughput ranging from 1.25 Gbps up to 25 Gbps. Moreover, it supports a low-speed uplink with a fixed bit rate of 20.833 Mbps, which can be used to control and upload configuration data to the particle detector. The CoaXPress interface is an upcoming standard in medical imaging, so its usage promises long-term compatibility and versatility. This work presents an example of how to develop a DAQ system for a pixel detector. For this purpose, a flexible DAQ card was developed using the Xilinx Spartan-6 FPGA. The DAQ card is connected to the FireBird CXP6 Quad framegrabber, which is plugged into the PCI Express bus of a standard PC. Data transmission between the FPGA and the framegrabber card was performed over a standard coaxial cable in a communication mode with a bit rate of 3.125 Gbps. Using the Medipix2 Quad pixel detector, a frame rate of 100 fps was achieved. The front-end application makes use of the FireBird framegrabber software development kit and is suitable for data acquisition as well as control of the detector through the registers implemented in the FPGA.

  10. Hardware/Software Data Acquisition System for Real Time Cell Temperature Monitoring in Air-Cooled Polymer Electrolyte Fuel Cells.

    PubMed

    Segura, Francisca; Bartolucci, Veronica; Andújar, José Manuel

    2017-07-09

This work presents a hardware/software data acquisition system developed for real-time monitoring of the cell temperatures in Air-Cooled Polymer Electrolyte Fuel Cells (AC-PEFC). These fuel cells are of great interest because they can carry out, in a single operation, the processes of oxidation and refrigeration. This allows reduction of the weight, volume, cost, and complexity of the control system in the AC-PEFC. In this type of PEFC (and in general in any PEFC), reliable monitoring of the temperature along the entire surface of the stack is fundamental, since a suitable temperature and a regular distribution thereof are key to better stack performance and a longer lifetime under the best operating conditions. The developed data acquisition (DAQ) system can perform non-intrusive temperature measurements of each individual cell of an AC-PEFC stack of any power (from watts to kilowatts). The stack power is related to the temperature gradient; i.e., a higher power corresponds to a larger stack surface and consequently a higher temperature difference between the coldest and the hottest points. The developed DAQ system has been implemented on the low-cost open-source Arduino platform and is completed with a modular virtual instrument developed in NI LabVIEW. The temperature-versus-time evolution of all the cells of an AC-PEFC, both together and individually, can be registered and supervised. The paper comprehensively explains the developed DAQ system, together with experimental results that demonstrate its suitability.
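For a sense of what the host side of such an Arduino-based DAQ can look like: the paper pairs the Arduino with a LabVIEW virtual instrument, so the pyserial reader below is only an illustrative stand-in, and the port name, baud rate, and one-CSV-line-per-sample format are assumptions.

```python
# A minimal sketch of a host reading per-cell temperatures from an
# Arduino that streams one comma-separated line per sample. The port,
# baud rate, and line format are assumptions, not the paper's protocol.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        temps = [float(v) for v in line.split(",")]   # one value per cell
        hottest, coldest = max(temps), min(temps)
        print(f"gradient: {hottest - coldest:.1f} C across {len(temps)} cells")
```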

  11. The Effect of Photon Statistics and Pulse Shaping on the Performance of the Wiener Filter Crystal Identification Algorithm Applied to LabPET Phoswich Detectors

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, Hoorvash Camilia; Lecomte, Roger; Fontaine, Réjean

    2012-06-01

A fast Wiener filter-based crystal identification (WFCI) algorithm was recently developed to discriminate crystals with close scintillation decay times in phoswich detectors. Despite the promising performance of WFCI, the influence of various physical factors and electrical noise sources in the data acquisition chain (DAQ) on the crystal identification process had not been fully investigated. This paper examines the effect of different noise sources, such as photon statistics, avalanche photodiode (APD) excess multiplication noise, and front-end electronic noise, as well as the influence of different shaping filters, on the performance of the WFCI algorithm. To this end, a PET-like signal simulator based on a model of the DAQ of LabPET, a small-animal APD-based digital PET scanner, was developed. Simulated signals were generated under various noise conditions with CR-RC shapers of order 1, 3, and 5 having different time constants (τ). Applying the WFCI algorithm to these simulated signals showed that the non-stationary Poisson photon statistics is the main contributor to the identification error of the WFCI algorithm. A shaping filter of order 1 with τ = 50 ns yielded the best WFCI performance (error 1%), while a longer shaping time of τ = 100 ns slightly degraded the WFCI performance (error 3%). Filters of higher orders with fast shaping time constants (10-33 ns) also produced good WFCI results (errors of 1.4% to 1.6%). This study shows the advantage of the pulse simulator in evaluating various DAQ conditions and confirms the influence of the detection chain on the WFCI performance.
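For readers unfamiliar with the shapers being compared, an order-n CR-RC filter has the impulse response h(t) ∝ (t/τ)^n · e^(−t/τ). A minimal sketch of applying such a shaper to a two-component scintillation pulse follows; the decay constants and sampling step are illustrative, not the LabPET simulator's parameters.

```python
# A minimal sketch of CR-RC^n pulse shaping applied to a synthetic
# fast+slow scintillation pulse. All time constants are illustrative.
import numpy as np

def cr_rc_impulse(n, tau, t):
    # impulse response of an order-n CR-RC shaper (unnormalized)
    return (t / tau) ** n * np.exp(-t / tau)

dt = 2e-9                                    # 2 ns sampling step
t = np.arange(0.0, 2e-6, dt)
pulse = np.exp(-t / 40e-9) + 0.3 * np.exp(-t / 380e-9)   # two decay modes
shaped = np.convolve(pulse, cr_rc_impulse(1, 50e-9, t))[: t.size] * dt
```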

  12. Evaluation of the Hitachi 717 analyser.

    PubMed

    Biosca, C; Antoja, F; Sierra, C; Douezi, H; Macià, M; Alsina, M J; Galimany, R

    1989-01-01

The selective multitest Boehringer Mannheim Hitachi 717 analyser was evaluated according to the guidelines of the Comisión de Instrumentación de la Sociedad Española de Química Clínica and the European Committee for Clinical Laboratory Standards. The evaluation was performed in two steps: examination of the analytical units and evaluation in routine operation. The evaluation of the analytical units included a photometric study: the inaccuracy is acceptable for 340 and 405 nm; the imprecision ranges from 0.12 to 0.95% at 340 nm and from 0.30 to 0.73% at 405 nm; the linearity shows some dispersion at low absorbance for NADH at 340 nm; the drift is negligible; the imprecision of the pipette delivery system increases when the sample pipette operates with 3 µl; the reagent pipette imprecision is acceptable; and the temperature control system is good. Under routine working conditions, seven determinations were studied: glucose, creatinine, iron, total protein, AST, ALP, and calcium. The within-run imprecision (CV) ranged from 0.6% for total protein and AST to 6.9% for iron. The between-run imprecision ranged from 2.4% for glucose to 9.7% for iron. Some contamination was found in the carry-over study. The relative inaccuracy is good for all the constituents assayed.

  13. Nanomechanical motion measured with an imprecision below that at the standard quantum limit.

    PubMed

    Teufel, J D; Donner, T; Castellanos-Beltran, M A; Harlow, J W; Lehnert, K W

    2009-12-01

Nanomechanical oscillators are at the heart of ultrasensitive detectors of force, mass and motion. As these detectors progress to even better sensitivity, they will encounter measurement limits imposed by the laws of quantum mechanics. If the imprecision of a measurement of the displacement of an oscillator is pushed below a scale set by the standard quantum limit, the measurement must perturb the motion of the oscillator by an amount larger than that scale. Here we show a displacement measurement with an imprecision below the standard quantum limit scale. We achieve this imprecision by measuring the motion of a nanomechanical oscillator with a nearly shot-noise limited microwave interferometer. As the interferometer is naturally operated at cryogenic temperatures, the thermal motion of the oscillator is minimized, yielding an excellent force detector with a sensitivity of 0.51 aN/√Hz. This measurement is a critical step towards observing quantum behaviour in a mechanical object.

  14. Improving the estimation of flavonoid intake for study of health outcomes

    USDA-ARS?s Scientific Manuscript database

    Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing ...

  15. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
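As a concrete reading of the type-N performance measure, suppose each monotone task's error is the fraction of its required processing time it did not receive before being terminated; the schedule's quality is then the mean of these errors over all jobs. A minimal sketch with illustrative numbers:

```python
# A minimal sketch of the type-N imprecise-computation measure: a task
# truncated at its budget produces an imprecise result whose error is
# taken as the fraction of its work left undone; schedule quality is
# the average error over all jobs. Task parameters are illustrative.
def average_error(required, granted):
    # required[i]: time to complete task i; granted[i]: time it received
    errors = [max(0.0, (r - g) / r) for r, g in zip(required, granted)]
    return sum(errors) / len(errors)

print(average_error(required=[10, 8, 6], granted=[10, 5, 6]))  # -> 0.125
```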

  16. Data acquisition in a high-speed rotating frame for New Mexico Institute of Mining and Technology liquid sodium αω dynamo experiment.

    PubMed

    Si, Jiahe; Colgate, Stirling A; Li, Hui; Martinic, Joe; Westpfahl, David

    2013-10-01

The New Mexico Institute of Mining and Technology liquid sodium αω-dynamo experiment models the magnetic field generation in the universe, as discussed in detail by Colgate, Li, and Pariev [Phys. Plasmas 8, 2425 (2001)]. To obtain a quasi-laminar flow with magnetic Reynolds number R(m) ~ 120, the dynamo experiment consists of two co-axial cylinders of 30.5 cm and 61 cm in diameter spinning at up to 70 Hz and 17.5 Hz, respectively. During the experiment, the temperature of the cylinders must be maintained at 110 °C to ensure that the sodium remains fluid. This presents a challenge for implementing a data acquisition (DAQ) system in such a high-temperature, high-speed rotating frame, in which the sensors (including 18 Hall sensors, 5 pressure sensors, and 5 temperature sensors, etc.) are under centrifugal acceleration of up to 376g. In addition, the data must be transmitted to and stored in a computer 100 ft away for safety. The analog signals are digitized and converted to serial signals by an analog-to-digital converter and a field-programmable gate array. Power is provided through brush/ring sets. The serial signals are sent capacitively through ring/shoe sets, then reshaped with cross-talk noise removed. A microcontroller-based interface circuit is used to decode the serial signals and communicate with the data acquisition computer. The DAQ accommodates pressures up to 1000 psi, temperatures above 130 °C, and magnetic fields up to 1000 G. First physics results have been analyzed and published. The next stage of the αω-dynamo experiment includes a DAQ system upgrade.

  17. Feasibility study of the design of Bi Ra Systems, Incorporated model 5301, 5101, and 3222 CAMAC modules for space use

    NASA Technical Reports Server (NTRS)

    Biswell, L.; Mcelderry, R.

    1976-01-01

Cost estimates are determined for the redesigned modules. Consideration is given to the incorporation of NASA-approved components, component screening and documentation, as well as reduced power consumption. Results show that the redesigned modules will function reliably in a space environment of 50 °C and withstand greater than 15 g of random vibration between 40 Hz and 400 Hz.

  18. Inducing Fuzzy Models for Student Classification

    ERIC Educational Resources Information Center

    Nykanen, Ossi

    2006-01-01

We report an approach for implementing predictive fuzzy systems that capture both the imprecision of empirically induced classifications and the imprecision of intuitive linguistic expressions via the extensive use of fuzzy sets. From the end-users' point of view, the approach enables encapsulating the technical details of the…

  19. Imprecision and Uncertainty in the UFO Database Model.

    ERIC Educational Resources Information Center

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…

  20. The Need for Precision

    ERIC Educational Resources Information Center

    Weinberg, David R.

    2012-01-01

    People have become accustomed to the imprecision of language, though imprecise language has a subtle way of misguiding thoughts and actions. In this article, the author argues that the term "teacher" in reference to the Montessori practitioner is a distortion of everything Maria Montessori tried to undo about traditional education. In dealing with…

  1. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART II: EFFECTS OF MISSING DATA AND IMPRECISION

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...

  2. Attending to Precision with Secret Messages

    ERIC Educational Resources Information Center

    Starling, Courtney; Whitacre, Ian

    2016-01-01

    Mathematics is a language that is characterized by words and symbols that have precise definitions. Many opportunities exist for miscommunication in mathematics if the words and symbols are interpreted incorrectly or used in imprecise ways. In fact, it is found that imprecision is a common source of mathematical disagreements and misunderstandings…

  3. Analog Input Data Acquisition Software

    NASA Technical Reports Server (NTRS)

    Arens, Ellen

    2009-01-01

    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  4. A Measurement of Neutral B Mixing using Di-Lepton Events with the BaBar Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunawardane, Naveen

This thesis reports on a measurement of the neutral B meson mixing parameter, Δm_d, at the BABAR experiment and the work carried out on the electromagnetic calorimeter (EMC) data acquisition (DAQ) system and simulation software.

  5. Performance of the NOνA Data Acquisition and Trigger Systems for the full 14 kT Far Detector

    NASA Astrophysics Data System (ADS)

    Norman, A.; Davies, G. S.; Ding, P. F.; Dukes, E. C.; Duyan, H.; Frank, M. J.; R. C. Group; Habig, A.; Henderson, W.; Niner, E.; Mina, R.; Moren, A.; Mualem, L.; Oksuzian, Y.; Rebel, B.; Shanahan, P.; Sheshukov, A.; Tamsett, M.; Tomsen, K.; Vinton, L.; Wang, Z.; Zamorano, B.; Zirnstien, J.

    2015-12-01

The NOvA experiment uses a continuous, free-running, dead-timeless data acquisition system to collect data from the 14 kT far detector. The DAQ system reads out the more than 344,000 detector channels and assembles the information into a raw, unfiltered, high-bandwidth data stream. The NOvA trigger systems operate in parallel to the readout and asynchronously to the primary DAQ readout/event-building chain. The data-driven triggering systems for NOvA are unique in that they examine long contiguous time windows of the high-resolution readout data and enable the detector to be sensitive to a wide range of physics interactions, from those with fast, nanosecond-scale signals up to processes with long delayed coincidences between hits which occur at the tens-of-milliseconds time scale. The trigger system is able to achieve a true 100% live time for the detector, making it sensitive to both beam-spill-related and off-spill physics.
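The core of any such data-driven trigger is a search over a time-ordered hit stream for delayed coincidences. A minimal sketch, with an illustrative window rather than NOvA's trigger parameters:

```python
# A minimal sketch of a delayed-coincidence search: scan a sorted hit
# stream for pairs separated by up to tens of milliseconds. The window
# is illustrative, not a NOvA trigger setting.
import bisect

def delayed_coincidences(hit_times_ns, max_delay_ns=50_000_000):
    hits = sorted(hit_times_ns)
    pairs = []
    for i, t0 in enumerate(hits):
        # all later hits within the coincidence window of t0
        j = bisect.bisect_right(hits, t0 + max_delay_ns)
        pairs.extend((t0, t1) for t1 in hits[i + 1 : j])
    return pairs
```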

  6. PCI/iRMX-Based Front-End Data Acquisition for the HT-7U Experiment

    NASA Astrophysics Data System (ADS)

    Shu, Yantai; Luo, Jiarong; Yan, Jianbing; Zhao, Feng; Zhang, Liang

    2004-06-01

A PCI/iRMX-based front-end system is being designed to serve as the data acquisition (DAQ) subsystem for the HT-7U superconducting tokamak. The diagnostic instruments are connected to four analog-to-digital converter (ADC) boards that are plugged directly into the peripheral component interconnect (PCI) bus of a personal computer (PC) running the iRMX real-time operating system. Each ADC board has eight channels, and the sampling rate of each channel can be up to 125 ksamples per second. The acquired data are transferred directly from the ADC board into the memory of the PC, and then to servers through the network. As a testbed, one PCI/iRMX subsystem has been built and has acquired data from the existing HT-7 tokamak. The DAQ can easily support a wide range of pulse lengths, even matching extremely long-pulse and steady-state operation. This paper describes the system design and performance evaluation in detail.

  7. Fast data transmission in dynamic data acquisition system for plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Byszuk, Adrian; Poźniak, Krzysztof; Zabołotny, Wojciech M.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Cieszewski, Radosław; Juszczyk, Bartłomiej; Kolasiński, Piotr; Zienkiewicz, Paweł; Chernyshova, Maryna; Czarski, Tomasz

    2014-11-01

This paper describes the architecture of a new data acquisition system (DAQ) targeted mainly at plasma diagnostic experiments. The modular architecture, in combination with the selected hardware components, allows for straightforward reconfiguration of the whole system, both offline and online. The main emphasis is on the implementation of the data transmission subsystem. One of the biggest advantages of the described system is its modular architecture, with well-defined boundaries between the main components: the analog frontend (AFE), the digital backplane, and the acquisition/control software. The use of FPGA chips allows for high flexibility in the design of the analog frontends, including the ADC <--> FPGA interface. Data transmission between the backplane boards and the user software is accomplished with the industry-standard PCI Express (PCIe) technology. The PCIe implementation includes both FPGA firmware and a Linux device driver. High flexibility of the PCIe connections is achieved through a configurable PCIe switch. Wherever possible, the described DAQ system makes use of standard off-the-shelf components, including a typical x86 CPU and motherboard (acting as the PCIe controller) and cabling.

  8. RapidIO as a multi-purpose interconnect

    NASA Astrophysics Data System (ADS)

    Baymani, Simaolhoda; Alexopoulos, Konstantinos; Valat, Sébastien

    2017-10-01

RapidIO (http://rapidio.org/) technology is a packet-switched high-performance fabric which has been under active development since 1997. Originally meant to be a front-side bus, it developed into a system-level interconnect which is today used in all 4G/LTE base stations worldwide. RapidIO is often used in embedded systems that require high reliability, low latency, and scalability in a heterogeneous environment - features that are highly interesting for several use cases, such as data analytics and data acquisition (DAQ) networks. We will present the results of evaluating RapidIO in a data analytics environment, from setup to benchmark. Specifically, we will share the experience of running ROOT and Hadoop on top of RapidIO. To demonstrate the multi-purpose characteristics of RapidIO, we will also present the results of investigating RapidIO as a technology for high-speed DAQ networks using a generic multi-protocol event-building emulation tool. In addition, we will present lessons learned from implementing native ports of CERN applications to RapidIO.

  9. Inverse relationship between stigma and quality of life in India: is epilepsy a disabling neurological condition?

    PubMed

    Nehra, Ashima; Singla, Sweta; Bajpai, Swati; Malviya, Shrividhya; Padma, Vasantha; Tripathi, Manjari

    2014-10-01

Stigma associated with epilepsy has negative effects on psychosocial outcomes, affecting quality of life (QOL) and increasing disease burden in persons with epilepsy (PWEs). The aim of our study was to measure the impact of stigma on the QOL of PWEs and the prevalence of neurological disability due to stigmatized epilepsy. A prospective observational study with a sample of 208 PWEs was conducted. The neuropsychological tests used were the Indian Disability Evaluation Assessment Scale (IDEAS) to measure disability, the Dysfunctional Analysis Questionnaire (DAQ) to measure QOL, and the Stigma Scale for Epilepsy (SSE) to assess stigma. Spearman correlations were calculated; stigma (SSE) was significantly associated with QOL (DAQ) (p = 0.019) and with disability due to stigmatized epilepsy (IDEAS) (p = 0.011). The present study supports the global perception of stigma associated with epilepsy, its negative impact on the overall QOL of PWEs, and its contribution to the escalation of the disease burden.

  10. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.

  11. Continued Data Acquisition Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwellenbach, David

This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology, based on work in FY13. Typically, PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred approach but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.
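The offline replay that list mode enables often amounts to re-running a charge-comparison PSD cut over the stored waveforms. A minimal sketch of that classic parameter (tail integral over total integral) follows; the gate positions are illustrative assumptions that list mode would let you re-tune.

```python
# A minimal sketch of charge-comparison PSD on a digitized waveform:
# the PSD parameter is the tail integral divided by the total integral.
# Gate positions are illustrative and would be re-tuned offline.
import numpy as np

def psd_parameter(waveform, pre=10, tail_start=30, gate_end=200):
    # baseline-restore using pre-trigger samples
    w = waveform - np.median(waveform[:pre])
    peak = int(np.argmax(w))
    start = max(peak - pre, 0)
    total = w[start : peak + gate_end].sum()
    tail = w[peak + tail_start : peak + gate_end].sum()
    return tail / total   # larger for neutrons in most organic scintillators
```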

  12. FPGA-based real-time swept-source OCT systems for B-scan live-streaming or volumetric imaging

    NASA Astrophysics Data System (ADS)

    Bandi, Vinzenz; Goette, Josef; Jacomet, Marcel; von Niederhäusern, Tim; Bachmann, Adrian H.; Duelk, Marcus

    2013-03-01

We have developed a Swept-Source Optical Coherence Tomography (SS-OCT) system with high-speed, real-time signal processing on a commercially available Data-Acquisition (DAQ) board with a Field-Programmable Gate Array (FPGA). The SS-OCT system simultaneously acquires OCT and k-clock reference signals at 500 MS/s. From the k-clock signal of each A-scan we extract a remap vector for the k-space linearization of the OCT signal. The linear but oversampled interpolation is followed by a 2048-point FFT, additional auxiliary computations, and a data transfer to a host computer for real-time, live-streaming B-scan or volumetric C-scan OCT visualization. We achieve a 100 kHz A-scan rate by parallelization of our hardware algorithms, which run on standard and affordable, commercially available DAQ boards. Our main development tool for signal analysis as well as for hardware synthesis is MATLAB® with add-on toolboxes and third-party tools.
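The per-A-scan chain described here (remap to uniform k, then FFT) is easy to state in a few lines. A minimal software sketch, with a synthetic remap vector standing in for the one the paper derives from the k-clock:

```python
# A minimal sketch of one A-scan's processing: resample the fringe
# onto a uniform k grid via the remap vector, then take a 2048-point
# FFT to get the depth profile. The remap vector here is synthetic.
import numpy as np

def a_scan(fringe, k_of_sample):
    # k_of_sample: wavenumber at each ADC sample; must be increasing
    # for np.interp to be valid
    k_uniform = np.linspace(k_of_sample[0], k_of_sample[-1], 2048)
    linearized = np.interp(k_uniform, k_of_sample, fringe)
    return np.abs(np.fft.fft(linearized))[:1024]   # one-sided depth profile
```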

  13. Neutron time-of-flight spectroscopy measurement using a waveform digitizer

    NASA Astrophysics Data System (ADS)

    Liu, Long-Xiang; Wang, Hong-Wei; Ma, Yu-Gang; Cao, Xi-Guang; Cai, Xiang-Zhou; Chen, Jin-Gen; Zhang, Gui-Lin; Han, Jian-Long; Zhang, Guo-Qiang; Hu, Ji-Feng; Wang, Xiao-He

    2016-05-01

The photoneutron source (PNS, phase 1), an electron linear accelerator (linac)-based pulsed neutron facility that uses the time-of-flight (TOF) technique, was constructed for the acquisition of nuclear data for the Thorium Molten Salt Reactor (TMSR) at the Shanghai Institute of Applied Physics (SINAP). The neutron detector signal used for the TOF calculation, with information on the pulse arrival time, pulse shape, and pulse height, was recorded using a waveform digitizer (WFD). By using pulse height and pulse-shape discrimination (PSD) analysis to identify neutrons and γ-rays, the neutron TOF spectrum was obtained with a simple electronic design, and a new WFD-based DAQ system was developed and tested in this commissioning experiment. The DAQ system developed is characterized by very high efficiency for millisecond neutron TOF spectroscopy. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (TMSR) (XDA02010100), the National Natural Science Foundation of China (NSFC) (11475245, 11305239), and the Shanghai Key Laboratory of Particle Physics and Cosmology (11DZ2260700).
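Behind any such TOF spectrum is the standard conversion from flight time to neutron energy, E = ½ m_n c² (L / (c t))² in the non-relativistic regime. A minimal sketch, with an assumed flight path:

```python
# A minimal sketch of the TOF-to-energy conversion: non-relativistic
# kinetic energy from flight path L and flight time t. The 6 m flight
# path is an illustrative assumption, not the PNS geometry.
C = 299_792_458.0        # speed of light, m/s
MN_C2 = 939.565e6        # neutron rest energy, eV

def neutron_energy_ev(flight_path_m, tof_s):
    beta = flight_path_m / (C * tof_s)
    return 0.5 * MN_C2 * beta**2

print(neutron_energy_ev(6.0, 433e-9))   # ~1.0e6 eV for 6 m, 433 ns
```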

  14. Development of 3He LPSDs and read-out system for the SANS spectrometer at CPHS

    NASA Astrophysics Data System (ADS)

    Huang, T. C.; Gong, H.; Shao, B. B.; Wang, X. W.; Zhang, Y.; Pang, B. B.

    2014-01-01

The Compact Pulsed Hadron Source (CPHS) is a 13-MeV proton-linac-driven neutron source under construction at Tsinghua University. A time-of-flight (TOF) small-angle neutron scattering (SANS) spectrometer is one of the first instruments to be built. It is designed to use linear position-sensitive detectors (LPSDs) based on 3He gas proportional counters to cover a 1 m × 1 m area. Prototypical LPSDs (Φ = 12 mm, L = 1 m) have been made, and a read-out system has been developed based on charge division. This work describes the in-house fabrication of the prototypical LPSDs and the design of the read-out system, including the front-end electronics and the data acquisition (DAQ) system. Key factors of the front-end electronics are studied and optimized with PSPICE simulation. The DAQ system is designed based on the VME bus architecture and the FPGA Mezzanine Card (FMC) standard, with high flexibility and extendibility. Preliminary experiments were carried out, and the results are presented and discussed.
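Charge-division position decoding, the principle named here, reduces to a charge ratio: the resistive anode splits the avalanche charge between the two tube ends in proportion to where the hit occurred. A minimal sketch:

```python
# A minimal sketch of charge-division position decoding for an LPSD:
# the hit position along the tube follows from the ratio of charges
# collected at the two ends of the resistive anode.
def hit_position(q_left, q_right, length_m=1.0):
    return length_m * q_right / (q_left + q_right)   # 0 at the left end

print(hit_position(3.0, 1.0))   # -> 0.25 m from the left end
```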

  15. A new approach for the solution of fuzzy games

    NASA Astrophysics Data System (ADS)

    Krishnaveni, G.; Ganesan, K.

    2018-04-01

In this paper, a new approach is proposed to solve games with imprecise entries in the payoff matrix. All these imprecise entries are assumed to be trapezoidal fuzzy numbers. The proposed approach also provides a fuzzy optimal solution of the fuzzy-valued game without converting it to a classical version. A numerical example is provided.
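For readers unfamiliar with the payoff representation, a trapezoidal fuzzy number (a, b, c, d) has full membership on [b, c] and linear shoulders down to a and d. A minimal sketch of arithmetic and a common ranking on such payoffs follows; the average-of-vertices ranking is one standard defuzzification, not necessarily the paper's ordering.

```python
# A minimal sketch of trapezoidal fuzzy payoffs (a, b, c, d): addition
# is component-wise; ranking by the average of the four vertices is one
# common defuzzification (an assumption, not the paper's method).
def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def rank(x):
    a, b, c, d = x
    return (a + b + c + d) / 4.0

p1, p2 = (1, 2, 3, 4), (0, 1, 1, 2)
print(add(p1, p2), rank(p1) > rank(p2))   # (1, 3, 4, 6) True
```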

  16. Evaluation and selection of sustainable suppliers in supply chain using new GP-DEA model with imprecise data

    NASA Astrophysics Data System (ADS)

    Jafarzadeh Ghoushchi, Saeid; Dodkanloi Milan, Mehran; Jahangoshai Rezaee, Mustafa

    2017-11-01

Nowadays, with growing knowledge about enterprise sustainability, sustainable supplier selection is considered a vital factor in sustainable supply chain management. On the other hand, in real problems the data are usually imprecise. One method that is helpful for the evaluation and selection of sustainable suppliers and has the ability to use a variety of data types is data envelopment analysis (DEA). In the present article, first, supplier efficiency is measured with respect to all economic, social, and environmental dimensions using DEA applied to imprecise data. Then, to obtain a general evaluation of the suppliers, the DEA model with imprecise data is extended based on goal programming (GP). Integrating the set of criteria turns the new model into a coherent framework for sustainable supplier selection. Moreover, employing this model in multilateral sustainable supplier selection can be an incentive for suppliers to move towards environmental, social, and economic activities. Improving environmental, economic, and social performance means improving supply chain performance. Finally, the application of the proposed approach is presented with a real dataset.

  17. Scheduling real-time, periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Liu, Jane W. S.; Lin, Kwei-Jay; Natarajan, Swaminathan

    1987-01-01

A process is called a monotone process if the accuracy of its intermediate results is non-decreasing as more time is spent to obtain the result. The result produced by a monotone process upon its normal termination is the desired result; the error in this result is zero. External events such as timeouts or crashes may cause the process to terminate prematurely. If the intermediate result produced by the process upon its premature termination is saved and made available, the application may still find the result usable and, hence, acceptable; such a result is said to be an imprecise one. The error in an imprecise result is nonzero. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. This problem differs from the traditional scheduling problems since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result. Consequently, the amounts of processor time assigned to tasks in a valid schedule can be less than the amounts of time required to complete the tasks. A meaningful formulation of this problem taking into account the quality of the overall result is discussed. Three algorithms for scheduling jobs for which the effects of errors in results produced in different periods are not cumulative are described, and their relative merits are evaluated.

  18. ’Do-It-Yourself’ Fallout/Blast Shelter Evaluation

    DTIC Science & Technology

    1984-03-01

[OCR fragment; only partially recoverable] Lawrence Livermore National Laboratory; Unclassified. "…the data from the transient recorder memory through the Computer Automated Measurement and Control (CAMAC) data bus and stores them on an 8-inch …"

  19. 75 FR 47218 - Approval and Promulgation of Implementation Plans and Designation of Areas for Air Quality...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ... Kentucky, through the Kentucky Energy and Environment Cabinet, Division for Air Quality (DAQ), to... (MVEBs) for nitrogen oxides (NO X ) and volatile organic compounds (VOC) for Northern Kentucky. This... control, Incorporation by reference, Nitrogen dioxide, Ozone, Intergovernmental relations, and Volatile...

  20. IceCube

    Science.gov Websites

Linked files: "High pT muons in Cosmic-Ray Air Showers with IceCube" (PDF); "IceCube Performance with Artificial Light Sources: the road to Cascade Analyses + Energy scale calibration for EHE" (PDF); …, 2006 (PDF); Thorsten Stetzelberger, "IceCube DAQ Design & Performance", Nov 2005 (PPT).

  1. Inexpensive Data Acquisition with a Sound Card

    ERIC Educational Resources Information Center

    Hassan, Umer; Pervaiz, Saad; Anwar, Muhammad Sabieh

    2011-01-01

    Signal generators, oscilloscopes, and data acquisition (DAQ) systems are standard components of the modern experimental physics laboratory. The sound card, a built-in component in the ubiquitous personal computer, can be utilized for all three of these tasks and offers an attractive option for labs in developing countries such as…

  2. Evaluation of Bole Straightness in Cottonwood Using Visual Scores

    Treesearch

    D.T. Cooper; R.B. Ferguson

    1981-01-01

    Selection for straightness in natural stands of cottonwood can be effective in improving straightness of open-pollinated progeny. Straightness appears to be highly heritable, but it is subject to imprecise evaluation. This can be largely overcome by repeated application of an imprecise scoring system using a minimum of two views per tree separated by 90 degrees.

  3. Imprecise Frequency Descriptors and the Miscomprehension of Prescription Drug Advertising: Public Policy and Regulatory Implications.

    ERIC Educational Resources Information Center

    Davis, Joel J.

    1999-01-01

    Explores the communicative effectiveness of imprecise frequency descriptors within the context of consumer prescription drug advertising. Conducts two separate studies using a total sample of 147 adults. Finds that consumers are unable to accurately estimate the relative likelihood of side effect occurrence when a list of side effects are preceded…

  4. How Often is "Often"? The Use of Imprecise Terms in Exam Items.

    ERIC Educational Resources Information Center

    Case, Susan M.

    This study was designed to gather data on the meaning of imprecise terms from items written by physicians for their students and by test committees for national licensure and certification examinations. A total of 32 members of test committees who write examination items for various medical specialty examinations participated in the study. Each…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouzes, R.T.; Piilonen, L.; Schreiber, D.

Apple microcomputers have been combined with CAMAC to produce data acquisition systems used for a variety of applications at the Princeton Cyclotron Laboratory. Two specific implementations are discussed: a general one- or two-parameter MCA system and a specific eleven-parameter system. A multiplicity of off-line experiments led to the need for systems with data manipulation and control abilities beyond those of commercially available low-cost systems. A serial communications port allows data transfer to the main computer for more complete analysis.

  6. SPORT-SPEAR Mark III Electronics (Engineering Materials)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

The Drawing List DL 135-678-00-RO and the drawings listed thereon provide the specifications for the construction of the SPORT-SPEAR Mark III Electronics. SPORT stands for Smart Port. This device is an adapter for the SLAC BADC (Brilliant Analog to Digital Converter), providing up to 5 ports. The BADC and SPORT take signals from experimental equipment and direct them to other equipment and microcomputers for processing and storage. These units are housed in standard CAMAC crates.

  7. International Conference on Free Electron Lasers (11th) Conference Digest Held in Naples, Florida on 28 August-1 September 1989

    DTIC Science & Technology

    1989-12-01

[OCR fragment; only partially recoverable] P2.19: Direct Spectral Measurements of the UCSB FEL, G. Ramian and J. Hu. Figure legend: M = mirror, VID = video digitizer, TV = TV camera, MM = 128 kbyte memory, MN = monitor, IF = … to CAMAC …

  8. Fusion of Imperfect Information in the Unified Framework of Random Sets Theory: Application to Target Identification

    DTIC Science & Technology

    2007-11-01

Florea, Anne-Laure Jousselme, Éloi Bossé; DRDC Valcartier TR 2003-319; Defence R&D Canada - Valcartier; November 2007. Context: … [Table-of-contents fragments] 3.3.2 Imprecise information; 3.3.3 Uncertain and imprecise information; Figure 4: …information proposed by Philippe Smets; Figure 5: The process of information modelling.

  9. Evidential Networks for Fault Tree Analysis with Imprecise Knowledge

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng

    2012-06-01

Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting some fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and conditional belief mass tables, are also presented. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance about an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
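To see the flavor of propagating imprecise event probabilities through gates, here is a minimal sketch that carries interval bounds through AND/OR gates under an independence assumption. This is a deliberate simplification of the belief/plausibility propagation an evidential network performs, and the numbers are illustrative.

```python
# A minimal sketch of interval propagation through fault-tree gates,
# assuming independent basic events. An evidential network carries
# richer belief-mass structure; intervals show only the basic idea.
def and_gate(a, b):
    return (a[0] * b[0], a[1] * b[1])

def or_gate(a, b):
    return (1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1]))

pump = (0.01, 0.03)    # expert gives only bounds, not a point value
valve = (0.02, 0.02)   # precisely known
print(or_gate(and_gate(pump, valve), (0.001, 0.005)))
```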

  10. How to Fully Represent Expert Information about Imprecise Properties in a Computer System – Random Sets, Fuzzy Sets, and Beyond: An Overview

    PubMed Central

    Nguyen, Hung T.; Kreinovich, Vladik

    2014-01-01

To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding "true" and "false" values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such a random-set representation. We then show how the random-set representation can be extended to the general ("fuzzy") case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not. PMID:25386045
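The simplest case described here is easy to make concrete: each expert marks the set of objects satisfying an imprecise property, and the induced fuzzy membership of an object is the fraction of experts whose set contains it. A minimal sketch with made-up data:

```python
# A minimal sketch of the random-set view: each expert supplies the set
# of objects satisfying "warm"; the fuzzy membership of an object is
# the fraction of expert sets containing it. Data are illustrative.
experts = [
    {"20C", "25C", "30C"},   # expert 1's extension of "warm"
    {"25C", "30C"},
    {"25C", "30C", "35C"},
]

def membership(obj):
    return sum(obj in s for s in experts) / len(experts)

print(membership("25C"), membership("20C"))   # 1.0 0.333...
```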

  11. Fuzzy adaptive iterative learning coordination control of second-order multi-agent systems with imprecise communication topology structure

    NASA Astrophysics Data System (ADS)

    Chen, Jiaxi; Li, Junmin

    2018-02-01

In this paper, we investigate the perfect consensus problem for second-order linearly parameterised multi-agent systems (MAS) with an imprecise communication topology structure. Takagi-Sugeno (T-S) fuzzy models are presented to describe the imprecise communication topology structure of the leader-following MAS, and a distributed adaptive iterative learning control protocol is proposed with the dynamics of the leader unknown to any of the agents. The proposed protocol guarantees that the follower agents can track the leader perfectly on [0, T]. Under the alignment condition, a sufficient condition for consensus of the closed-loop MAS is given based on Lyapunov stability theory. Finally, a numerical example and a multiple-pendulum system are given to illustrate the effectiveness of the proposed algorithm.

  12. Alternative Fuels Data Center: Utah's Clean Fuels and Vehicle Technology

    Science.gov Websites

…vehicles, infrastructure, and equipment. As an agency of Utah's Department of Environmental Quality, DAQ… The legislation that created the fund typically sets forth other important provisions related to funding…, or a combination of the two. Enabling legislation also gives a state agency or department the…

  13. 75 FR 7474 - Adequacy Status of the North Carolina Portion of the Charlotte-Gastonia-Rock Hill Bi-State Area...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-19

    ... plan (SIP) means that transportation activities will not produce new air quality violations, worsen...- Hour Ozone Sub-Area Motor Vehicle Emission Budgets for Transportation Conformity Purposes AGENCY... the North Carolina Department of Air Quality (NC DAQ), are adequate for transportation conformity...

  14. The DAQ system for the AEḡIS experiment

    NASA Astrophysics Data System (ADS)

    Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.

    2017-10-01

    In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes felt to be a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive the integration of otherwise loosely coupled detector systems. As such, it may be hard to build with off-the-shelf components only. LabVIEW and ROOT are the only two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: working from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment are cast into this common format, can be produced and consumed on both systems, and can be transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has so far been able to cater to all run-data logging and long-term monitoring needs of the AEḡIS experiment.
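
    As a rough illustration of the "multicast over UDP for immediate sharing" step, the sketch below publishes and receives one record on a multicast group; the group address, port, and the JSON stand-in for the LabVIEW/ROOT common format are assumptions, not the AEḡIS wire format:

        # Sketch: sharing a run-data record over UDP multicast on the experiment LAN.
        import json
        import socket
        import struct

        MCAST_GRP, MCAST_PORT = "239.1.1.1", 5007  # hypothetical group/port

        def publish(record: dict) -> None:
            payload = json.dumps(record).encode()
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            sock.sendto(payload, (MCAST_GRP, MCAST_PORT))

        def subscribe() -> dict:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", MCAST_PORT))
            mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            data, _ = sock.recvfrom(65536)
            return json.loads(data)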

  15. Upgrades of DARWIN, a dose and spectrum monitoring system applicable to various types of radiation over wide energy ranges

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira; Shigyo, Nobuhiro; Watanabe, Fusao; Sakurai, Hiroki; Arai, Yoichi

    2011-05-01

    A dose and spectrum monitoring system applicable to neutrons, photons and muons over wide ranges of energy, designated DARWIN, has been developed for radiological protection in high-energy accelerator facilities. DARWIN consists of a phoswich-type scintillation detector, a data-acquisition (DAQ) module for digital waveform analysis, and a personal computer equipped with a graphical-user-interface (GUI) program for controlling the system. The system was recently upgraded by introducing an original DAQ module based on a field-programmable gate array (FPGA), and also by adding a function for estimating neutron and photon spectra based on an unfolding technique, without requiring any specific scientific background on the part of the user. The performance of the upgraded DARWIN was examined in various radiation fields, including an operational field in J-PARC. The experiments revealed that the dose rates and spectra measured by the upgraded DARWIN are quite reasonable, even in radiation fields with peak structures in terms of both spectrum and time variation. These results clearly demonstrate the usefulness of DARWIN for improving radiation safety in high-energy accelerator facilities.
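
    The abstract does not name the unfolding algorithm, so the sketch below uses a standard maximum-likelihood expectation-maximization (Richardson-Lucy-style) iteration as an illustrative stand-in, assuming a known detector response matrix R:

        # Sketch of iterative spectrum unfolding (purely illustrative).
        import numpy as np

        def mlem_unfold(R, counts, iterations=100):
            """R: (n_channels, n_energy_bins) detector response matrix;
            counts: measured pulse-height spectrum; returns fluence estimate."""
            phi = np.ones(R.shape[1])            # flat starting spectrum
            norm = R.sum(axis=0)                 # per-bin detection efficiency
            for _ in range(iterations):
                expected = R @ phi               # forward-folded spectrum
                ratio = counts / np.maximum(expected, 1e-12)
                phi *= (R.T @ ratio) / np.maximum(norm, 1e-12)
            return phi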

  16. DAQ Software Contributions, Absolute Scale Energy Calibration and Background Evaluation for the NOvA Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flumerfelt, Eric Lewis

    2015-08-01

    The NOvA (NuMI Off-axis ν_e Appearance) Experiment is a long-baseline accelerator neutrino experiment currently in its second year of operations. NOvA uses the Neutrinos from the Main Injector (NuMI) beam at Fermilab, and there are two main off-axis detectors: a Near Detector at Fermilab and a Far Detector 810 km away at Ash River, MN. The work reported herein supports the NOvA Experiment through contributions to the development of data acquisition software; through an accurate, absolute-scale energy calibration for electromagnetic showers in NOvA detector elements, crucial to the primary electron neutrino search; and through an initial evaluation of the cosmic background rate in the NOvA Far Detector, which is situated on the surface without significant overburden. Additional support work for the NOvA Experiment is also detailed, including DAQ server administration duties and a study of NOvA's sensitivity to neutrino oscillations into a “sterile” state.

  17. Enhanced performance for differential detection in coherent Brillouin optical time-domain analysis sensors

    NASA Astrophysics Data System (ADS)

    Shao, Liyang; Zhang, Yunpeng; Li, Zonglei; Zhang, Zhiyong; Zou, Xihua; Luo, Bin; Pan, Wei; Yan, Lianshan

    2016-11-01

    Logarithmic detectors (LogDs) have been used in coherent Brillouin optical time-domain analysis (BOTDA) sensors to reduce the effect of phase fluctuation, demodulation complexity, and measurement time. However, because of the inherent properties of LogDs, a DC component at the level of hundreds of millivolts can be generated that prohibits high-gain signal amplification (SA), resulting in unacceptable data acquisition (DAQ) inaccuracies and decoding errors during prototype integration. By generating a reference light at a level similar to the probe light, differential detection can be applied to remove the DC component automatically, using a differential amplifier before the DAQ process. Therefore, high-gain SA can be employed to reduce quantization errors. The signal-to-noise ratio of the weak Brillouin gain signal is improved from ~11.5 to ~21.8 dB. A BOTDA prototype is implemented based on the proposed scheme. The experimental results show that the measurement accuracy of the Brillouin frequency shift (BFS) is improved from ±1.9 to ±0.8 MHz at the end of a 40-km sensing fiber.

  18. A Monitoring System for the LHCb Data Flow

    NASA Astrophysics Data System (ADS)

    Barbosa, João; Gaspar, Clara; Jost, Beat; Frank, Markus; Cardoso, Luis G.

    2017-06-01

    The LHCb experiment uses the LHC accelerator for the collisions that produce the physics data necessary for analysis. The detector measures the results of these collisions at a rate of 40 MHz, and the data are read out by a complex data acquisition (DAQ) system, which is briefly described in this paper. Distributed systems of such dimensions rely on monitoring and control systems that account for the numerous faults that can happen throughout operation. With this in mind, a new system was created to extend the monitoring of the readout system by providing an overview of what is happening in each stage of the DAQ process, starting with the hardware trigger performed right after the detector measurements and ending in the local storage of the experiment. This system, a complement to the current run control (experiment control system), is intended to shorten reaction times when a problem occurs by providing the operators with detailed information on where a fault is occurring. The architecture of the tool and its use by the experiment operators are described in this paper.

  19. Real Time Data Acquisition and Online Signal Processing for Magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Rongen, H.; Hadamschek, V.; Schiek, M.

    2006-06-01

    To establish improved therapies for patients suffering from severe neurological and psychiatric diseases, a demand-controlled, desynchronizing brain pacemaker has been developed with techniques from statistical physics and nonlinear dynamics. To optimize the novel therapeutic approach, brain activity is investigated with a Magnetoencephalography (MEG) system prior to surgery. For this, a real-time data acquisition system for a 148-channel MEG and online signal processing for artifact rejection, filtering, cross-trial phase resetting analysis and three-dimensional (3-D) reconstruction of the cerebral current sources was developed. The PCI bus hardware is based on an FPGA and DSP design, using the benefits of both architectures. The reconstruction and visualization of the 3-D volume data are done by the PC which hosts the real-time DAQ and pre-processing board. The framework of the MEG-online system is introduced, and the architecture of the real-time DAQ board and online reconstruction is described. In addition, we show first results with the MEG-Online system for the investigation of dynamic brain activity in relation to external visual stimulation, based on test data sets.

  20. Validation of an "Intelligent Mouthguard" Single Event Head Impact Dosimeter.

    PubMed

    Bartsch, Adam; Samorezov, Sergey; Benzel, Edward; Miele, Vincent; Brett, Daniel

    2014-11-01

    Dating to Colonel John Paul Stapp, MD, in 1975, scientists have desired to measure live human head impacts with accuracy and precision, but no instrument exists to accurately and precisely quantify single head impact events. Our goal is to develop a practical single-event head impact dosimeter known as the "Intelligent Mouthguard" and quantify its performance on the benchtop, in vitro and in vivo. In the Intelligent Mouthguard hardware, limited gyroscope bandwidth requires an algorithm-based correction as a function of impact duration. After applying the gyroscope correction algorithm, Intelligent Mouthguard results at the time of CG linear acceleration peak correlate to the Reference Hybrid III within our tested range of pulse durations and impact acceleration profiles in American football and boxing in vitro tests. American football: IMG = 1.00·REF − 1.1 g, R² = 0.99; maximum time-of-peak XYZ component imprecision 3.6 g and 370 rad/s²; maximum time-of-peak azimuth and elevation imprecision 4.8° and 2.9°; maximum average XYZ component temporal imprecision 3.3 g and 390 rad/s². Boxing: IMG = 1.00·REF − 0.9 g, R² = 0.99 and R² = 0.98; maximum time-of-peak XYZ component imprecision 3.9 g and 390 rad/s²; maximum time-of-peak azimuth and elevation imprecision 2.9° and 2.1°; average XYZ component temporal imprecision 4.0 g and 440 rad/s². In vivo Intelligent Mouthguard true-positive head impacts from American football players and amateur boxers have temporal characteristics (first harmonic frequency from 35 Hz to 79 Hz) within our tested benchtop (first harmonic frequency < 180 Hz) and in vitro (first harmonic frequency < 100 Hz) ranges. Our conclusions apply only to situations where the rigid-body assumption is valid, sensor-skull coupling is maintained, and the ranges of tested parameters and harmonics fall within the boundaries of harmonics validated in vitro. For these situations, the Intelligent Mouthguard qualifies as a single-event dosimeter in American football and boxing.

  1. Development of an ADC radiation tolerance characterization system for the upgrade of the ATLAS LAr calorimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hong-Bin; Chen, Hu-Cheng; Chen, Kai

    The ATLAS LAr calorimeter will undergo its Phase-I upgrade during the long shutdown (LS2) in 2018, and a new LAr Trigger Digitizer Board (LTDB) will be designed and installed. Several commercial-off-the-shelf (COTS) multi-channel high-speed ADCs have been selected as possible backups of the radiation-tolerant ADC ASICs for the LTDB. Here, to evaluate the radiation tolerance of these backup commercial ADCs, we developed an ADC radiation tolerance characterization system, which includes the ADC boards, a data acquisition (DAQ) board, a signal generator, external power supplies and a host computer. The ADC board is custom designed for different ADCs, with ADC drivers and clock distribution circuits integrated on board. The Xilinx ZC706 FPGA development board is used as the DAQ board. The data from the ADC are routed to the FPGA through the FMC (FPGA Mezzanine Card) connector, de-serialized and monitored by the FPGA, and then transmitted to the host computer through Gigabit Ethernet. A software program has been developed in Python, and all commands are sent to the DAQ board through Gigabit Ethernet by this program. Two ADC boards have been designed, for the ADS52J90 from Texas Instruments and the AD9249 from Analog Devices, respectively. TID tests for both ADCs have been performed at BNL, and an SEE test for the ADS52J90 has been performed at Massachusetts General Hospital (MGH). Test results have been analyzed and presented. The test results demonstrate that this test system is very versatile and works well for the radiation tolerance characterization of commercial multi-channel high-speed ADCs for the upgrade of the ATLAS LAr calorimeter. It is applicable to other collider physics experiments where radiation tolerance is required as well.

  2. Development of an ADC radiation tolerance characterization system for the upgrade of the ATLAS LAr calorimeter

    DOE PAGES

    Liu, Hong-Bin; Chen, Hu-Cheng; Chen, Kai; ...

    2017-02-01

    The ATLAS LAr calorimeter will undergo its Phase-I upgrade during the long shutdown (LS2) in 2018, and a new LAr Trigger Digitizer Board (LTDB) will be designed and installed. Several commercial-off-the-shelf (COTS) multi-channel high-speed ADCs have been selected as possible backups of the radiation-tolerant ADC ASICs for the LTDB. Here, to evaluate the radiation tolerance of these backup commercial ADCs, we developed an ADC radiation tolerance characterization system, which includes the ADC boards, a data acquisition (DAQ) board, a signal generator, external power supplies and a host computer. The ADC board is custom designed for different ADCs, with ADC drivers and clock distribution circuits integrated on board. The Xilinx ZC706 FPGA development board is used as the DAQ board. The data from the ADC are routed to the FPGA through the FMC (FPGA Mezzanine Card) connector, de-serialized and monitored by the FPGA, and then transmitted to the host computer through Gigabit Ethernet. A software program has been developed in Python, and all commands are sent to the DAQ board through Gigabit Ethernet by this program. Two ADC boards have been designed, for the ADS52J90 from Texas Instruments and the AD9249 from Analog Devices, respectively. TID tests for both ADCs have been performed at BNL, and an SEE test for the ADS52J90 has been performed at Massachusetts General Hospital (MGH). Test results have been analyzed and presented. The test results demonstrate that this test system is very versatile and works well for the radiation tolerance characterization of commercial multi-channel high-speed ADCs for the upgrade of the ATLAS LAr calorimeter. It is applicable to other collider physics experiments where radiation tolerance is required as well.

  3. Development of an ADC radiation tolerance characterization system for the upgrade of the ATLAS LAr calorimeter

    NASA Astrophysics Data System (ADS)

    Liu, Hong-Bin; Chen, Hu-Cheng; Chen, Kai; Kierstead, James; Lanni, Francesco; Takai, Helio; Jin, Ge

    2017-02-01

    The ATLAS LAr calorimeter will undergo its Phase-I upgrade during the long shutdown (LS2) in 2018, and a new LAr Trigger Digitizer Board (LTDB) will be designed and installed. Several commercial-off-the-shelf (COTS) multi-channel high-speed ADCs have been selected as possible backups of the radiation-tolerant ADC ASICs for the LTDB. To evaluate the radiation tolerance of these backup commercial ADCs, we developed an ADC radiation tolerance characterization system, which includes the ADC boards, a data acquisition (DAQ) board, a signal generator, external power supplies and a host computer. The ADC board is custom designed for different ADCs, with ADC drivers and clock distribution circuits integrated on board. The Xilinx ZC706 FPGA development board is used as the DAQ board. The data from the ADC are routed to the FPGA through the FMC (FPGA Mezzanine Card) connector, de-serialized and monitored by the FPGA, and then transmitted to the host computer through Gigabit Ethernet. A software program has been developed in Python, and all commands are sent to the DAQ board through Gigabit Ethernet by this program. Two ADC boards have been designed, for the ADS52J90 from Texas Instruments and the AD9249 from Analog Devices, respectively. TID tests for both ADCs have been performed at BNL, and an SEE test for the ADS52J90 has been performed at Massachusetts General Hospital (MGH). Test results have been analyzed and presented. The test results demonstrate that this test system is very versatile and works well for the radiation tolerance characterization of commercial multi-channel high-speed ADCs for the upgrade of the ATLAS LAr calorimeter. It is applicable to other collider physics experiments where radiation tolerance is required as well. Supported by the U.S. Department of Energy (DE-SC001270)
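
    As a rough sketch of the host-side control path described in the three records above (a Python program sending commands to the DAQ board over Gigabit Ethernet), the snippet below opens a TCP connection and exchanges one command; the board address, port and ASCII command format are hypothetical, since the firmware protocol is not given:

        # Sketch: host-to-DAQ-board command transport over Ethernet.
        import socket

        DAQ_ADDR = ("192.168.1.10", 7)  # hypothetical board IP and port

        def send_command(cmd: str) -> bytes:
            with socket.create_connection(DAQ_ADDR, timeout=2.0) as sock:
                sock.sendall(cmd.encode() + b"\n")
                return sock.recv(4096)          # board acknowledgement

        # e.g. send_command("READ_ADC 0") might return one frame of samples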

  4. New Software Architecture Options for the TCL Data Acquisition System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valenton, Emmanuel

    2014-09-01

    The Turbulent Combustion Laboratory (TCL) conducts research on combustion in turbulent flow environments. To conduct this research, the TCL utilizes several pulse lasers, a traversable wind tunnel, flow controllers, scientific-grade CCD cameras, and numerous other components. Responsible for managing these different data-acquiring instruments and data-processing components is the Data Acquisition (DAQ) software. However, the current system is constrained to running through VXI hardware (an instrument-computer interface) that is several years old, requiring the use of an outdated version of the visual programming language LabVIEW. A new acquisition system is being programmed which will borrow heavily from either a programming model known as the Current Value Table (CVT) System or another model known as the Server-Client System. The CVT System model is, in essence, a giant spreadsheet from which data or commands may be read and to which they may be written, and the Server-Client System is based on network connections between a server and a client, much like the server-client model of the Internet. Currently, the bare elements of a CVT DAQ software have been implemented, consisting of client programs in addition to a server program that the CVT runs on. This system is being rigorously tested to evaluate the merits of pursuing the CVT System model and to uncover any potential flaws which may affect further implementation. If the CVT System is chosen, which is likely, then future work will consist of building up the system until enough client programs have been created to run the individual components of the lab. The advantages of such a system will be flexibility, portability, and polymorphism. Additionally, the new DAQ software will allow the lab to replace the VXI with a newer instrument interface (the PXI) and take advantage of the capabilities of current and future versions of LabVIEW.
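
    A minimal sketch of the Current Value Table idea described above, compressed to a single process: one thread-safe table that clients write current values or commands into and read back from. In the lab's design this table would live on the server program with clients connecting over the network; the key names and values here are illustrative:

        # Sketch: an in-process Current Value Table (CVT).
        import threading

        class CurrentValueTable:
            def __init__(self):
                self._table = {}
                self._lock = threading.Lock()

            def write(self, key: str, value) -> None:
                with self._lock:
                    self._table[key] = value

            def read(self, key: str, default=None):
                with self._lock:
                    return self._table.get(key, default)

        cvt = CurrentValueTable()
        cvt.write("wind_tunnel/speed_setpoint", 12.5)   # a client posts a command
        print(cvt.read("wind_tunnel/speed_setpoint"))   # another client polls it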

  5. High-Throughput and Low-Latency Network Communication with NetIO

    NASA Astrophysics Data System (ADS)

    Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer

    2017-10-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but this requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to a direct use of native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO is presented, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described in this paper, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO with throughput and latency measurements. The performance is compared against the state-of-the-art ZeroMQ message service. Performance measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
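
    For context, the sketch below shows the ZeroMQ-style, high-level messaging that NetIO is benchmarked against (NetIO's own C++ API is not reproduced here): a PUSH/PULL pipe over TCP using pyzmq, with one DAQ fragment per message. The port and message contents are illustrative:

        # Sketch: a ZeroMQ PUSH/PULL data pipe.
        import zmq

        ctx = zmq.Context()

        def producer():
            sock = ctx.socket(zmq.PUSH)
            sock.bind("tcp://*:5555")
            for i in range(1000):
                sock.send(b"event-%d" % i)      # one DAQ fragment per message

        def consumer():
            sock = ctx.socket(zmq.PULL)
            sock.connect("tcp://localhost:5555")
            for _ in range(1000):
                msg = sock.recv()               # blocks until a fragment arrives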

  6. National surveys on internal quality control for blood gas analysis and related electrolytes in clinical laboratories of China.

    PubMed

    Duan, Min; Wang, Wei; Zhao, Haijian; Zhang, Chuanbao; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo

    2018-05-01

    Internal quality control (IQC) is essential for precision evaluation and continuous quality improvement. This study aims to investigate the IQC status of blood gas analysis (BGA) in clinical laboratories of China from 2014 to 2017. IQC information on BGA (including pH, pCO2, pO2, Na+, K+, Ca2+, Cl-) was submitted by external quality assessment (EQA) participant laboratories and collected through the Clinet-EQA reporting system each March from 2014 to 2017. First, current CVs were compared among different years and measurement systems. Then, the percentages of laboratories meeting five allowable imprecision specifications for each analyte were calculated. Finally, laboratories were divided into different groups based on control rules and frequency to compare their variation trends. The current CVs of BGA decreased significantly from 2014 to 2017. pH and pCO2 achieved the highest pass rates when compared with the minimum imprecision specification, whereas pO2, Na+, K+, Ca2+ and Cl- achieved the highest pass rates when the 1/3 TEa imprecision specification was applied. The pass rates of pH, pO2, Na+, K+, Ca2+ and Cl- increased significantly during the 4 years. The comparison of current CVs among different measurement systems showed that the precision performance of different analytes among measurement systems had no regular distribution from 2014 to 2017. The analysis of IQC practice indicated great progress and improvement across the years. The imprecision performance of BGA has improved from 2014 to 2017, but the status of imprecision performance in China remains unsatisfactory. Therefore, further investigation and continuous improvement measures should be taken.

  7. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    PubMed

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outliers (movSO) of patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and the average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. In conclusion, the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased analytical imprecision adds only marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
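
    A minimal sketch of the movSD monitor described above, assuming a window size and control limit chosen by the laboratory (both illustrative here): the standard deviation of the most recent N patient results is recomputed as each result arrives and flagged when it exceeds the limit:

        # Sketch: moving standard deviation (movSD) over a result stream.
        from collections import deque
        from statistics import stdev

        def monitor(results, window=50, limit=2.0):
            """Yield (index, movSD, flagged) for a stream of patient results."""
            recent = deque(maxlen=window)
            for i, value in enumerate(results):
                recent.append(value)
                if len(recent) == window:
                    s = stdev(recent)
                    yield i, s, s > limit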

  8. Formal analysis of imprecise system requirements with Event-B.

    PubMed

    Le, Hong Anh; Nakajima, Shin; Truong, Ninh Thuan

    2016-01-01

    Formal analysis of the functional properties of system requirements needs precise descriptions. However, stakeholders sometimes describe the system with ambiguous, vague or fuzzy terms, hence formal frameworks for modeling and verifying such requirements are desirable. Fuzzy If-Then rules have been used for imprecise requirements representation, but verifying their functional properties still needs new methods. In this paper, we propose a refinement-based modeling approach for the specification and verification of such requirements. First, we introduce a representation of imprecise requirements in set theory. Then we make use of Event-B refinement, providing a set of translation rules from Fuzzy If-Then rules to Event-B notations. After that, we show how to verify both safety and eventuality properties with RODIN/Event-B. Finally, we illustrate the proposed method on the example of a crane controller.

  9. Increase in dance imprecision with decreasing foraging distance in the honey bee Apis mellifera L. is partly explained by physical constraints.

    PubMed

    Beekman, Madeleine; Doyen, Laurent; Oldroyd, Benjamin P

    2005-12-01

    Honey bee foragers communicate the direction and distance of both food sources and new nest sites to nest mates by means of a symbolic dance language. Interestingly, the precision with which dancers transfer directional information is negatively correlated with the distance to the advertised food source. The 'tuned-error' hypothesis suggests that colonies benefit from this imprecision as it spreads recruits out over a patch of constant size irrespective of the distance to the advertised site. An alternative to the tuned-error hypothesis is that dancers are physically incapable of dancing with great precision for nearby sources. Here we revisit the tuned-error hypothesis by studying the change in dance precision with increasing foraging distance over relatively short distances while controlling for environmental influences. We show that bees indeed increase their dance precision as foraging distance increases. However, we also show that dances performed by swarm-scouts for a nearby (30 m) nest site, where there could be no benefit to imprecision, carry either no directional information or only limited directional information. This result suggests that imprecision in dance communication is caused primarily by physical constraints on the ability of dancers to turn around quickly enough when the advertised site is nearby.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young Jin

    The PowerPoint presentation focused on research goals, specific information about the atomic magnetometer, response and resolution factors of the SERF magnetometer, FC+AM systems, tests of field transfer and resolution on FC, gradient cancellation, testing of AM performance, ideas for a multi-channel AM, including preliminary sensitivity testing, and a description of a 6 channel DAQ system. A few ideas for future work ended the presentation.

  11. 78 FR 50079 - Information Collection Activities: Safety and Environmental Management Systems (SEMS); Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-16

    ... DEPARTMENT OF THE INTERIOR Bureau of Safety and Environmental Enforcement [Docket ID BSEE-2013-0005; OMB Control Number 1014-0017: 134E1700D2 EEEE500000 ET1SF0000.DAQ000] Information Collection Activities: Safety and Environmental Management Systems (SEMS); Proposed Collection; Comment Request Correction In notice document 2013-19416 appearing o...

  12. [Research on the Application of Fuzzy Logic to Systems Analysis and Control

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Research conducted with the support of NASA Grant NCC2-275 has been focused in the main on the development of fuzzy logic and soft computing methodologies and their applications to systems analysis and control, with emphasis on problem areas which are of relevance to NASA's missions. One of the principal results of our research has been the development of a new methodology called Computing with Words (CW). Basically, in CW words drawn from a natural language are employed in place of numbers for computing and reasoning. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW.

  13. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability rule for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies to every quantum state and is consistent with imprecise probabilities. This contrasts with the theorems of Bell and Kochen-Specker, which exclude joint probabilities for more than two noncommuting observables in Hilbert spaces of dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.

  14. The Role of Type and Source of Uncertainty on the Processing of Climate Models Projections.

    PubMed

    Benjamin, Daniel M; Budescu, David V

    2018-01-01

    Scientists agree that the climate is changing due to human activities, but there is less agreement about the specific consequences and their timeline. Disagreement among climate projections is attributable to the complexity of climate models that differ in their structure, parameters, initial conditions, etc. We examine how different sources of uncertainty affect people's interpretation of, and reaction to, information about climate change by presenting participants with forecasts from multiple experts. Participants viewed three types of sets of sea-level rise projections: (1) precise, but conflicting; (2) imprecise, but agreeing; and (3) hybrid, both conflicting and imprecise. They estimated the most likely sea-level rise, provided a range of possible values and rated the sets on several features - ambiguity, credibility, completeness, etc. In Study 1, everyone saw the same hybrid set. We found that participants were sensitive to uncertainty between sources, but not to uncertainty about which model was used. The impacts of conflict and imprecision were combined for estimation tasks and compromised for feature ratings. Estimates were closer to the experts' original projections, and sets were rated more favorably, under imprecision. Estimates were least consistent with (narrower than) the experts in the hybrid condition, but participants rated the conflicting set least favorably. In Study 2, we investigated the hybrid case in more detail by creating several distinct interval sets that combine conflict and imprecision. Two factors drive perceptual differences: overlap - the structure of the forecast set (whether intersecting, nested, tangent, or disjoint) - and asymmetry - the balance of the set. Estimates were primarily driven by asymmetry, and preferences were primarily driven by overlap. Asymmetric sets were least consistent with the experts: estimated ranges were narrower, and estimates of the most likely value were shifted further below the set mean. Intersecting and nested sets were rated similarly to imprecision, and ratings of disjoint and tangent sets were rated like conflict. Our goal was to determine which underlying factors of information sets drive perceptions of uncertainty in consistent, predictable ways. The two studies lead us to conclude that perceptions of agreement require intersection and balance, and overly precise forecasts lead to greater perceptions of disagreement and a greater likelihood of the public discrediting and misinterpreting information.

  15. The Role of Type and Source of Uncertainty on the Processing of Climate Models Projections

    PubMed Central

    Benjamin, Daniel M.; Budescu, David V.

    2018-01-01

    Scientists agree that the climate is changing due to human activities, but there is less agreement about the specific consequences and their timeline. Disagreement among climate projections is attributable to the complexity of climate models that differ in their structure, parameters, initial conditions, etc. We examine how different sources of uncertainty affect people’s interpretation of, and reaction to, information about climate change by presenting participants forecasts from multiple experts. Participants viewed three types of sets of sea-level rise projections: (1) precise, but conflicting; (2) imprecise, but agreeing, and (3) hybrid that were both conflicting and imprecise. They estimated the most likely sea-level rise, provided a range of possible values and rated the sets on several features – ambiguity, credibility, completeness, etc. In Study 1, everyone saw the same hybrid set. We found that participants were sensitive to uncertainty between sources, but not to uncertainty about which model was used. The impacts of conflict and imprecision were combined for estimation tasks and compromised for feature ratings. Estimates were closer to the experts’ original projections, and sets were rated more favorably under imprecision. Estimates were least consistent with (narrower than) the experts in the hybrid condition, but participants rated the conflicting set least favorably. In Study 2, we investigated the hybrid case in more detail by creating several distinct interval sets that combine conflict and imprecision. Two factors drive perceptual differences: overlap – the structure of the forecast set (whether intersecting, nested, tangent, or disjoint) – and asymmetry – the balance of the set. Estimates were primarily driven by asymmetry, and preferences were primarily driven by overlap. Asymmetric sets were least consistent with the experts: estimated ranges were narrower, and estimates of the most likely value were shifted further below the set mean. Intersecting and nested sets were rated similarly to imprecision, and ratings of disjoint and tangent sets were rated like conflict. Our goal was to determine which underlying factors of information sets drive perceptions of uncertainty in consistent, predictable ways. The two studies lead us to conclude that perceptions of agreement require intersection and balance, and overly precise forecasts lead to greater perceptions of disagreement and a greater likelihood of the public discrediting and misinterpreting information. PMID:29636717

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouaichaoui, Youcef; Berrahal, Abderezak; Halbaoui, Khaled

    This paper describes the design of a data acquisition (DAQ) system connected to a PC and the development of a feedback control system that maintains the coolant temperature of the process at a desired set point, using a digital controller implemented in a graphical programming language. The paper provides details about the data acquisition unit, shows the implementation of the controller, and presents test results. (authors)
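
    A minimal sketch of such a set-point loop, assuming hypothetical helpers read_temperature() and set_heater_power() that wrap the DAQ input/output; a simple proportional-integral law stands in for the digital controller, whose actual algorithm the abstract does not specify:

        # Sketch: coolant-temperature set-point loop with a PI stand-in.
        import time

        def control_loop(read_temperature, set_heater_power,
                         setpoint=60.0, kp=5.0, ki=0.1, period_s=1.0):
            integral = 0.0
            while True:
                error = setpoint - read_temperature()      # deg C
                integral += error * period_s
                power = max(0.0, min(100.0, kp * error + ki * integral))
                set_heater_power(power)                    # percent output
                time.sleep(period_s)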

  17. Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition.

    PubMed

    Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti

    2017-05-01

    Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of controller software, based on a technique called the queued state machine, for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller, running in a LabVIEW environment, interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with real-time MEG analysis software via transmission control protocol/internet protocol (TCP/IP) to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
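
    A minimal sketch of the queued state machine technique named above, written in Python rather than LabVIEW: producers (GUI, network handlers) enqueue commands, and a single consumer loop serializes all hardware actions. The states and commands are illustrative:

        # Sketch: a queued state machine serializing DAQ commands.
        import queue

        commands = queue.Queue()

        def state_machine():
            state = "IDLE"
            while state != "SHUTDOWN":
                cmd = commands.get()            # blocks until next command
                if state == "IDLE" and cmd == "START":
                    state = "ACQUIRING"         # begin streaming from hardware
                elif state == "ACQUIRING" and cmd == "STOP":
                    state = "IDLE"
                elif cmd == "QUIT":
                    state = "SHUTDOWN"

        # commands.put("START"); commands.put("STOP"); commands.put("QUIT")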

  18. A DAQ-Device-Based Continuous Wave Near-Infrared Spectroscopy System for Measuring Human Functional Brain Activity

    PubMed Central

    Li, Xiaoli; Liu, Xiaomin

    2014-01-01

    In the last two decades, functional near-infrared spectroscopy (fNIRS) has become more and more popular as a neuroimaging technique. An fNIRS instrument can be used to measure the local hemodynamic response, which indirectly reflects functional neural activity in the human brain. In this study, an easily implemented way to establish a DAQ-device-based fNIRS system was proposed. The basic instrumentation components (light-source driving, signal conditioning, sensors, and optical fiber) of the fNIRS system are described. The digital in-phase and quadrature demodulation method was applied in LabVIEW software to distinguish light sources from different emitters. The effectiveness of the custom-made system was verified by simultaneous measurement with a commercial instrument, the ETG-4000, during a Valsalva maneuver experiment. The light intensity data acquired from the two systems were highly correlated for the lower wavelength (Pearson's correlation coefficient r = 0.92, P < 0.01) and the higher wavelength (r = 0.84, P < 0.01). Further, a mental arithmetic experiment was implemented to detect neural activation in the prefrontal cortex. For the 9 participants, significant cerebral activation was detected in 6 subjects (P < 0.05) for oxyhemoglobin and in 8 subjects (P < 0.01) for deoxyhemoglobin. PMID:25180044
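
    A minimal sketch of the digital in-phase/quadrature demodulation used above to separate emitters: the detected signal is mixed with a cosine and a sine at each source's modulation frequency and averaged, recovering that source's amplitude. The sample rate and frequencies are illustrative:

        # Sketch: digital IQ demodulation of one modulated source.
        import numpy as np

        def iq_amplitude(signal, fs, f_mod):
            """Amplitude of the component of `signal` modulated at f_mod (Hz)."""
            t = np.arange(signal.size) / fs
            i = np.mean(signal * np.cos(2 * np.pi * f_mod * t))  # in-phase
            q = np.mean(signal * np.sin(2 * np.pi * f_mod * t))  # quadrature
            return 2 * np.hypot(i, q)   # factor 2 restores the peak amplitude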

  19. Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti

    2017-05-01

    Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of controller software, based on a technique called the queued state machine, for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller, running in a LabVIEW environment, interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with real-time MEG analysis software via transmission control protocol/internet protocol (TCP/IP) to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.

  20. Data Acquisition Backbone Core DABC release v1.0

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, J.; Essel, H. G.; Kurz, N.; Linev, S.

    2010-04-01

    The Data Acquisition Backbone Core (DABC) is a general-purpose software framework designed for the implementation of a wide range of data acquisition systems, from various small detector test beds to high-performance systems. DABC consists of a compact data-flow kernel and a number of plug-ins for various functional components like data inputs, device drivers, user functional modules and applications. DABC provides configurable components for implementing event building over fast networks like InfiniBand or Gigabit Ethernet. A generic Java GUI provides dynamic control and visualization of the control parameters and commands supplied by DIM servers. A first set of application plug-ins has been implemented to use DABC as the event builder for the front-end components of the GSI standard DAQ system MBS (Multi Branch System). Another application covers the connection of DAQ readout chains from detector front-end boards (N-XYTER) linked to read-out controller boards (ROC) over UDP into DABC for event building, archiving and data serving. This was applied for data taking in the September 2008 test beamtime for the CBM experiment at GSI. DABC version 1.0 is released and available from the website.

  1. Pedemis: a portable electromagnetic induction sensor with integrated positioning

    NASA Astrophysics Data System (ADS)

    Barrowes, Benjamin E.; Shubitidze, Fridon; Grzegorczyk, Tomasz M.; Fernández, Pablo; O'Neill, Kevin

    2012-06-01

    Pedemis (PortablE Decoupled Electromagnetic Induction Sensor) is a time-domain handheld electromagnetic induction (EMI) instrument with the intended purpose of improving the detection and classification of UneXploded Ordnance (UXO). Pedemis sports nine coplanar transmitters (the Tx assembly) and nine triaxial receivers held in a fixed geometry with respect to each other (the Rx assembly), but with that Rx assembly physically decoupled from the Tx assembly, allowing flexible data acquisition modes and deployment options. The data acquisition (DAQ) electronics consists of the National Instruments (NI) cRIO platform, which is much lighter and more energy efficient than prior DAQ platforms. Pedemis has successfully acquired initial data, and inversion of the data acquired during these initial tests has yielded satisfactory polarizabilities of a spherical target. In addition, precise positioning of the Rx assembly has been achieved via position inversion algorithms based solely on the data acquired from the receivers during the "on-time" of the primary field. Pedemis has been designed to be a flexible yet user-friendly EMI instrument that can survey, detect and classify targets in a one-pass solution. In this paper, the Pedemis instrument is introduced along with its operation protocols, initial data results, and current status.

  2. Implement an adjustable delay time digital trigger for an NI data acquisition card in a high-speed demodulation system

    NASA Astrophysics Data System (ADS)

    Zhang, Hongtao; Fan, Lingling; Wang, Pengfei; Park, Seong-Wook

    2012-06-01

    A National Instruments (NI) DAQ card, the PCI 5105, is installed in a high-speed demodulation system based on a fiber Fabry-Pérot tunable filter (FFP-TF). The instability of the spectra of the fiber Bragg grating sensors caused by intrinsic drifts of the FFP-TF requires an appropriate, flexible trigger. However, the driver of the DAQ card in the current development environment provides only a digital trigger type, not an analog trigger. Moreover, the high level of the trigger signal derived from the tuning voltage of the FFP-TF is larger than the maximum input overload voltage of the PCI 5105 card. To resolve this incompatibility, a novel converter that changes an analog trigger signal into a digital trigger signal has been reported previously. However, the obvious delay time between input and output signals limits the function of the demodulation system. Accordingly, we report an improved low-cost, small-size converter with an adjustable delay time. This new scheme can reduce the delay time to zero, or close to it, when the frequency of the trigger signal is less than 3,000 Hz. This method might be employed to resolve similar problems or be applied in semiconductor integrated circuits.

  3. Data Quality Monitoring System for New GEM Muon Detectors for the CMS Experiment Upgrade

    NASA Astrophysics Data System (ADS)

    King, Robert; CMS Muon Group Team

    2017-01-01

    The Gas Electron Multiplier (GEM) detectors are novel detectors designed to improve the muon trigger and tracking performance of the CMS experiment for the high-luminosity upgrade of the LHC. Partial installation of GEM detectors is planned during the 2016-2017 technical stop. Before the GEM system is installed underground, its data acquisition (DAQ) electronics must be thoroughly tested. The DAQ system includes several commercial and custom-built electronic boards running custom firmware. The front-end electronics are radiation-hard and communicate via optical fibers. The data quality monitoring (DQM) software framework has been designed to provide online verification of the integrity of the data produced by the detector electronics, and to promptly identify potential hardware or firmware malfunctions in the system. Local hit reconstruction and clustering algorithms allow quality control of the data produced by each GEM chamber. Once the new detectors are installed, the DQM will monitor the stability and performance of the system during normal data-taking operations. We discuss the design of the DQM system, the software being developed to read out and process the detector data, and the methods used to identify and report hardware and firmware malfunctions of the system.

  4. Removal of anti-Stokes emission background in STED microscopy by FPGA-based synchronous detection

    NASA Astrophysics Data System (ADS)

    Castello, M.; Tortarolo, G.; Coto Hernández, I.; Deguchi, T.; Diaspro, A.; Vicidomini, G.

    2017-05-01

    In stimulated emission depletion (STED) microscopy, the role of the STED beam is to de-excite, via stimulated emission, the fluorophores that have been previously excited by the excitation beam. This condition, together with specific beam intensity distributions, allows obtaining true sub-diffraction spatial resolution images. However, if the STED beam has a non-negligible probability to excite the fluorophores, a strong fluorescent background signal (anti-Stokes emission) reduces the effective resolution. For STED scanning microscopy, different synchronous detection methods have been proposed to remove this anti-Stokes emission background and recover the resolution. However, every method works only for a specific STED microscopy implementation. Here we present a user-friendly synchronous detection method compatible with any STED scanning microscope. It exploits a data acquisition (DAQ) card based on a field-programmable gate array (FPGA), which is progressively used in STED microscopy. In essence, the FPGA-based DAQ card synchronizes the fluorescent signal registration, the beam deflection, and the excitation beam interruption, providing a fully automatic pixel-by-pixel synchronous detection method. We validate the proposed method in both continuous wave and pulsed STED microscope systems.

  5. First Results from the Telescope Array RAdar (TARA) Detector

    NASA Astrophysics Data System (ADS)

    Myers, Isaac

    2014-03-01

    The TARA cosmic ray detector has been in operation for about a year and a half. This bi-static radar detector was designed with the goal of detecting cosmic rays in coincidence with Telescope Array (TA). A new high-power (25 kW, 5 MW effective radiated power) transmitter and antenna array and a 250 MHz FPGA-based DAQ have been operational since August 2013. The eight-Yagi antenna array broadcasts a 54.1 MHz tone across the TA surface detector array toward our receiver station 50 km away at the Long Ridge fluorescence detector. Receiving antennas feed an intelligent DAQ that self-adjusts to the fluctuating radio background and which employs a bank of matched filters that search in real time for chirp radar echoes. Millions of triggers have been collected in this mode. A second mode is a forced-trigger scheme that uses the trigger status of the fluorescence telescope. Of those triggers collected in FD-triggered mode, about 800 correspond with well-reconstructed TA events. I will describe recent advancements in calibrating key components in the transmitter and receiver RF chains and the analysis of FD-triggered data. Work supported by W.M. Keck Foundation and NSF.
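
    A rough sketch of the matched-filter echo search described above: correlate the digitized receiver stream against a linear-chirp template and report samples where the correlation magnitude crosses a threshold. The chirp parameters and threshold are illustrative, not TARA's actual settings:

        # Sketch: matched-filter search for chirp radar echoes.
        import numpy as np

        def chirp_template(fs, f0, f1, duration):
            t = np.arange(int(fs * duration)) / fs
            k = (f1 - f0) / duration                      # chirp rate, Hz/s
            return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

        def detect(stream, template, threshold):
            corr = np.correlate(stream, template, mode="valid")
            return np.nonzero(np.abs(corr) > threshold)[0]  # candidate echo times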

  6. Real-Time Data Streaming and Storing Structure for the LHD's Fusion Plasma Experiments

    NASA Astrophysics Data System (ADS)

    Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Imazu, Setsuo; Nonomura, Miki; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Ida, Katsumi

    2016-02-01

    The LHD data acquisition and archiving system, i.e., the LABCOM system, has been fully equipped with high-speed real-time acquisition, streaming, and storage capabilities. To deal with more than 100 MB/s of continuously generated data at each data acquisition (DAQ) node, DAQ tasks have been implemented as multitasking and multithreaded ones in which shared memory plays the most important role for fast, massive inter-process data handling. By introducing a 10-second time chunk named a “subshot,” endless data streams can be stored into a consecutive series of fixed-length data blocks so that they soon become readable by other processes even while the write process is continuing. Real-time device and environmental monitoring are also implemented in the same way, with further sparse resampling. The central data storage has been separated into two layers to be capable of receiving multiple 100 MB/s inflows in parallel. For the frontend layer, high-speed SSD arrays are used for the GlusterFS distributed filesystem, which can provide up to 2 GB/s throughput. These design optimizations should be informative for implementing next-generation data archiving systems in big physics, such as ITER.
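
    A minimal sketch of the 10-second "subshot" chunking idea, assuming a hypothetical read_block() callable that returns freshly acquired bytes: the writer closes one fixed-length chunk file per interval, so readers can consume completed chunks while acquisition continues. File naming is illustrative:

        # Sketch: splitting an endless stream into 10-second subshot files.
        import time

        def write_subshots(read_block, shot_id, chunk_seconds=10):
            index = 0
            while True:
                path = f"shot{shot_id}_sub{index:05d}.dat"
                with open(path, "wb") as f:
                    deadline = time.monotonic() + chunk_seconds
                    while time.monotonic() < deadline:
                        f.write(read_block())   # append freshly acquired data
                index += 1                      # closed file is now readable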

  7. Evaluation of the technical performance of novel holotranscobalamin (holoTC) assays in a multicenter European demonstration project.

    PubMed

    Morkbak, Anne L; Heimdal, Randi M; Emmens, Kathleen; Molloy, Anne; Hvas, Anne-Mette; Schneede, Joern; Clarke, Robert; Scott, John M; Ueland, Per M; Nexo, Ebba

    2005-01-01

    A commercially available holotranscobalamin (holoTC) radioimmunoassay (RIA) (Axis-Shield, Dundee, Scotland) was evaluated in four laboratories and compared with a holoTC ELISA run in one laboratory. The performance of the holoTC RIA assay was comparable in three of the four participating laboratories. The results from these three laboratories, involving at least 20 initial runs of "low", "medium" and "high" serum-based controls (mean holoTC concentrations 34, 60 and 110 pmol/L, respectively), yielded an intra-laboratory imprecision of 6-10%. No systematic inter-laboratory deviations were observed on runs involving 72 patient samples (holoTC concentration range 10-160 pmol/L). A fourth laboratory demonstrated higher assay imprecision for control samples and systematic deviation of results for the patient samples. Measurement of holoTC by ELISA showed an imprecision of 4-5%, and slightly higher mean values for the controls (mean holoTC concentrations 40, 70 and 114 pmol/L, respectively). Comparable results were obtained for the patient samples. The long-term intra-laboratory imprecision was 12% for the holoTC RIA and 6% for the ELISA. In conclusion, it would be prudent to check the calibration and precision prior to starting to use these holoTC assays in research or clinical practice. The results obtained using the holoTC RIA were similar to those obtained using the holoTC ELISA assay.

  8. Evaluation of the Olympus AU-510 analyser.

    PubMed

    Farré, C; Velasco, J; Ramón, F

    1991-01-01

    The selective multitest Olympus AU-510 analyser was evaluated according to the recommendations of the Comision de Instrumentacion de la Sociedad Española de Quimica Clinica and the European Committee for Clinical Laboratory Standards. The evaluation was carried out in two stages: an examination of the analytical units and then an evaluation under routine work conditions. The operational characteristics of the system were also studied. The first stage included a photometric study: depending on the absorbance, the inaccuracy varies from +0.5% to -0.6% at 405 nm and from -5.6% to 10.6% at 340 nm; the imprecision ranges between -0.22% and 0.56% at 405 nm and between 0.09% and 2.74% at 340 nm. Linearity was acceptable, apart from a very low absorbance for NADH at 340 nm, and the imprecision of the serum sample pipetter was satisfactory. Twelve serum analytes were studied under routine conditions: glucose, urea, urate, cholesterol, triglycerides, total bilirubin, creatinine, phosphate, iron, aspartate aminotransferase, alanine aminotransferase and gamma-glutamyl transferase. The within-run imprecision (CV%) ranged from 0.67% for phosphate to 2.89% for iron, and the between-run imprecision from 0.97% for total bilirubin to 7.06% for iron. There was no carryover in a study of the serum sample pipetter. Carry-over studies with the reagent and sample pipetters show some cross-contamination in the iron assay.

  9. A new array for the study of ultra high energy gamma-ray sources

    NASA Technical Reports Server (NTRS)

    Brooke, G.; Lambert, A.; Ogden, P. A.; Patel, M.; Ferrett, J. C.; Reid, R. J. O.; Watson, A. A.; West, A. A.

    1985-01-01

    The design and operation of a 32 × 1 m² array of scintillation detectors for the detection of 10^15 eV cosmic rays is described, with an expected angular resolution of 1 deg, thus improving the present signal/background ratio for gamma-ray sources. Data are recorded on a hybrid CAMAC, an in-house system which uses a laser and Pockels-cell arrangement to routinely calibrate the timing stability of the detectors.

  10. Proceedings of the workshop on B physics at hadron accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBride, P.; Mishra, C.S.

    1993-12-31

    This report contains papers on the following topics: Measurement of Angle {alpha}; Measurement of Angle {beta}; Measurement of Angle {gamma}; Other B Physics; Theory of Heavy Flavors; Charged Particle Tracking and Vertexing; e and {gamma} Detection; Muon Detection; Hadron ID; Electronics, DAQ, and Computing; and Machine Detector Interface. Selected papers have been indexed separately for inclusion the in Energy Science and Technology Database.

  11. Imprecise (fuzzy) information in geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardossy, A.; Bogardi, I.; Kelly, W.E.

    1988-05-01

    A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been an insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.

  12. A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information

    PubMed Central

    Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter

    2016-01-01

    Abstract This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non‐fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation. PMID:27840456
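
    To make the construction of membership functions from linguistic statements concrete, the sketch below maps the vagueness of a historical flood description to the width of a trapezoidal membership function over peak discharge. The shapes and numbers are illustrative assumptions, not values taken from the paper.

        # Assumed sketch: vaguer statements get wider membership functions
        # over peak discharge Q [m^3/s]; precise statements get narrow ones.
        def trapezoid(q, a, b, c, d):
            """Membership of discharge q in the trapezoid (a, b, c, d)."""
            if q <= a or q >= d:
                return 0.0
            if q < b:
                return (q - a) / (b - a)
            if q <= c:
                return 1.0
            return (d - q) / (d - c)

        precise = (3000, 3200, 3400, 3600)  # e.g. "water reached the second flood mark"
        vague = (2000, 3000, 4000, 5000)    # e.g. "the greatest flood in living memory"
        for q in (2500, 3300, 4500):
            print(q, trapezoid(q, *precise), trapezoid(q, *vague))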

  13. A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information

    NASA Astrophysics Data System (ADS)

    Salinas, José Luis; Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter

    2016-09-01

    This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non-fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation.

  14. Characterization and Validation of the LT-SYS Copper Assay on a Roche Cobas 8000 c502 Analyzer.

    PubMed

    Kraus, F Bernhard; Mischereit, Marlies; Eller, Christoph; Ludwig-Kraus, Beatrice

    2017-02-01

Validation of the LT-SYS quantitative in vitro copper assay on a Roche Cobas 8000 c502 analyzer and comparison with a BIOMED assay on a Roche Cobas Mira analyzer. Imprecision and bias were quantified at different concentration levels (serum and plasma) over a 20-day period. Linearity was assessed covering a range from 4.08 µmol/L to 33.8 µmol/L. Limit of blank (LoB) and limit of detection (LoD) were established based on a total of 120 blank and low-level samples. The method comparison was based on 58 plasma samples. Within-run imprecision ranged from 0.7% to 1.2% and within-laboratory imprecision from 1.4% to 3.3%. Relative bias for the 2 serum pools with known target values was less than 2.5%. The assay did not deviate from linearity over the tested measuring range. LoB and LoD were 0.12 µmol/L and 0.23 µmol/L, respectively. The method comparison revealed an average deviation of 11.5% (2.016 µmol/L), and the linear regression fit was y = 1.464 + 0.795x. The LT-SYS copper assay characterized in this study showed fully acceptable performance, with good imprecision and bias, no deviation from linearity in the relevant measuring range, and very low LoB and LoD.

  15. A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information.

    PubMed

    Salinas, José Luis; Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter

    2016-09-01

    This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non-fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation.

  16. The development of algorithms for the deployment of new version of GEM-detector-based acquisition system

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasiński, Piotr; Linczuk, Paweł; Poźniak, Krzysztof T.; Chernyshova, Maryna; Kasprowicz, Grzegorz; Wojeński, Andrzej; Zabolotny, Wojciech; Zienkiewicz, Paweł

    2016-09-01

This article gives an overview of the post-processing algorithms implemented during the development and testing of the GEM-detector-based acquisition system. Information is given on MEX functions for extended statistics collection, a unified hex topology, and an optimized S-DAQ algorithm for splitting overlapped signals. An additional discussion of bottlenecks and the major factors affecting optimization is presented.

  17. Characterization Techniques for a MEMS Electric-Field Sensor in Vacuum

    DTIC Science & Technology

    2012-01-01

nected so that the noise contributions of the transimpedance amplifier and the digitizer may be determined. The raw voltage data, after processing...of Vrms/rtHz. The noise may be seen in terms of the device transduction physics, signal conditioning (transimpedance amp), and DAQ. (right) Field...Sensor using Thermal Actuators with Mechanically Amplified Response," Solid-State Sensors, Actuators and Microsystems Conference, 2007. TRANSDUCERS

  18. Experimental Investigation and Numerical Predication of a Cross-Flow Fan

    DTIC Science & Technology

    2006-12-01

Figure 3. Combination probes and pressure tap layout. Figure 4. CFF_DAQ graphical user interface...properties were United Sensor Devices model USD-C-161 3 mm (1/8-inch) combination thermocouple/pressure probes, and static pressure taps. The...was applied to the three static pressure taps at the throat of the bell-mouth and to the two exhaust duct static pressure taps. Once the data

  19. Design of virtual display and testing system for moving mass electromechanical actuator

    NASA Astrophysics Data System (ADS)

    Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng

    2015-12-01

To address the control, measurement, and virtual motion display of a moving mass electromechanical actuator (MMEA), a virtual testing system for the MMEA was developed based on the PC-DAQ architecture and the LabVIEW software platform. It can accomplish comprehensive test tasks such as drive control of the MMEA, kinematic parameter tests, centroid position measurement, and virtual display of movement. The system solves the alignment of acquisition times between multiple measurement channels on different DAQ cards; on this basis, the research focused on dynamic 3D virtual display in LabVIEW, realized both by calling DLLs and by using 3D graph-drawing controls. Considering the collaboration with the virtual testing system, including the hardware drivers and the data acquisition software, the 3D graph-drawing-control method was selected, which provides synchronized measurement, control, and display. The system can measure the dynamic centroid position and the kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface offers realistic rendering and smooth motion, solving the problem of display and playback for an MMEA inside a closed shell.

  20. The control system of the multi-strip ionization chamber for the HIMM

    NASA Astrophysics Data System (ADS)

    Li, Min; Yuan, Y. J.; Mao, R. S.; Xu, Z. G.; Li, Peng; Zhao, T. C.; Zhao, Z. L.; Zhang, Nong

    2015-03-01

Heavy Ion Medical Machine (HIMM) is a carbon-ion cancer treatment facility being built by the Institute of Modern Physics (IMP) in China. In this facility, the transverse profile and intensity of the beam at the treatment terminals will be measured by the multi-strip ionization chamber. To provide the beam-position feedback needed for automatic beam commissioning, the data acquisition (DAQ) of this detector must react in less than 1 ms. The control system and DAQ software framework were therefore redesigned and developed with National Instruments Compact Reconfigurable Input/Output (CompactRIO) instead of PXI 6133. The software is LabVIEW-based and follows the producer-consumer pattern with a message mechanism and queue technology. The newly designed control system has been tested with carbon beam at the Heavy Ion Research Facility at Lanzhou-Cooler Storage Ring (HIRFL-CSR), where it provided a single beam-profile measurement in less than 1 ms with 1 mm beam-position resolution. The fast reaction time and high-precision data processing during the beam test verified the usability and maintainability of the software framework. Furthermore, the architecture is easily adapted to applications with different detectors, such as wire scanner detectors.
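
    The producer-consumer pattern with a message queue mentioned above can be illustrated outside LabVIEW. The Python sketch below is an assumed analogue, not the HIMM code: a producer thread emits simulated strip readouts and a consumer computes a beam-profile centroid.

        # Illustrative producer-consumer pattern with a message queue.
        # A None message is used here as the shutdown signal (an assumption).
        import queue, threading

        def producer(q, n_events=5, n_strips=8):
            for i in range(n_events):
                q.put([(i + s) % 5 + 1 for s in range(n_strips)])  # fake strip charges
            q.put(None)                                            # shutdown message

        def consumer(q):
            while (frame := q.get()) is not None:
                total = sum(frame)
                centroid = sum(s * v for s, v in enumerate(frame)) / total if total else 0
                print(f"profile centroid: strip {centroid:.2f}")

        q = queue.Queue()
        threading.Thread(target=producer, args=(q,)).start()
        consumer(q)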

  1. YARR - A PCIe based Readout Concept for Current and Future ATLAS Pixel Modules

    NASA Astrophysics Data System (ADS)

    Heim, Timon

    2017-10-01

The Yet Another Rapid Readout (YARR) system is a DAQ system designed for the readout of current-generation ATLAS Pixel FE-I4 chips and their successors. It utilises a commercial off-the-shelf PCIe FPGA card as a reconfigurable I/O interface, which acts as a simple gateway piping all data from the Pixel modules via the high-speed PCIe connection into the host system's memory. Modern CPU architectures, with parallelised processing in threads and commercial high-speed interfaces in everyday computers, make it possible to perform all processing at the software level in the host CPU. Although FPGAs are very powerful at parallel signal processing, their firmware is hard to maintain and constrained by the connected hardware. Software, on the other hand, is very portable and is upgraded frequently, with new features coming at no cost. A DAQ concept which does not rely on the underlying hardware for acceleration also eases the transition from prototyping in the laboratory to full-scale implementation in the experiment. The overall concept and data flow will be outlined, as well as the challenges and possible bottlenecks which can be encountered when moving the processing from hardware to software.

  2. The Belle II Pixel Detector Data Acquisition and Background Suppression System

    NASA Astrophysics Data System (ADS)

    Lautenbach, K.; Deschamps, B.; Dingfelder, J.; Getzkow, D.; Geßler, T.; Konorov, I.; Kühn, W.; Lange, S.; Levit, D.; Liu, Z.-A.; Marinas, C.; Münchow, D.; Rabusov, A.; Reiter, S.; Spruck, B.; Wessel, C.; Zhao, J.

    2017-06-01

The Belle II experiment at the future SuperKEKB collider in Tsukuba, Japan, features a design luminosity of 8 × 10^35 cm^-2 s^-1, which is a factor of 40 larger than that of its predecessor Belle. The pixel detector (PXD) with about 8 million pixels is based on the DEPFET technology and will improve the vertex resolution in the beam direction by a factor of 2. With an estimated trigger rate of 30 kHz, the PXD is expected to generate a data rate of 20 GBytes/s, which is about 10 times larger than the amount of data generated by all other Belle II subdetectors. Due to the large beam-related background, the PXD requires a data acquisition system with high-bandwidth data links and real-time background reduction by a factor of 30. To achieve this, the Belle II pixel DAQ uses an FPGA-based computing platform with high-speed serial links implemented in the ATCA (Advanced Telecommunications Computing Architecture) standard. The architecture and performance of the data acquisition system and the data reduction of the PXD will be presented. In April 2016 and February 2017 a prototype PXD-DAQ system operated in a test beam campaign delivered data with the whole readout chain under realistic high-rate conditions. Final results from the beam test will be presented.

  3. Multichannel FPGA-Based Data-Acquisition-System for Time-Resolved Synchrotron Radiation Experiments

    NASA Astrophysics Data System (ADS)

    Choe, Hyeokmin; Gorfman, Semen; Heidbrink, Stefan; Pietsch, Ullrich; Vogt, Marco; Winter, Jens; Ziolkowski, Michael

    2017-06-01

The aim of this contribution is to describe our recent development of a novel compact field-programmable gate-array (FPGA)-based data acquisition (DAQ) system for use with multichannel X-ray detectors at synchrotron radiation facilities. The system is designed for time-resolved counting of single photons arriving from several (currently 12) independent detector channels simultaneously. Detector signals of at least 2.8 ns duration are latched by asynchronous logic and then synchronized with the 100 MHz system clock. The incoming signals are subsequently sorted into 10 000 time bins, where they are counted according to the arrival time of the photons with respect to the trigger signal. A repeatable triggered mode of operation is used to achieve high statistics of accumulated counts. The time-bin width is adjustable from 10 ns to 1 ms. In addition, a special mode of operation with 2 ns time resolution is provided for two detector channels. The system is implemented in pocket-size FPGA-based hardware of 10 cm × 10 cm × 3 cm and can thus easily be transported between synchrotron radiation facilities. For setup and data read-out, the hardware is connected via a USB interface to a portable control computer. DAQ applications are provided in both LabVIEW and MATLAB environments.
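
    A rough software model of the time-binning scheme (assumed, for illustration only; the real system does this in FPGA logic): arrival times relative to each trigger are histogrammed into fixed-width bins and accumulated over repeated triggers.

        # Accumulating photon arrival times into fixed-width time bins over
        # repeated triggers. Bin count and width follow the text; the random
        # arrival times are a stand-in for real detector data.
        import numpy as np

        N_BINS, BIN_WIDTH = 10_000, 10e-9          # 10 000 bins of 10 ns
        edges = np.arange(N_BINS + 1) * BIN_WIDTH
        counts = np.zeros(N_BINS, dtype=np.int64)

        rng = np.random.default_rng(0)
        for trigger in range(100):                 # repeatable triggered operation
            arrivals = rng.uniform(0, N_BINS * BIN_WIDTH, size=50)
            counts += np.histogram(arrivals, bins=edges)[0]

        print(counts.sum(), "photons accumulated over", N_BINS, "time bins")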

  4. Neighborhood noise pollution as a determinant of displaced aggression: a pilot study.

    PubMed

    Dzhambov, Angel; Dimitrova, Donka

    2014-01-01

Noise pollution is still a growing public health problem with a significant impact on psychological health and well-being. The aim of this study was to investigate the impact of noise on displaced aggression (DA) in different subgroups of residents in one of the neighborhoods of Plovdiv city. A cross-sectional semi-structured interview survey was conducted using specially designed data registration forms and 33 close-ended and open-ended questions, divided into two major panels - one original and one based on a modified version of the Displaced Aggression Questionnaire (DAQ). The mean score for DA was 61.12 (±19.97). Hearing noises above the perceived normal threshold, higher noise sensitivity and continuous noises were associated with higher levels of DA. Low-frequency and high-intensity noises were also associated with higher DA scores. A multiple regression model supported these findings. Contrary to previous research, age was positively correlated with noise sensitivity and aggression. We speculate that this might be due to the relatively lower socio-economic standard and quality of life in Bulgaria; social climate might therefore modify the way people perceive and react to environmental noise. Finally, the DAQ proved to be a viable measurement tool for these associations and might be further implemented and modified to suit the purposes of psychoacoustic assessment.

  5. Internal monitoring of GBTx emulator using IPbus for CBM experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Zabolotny, Wojciech; Sau, Suman; Chkrabarti, Amlan; Saini, Jogender; Chattopadhyay, Subhasis; Pal, Sushanta Kumar

    2015-09-01

The Compressed Baryonic Matter (CBM) experiment is part of the Facility for Antiproton and Ion Research (FAIR) at GSI in Darmstadt. The CBM experiment requires precisely time-synchronized, fault-tolerant, self-triggered electronics for its Data Acquisition (DAQ) system, which must support high data rates (up to several TB/s). As part of the implementation of the DAQ system of the Muon Chamber (MUCH), one of the important detectors in the CBM experiment, an FPGA-based Gigabit Transceiver (GBTx) emulator has been implemented. The readout chain for MUCH consists of XYTER chips (front-end electronics) connected directly to the detector, the GBTx emulator, the Data Processing Board (DPB), and the First Level Event Selector board (FLIB) with its back-end software interface. The GBTx emulator is connected to the XYTER emulator through LVDS (Low Voltage Differential Signalling) lines on the front end, and to the DPB through a 4.8 Gbps optical link on the back end. IPbus over Ethernet is used for internal monitoring of the registers within the GBTx. The IPbus implementation uses the User Datagram Protocol (UDP) in the transport layer of the OSI model, so the GBTx can be controlled remotely. A Python script on the computer side drives the IPbus controller.
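
    The register monitoring described above can be pictured as a simple request-reply exchange over UDP. The sketch below is deliberately simplified: the one-word packet layout and the host name are invented for illustration, while the real IPbus protocol defines its own transaction headers and is normally driven through dedicated client software.

        # Highly simplified register read over UDP, in the spirit of the
        # IPbus-based monitoring above. The packet format here is an
        # assumption, NOT the real IPbus wire protocol.
        import socket, struct

        def read_register(host, port, address, timeout=1.0):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            sock.sendto(struct.pack(">I", address), (host, port))  # request: 32-bit address
            data, _ = sock.recvfrom(4)                             # reply: 32-bit value
            return struct.unpack(">I", data)[0]

        # value = read_register("gbtx-emulator.local", 50001, 0x0004)  # hypothetical target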

  6. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Martoiu, S.; Sidiropoulou, O.; Zibell, A.

    2015-12-01

A Micromegas (MM) quadruplet prototype with an active area of 0.5 m², which adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS-compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data, after generating valid event fragments, to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. Using the ATLAS TDAQ Software, a dedicated Micromegas segment has been implemented in order to include the detector inside the main ATLAS DAQ partition. A full set of tests, on the hardware and software aspects, is presented.

  7. Automatic control of a negative ion source

    NASA Astrophysics Data System (ADS)

    Saadatmand, K.; Sredniawski, J.; Solensten, L.

    1989-04-01

A CAMAC-based control architecture has been devised for a Berkeley-type H⁻ volume ion source [1]. The architecture employs three 80386 PCs. One PC is dedicated to control and monitoring of source operation; another works with digitizers to provide data acquisition of waveforms; the third is used for off-line analysis. Initially, operation of the source was put under remote (supervisory) computer control. This was followed by the development of an automated startup procedure. Finally, a study of the physics of operation is now underway to establish a database from which automatic beam optimization can be derived.

  8. A balloon-borne high-resolution spectrometer for observations of gamma-ray emission from solar flares

    NASA Technical Reports Server (NTRS)

    Crannell, C. J.; Starr, R.; Stottlemyre, A. R.; Trombka, J. I.

    1984-01-01

The design, development, and balloon-flight verification of a payload for observations of gamma-ray emission from solar flares are reported. The payload incorporates a high-purity germanium semiconductor detector, standard NIM and CAMAC electronics modules, a thermally stabilized pressure housing, and regulated battery power supplies. The flight system is supported on the ground with interactive data-handling equipment comprising similar electronics hardware. The modularity and flexibility of the payload, together with the resolution and stability obtained throughout a 30-hour flight, make it readily adaptable for high-sensitivity, long-duration balloon-flight applications.

  9. A UNIX-based real-time data acquisition system for microprobe analysis using an advanced X11 window toolkit

    NASA Astrophysics Data System (ADS)

    Kramer, J. L. A. M.; Ullings, A. H.; Vis, R. D.

    1993-05-01

A real-time data acquisition system for microprobe analysis has been developed at the Free University of Amsterdam. The system is composed of two parts: a front-end real-time system and a back-end monitoring system. The front-end consists of a VMEbus-based system which reads out a CAMAC crate. The back-end is implemented on a Sun workstation running the UNIX operating system. This separation allows the integration of a minimal, and consequently very fast, real-time executive within the sophisticated possibilities of advanced UNIX workstations.

  10. A multi-channel coronal spectrophotometer.

    NASA Technical Reports Server (NTRS)

    Landman, D. A.; Orrall, F. Q.; Zane, R.

    1973-01-01

    We describe a new multi-channel coronal spectrophotometer system, presently being installed at Mees Solar Observatory, Mount Haleakala, Maui. The apparatus is designed to record and interpret intensities from many sections of the visible and near-visible spectral regions simultaneously, with relatively high spatial and temporal resolution. The detector, a thermoelectrically cooled silicon vidicon camera tube, has its central target area divided into a rectangular array of about 100,000 pixels and is read out in a slow-scan (about 2 sec/frame) mode. Instrument functioning is entirely under PDP 11/45 computer control, and interfacing is via the CAMAC system.

  11. Data acquisition using the 168/E. [CERN ISR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, J.T.; Cittolin, S.; Demoulin, M.

    1983-03-01

Event sizes and data rates at the CERN antiproton-proton (p̄p) collider make for a formidable environment for a high-level trigger. A system using three 168/E processors for real-time event selection in experiment UA1 is described. With the 168/E data memory expanded to 512K bytes, each processor holds a complete event, allowing a FORTRAN trigger algorithm access to data from the entire detector. A smart CAMAC interface reads five Remus branches in parallel, transferring one word to the target processor every 0.5 µs. The NORD host computer can simultaneously read an accepted event from another processor.

  12. Scheduling periodic jobs that allow imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1990-01-01

The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N or type C according to the undesirable effects of errors, and the two workload models characterize the two types. The optional parts of the tasks in a type-N job need never be completed; the resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
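
    A toy illustration of the mandatory/optional task model (not the paper's scheduling algorithm): within one period, all mandatory parts must fit; remaining capacity is then spent on optional parts, and whatever optional work is left undone corresponds to the residual error.

        # Toy mandatory/optional split within one period. The greedy
        # "biggest refinement first" rule below is an illustrative choice.
        def schedule_period(tasks, capacity):
            """tasks: list of (name, mandatory_cost, optional_cost)."""
            used = sum(m for _, m, _ in tasks)
            if used > capacity:
                raise RuntimeError("no feasible schedule: mandatory parts overrun")
            plan = []
            for name, m, opt in sorted(tasks, key=lambda t: -t[2]):
                extra = min(opt, capacity - used)
                used += extra
                plan.append((name, m + extra, opt - extra))  # leftover optional ~ error
            return plan

        print(schedule_period([("A", 2, 4), ("B", 3, 1), ("C", 1, 2)], capacity=9))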

  13. Cardiovascular disease testing on the Dimension Vista system: biomarkers of acute coronary syndromes.

    PubMed

    Kelley, Walter E; Lockwood, Christina M; Cervelli, Denise R; Sterner, Jamie; Scott, Mitchell G; Duh, Show-Hong; Christenson, Robert H

    2009-09-01

    Performance characteristics of the LOCI cTnI, CK-MB, MYO, NTproBNP and hsCRP methods on the Dimension Vista System were evaluated. Imprecision (following CLSI EP05-A2 guidelines), limit of quantitation (cTnI), limit of blank, linearity on dilution, serum versus plasma matrix studies (cTnI), and method comparison studies were conducted. Method imprecision of 1.8 to 9.7% (cTnI), 1.8 to 5.7% (CK-MB), 2.1 to 2.2% (MYO), 1.6 to 3.3% (NTproBNP), and 3.5 to 4.2% (hsCRP) were demonstrated. The manufacturer's claimed imprecision, detection limits and upper measurement limits were met. Limit of Quantitation was 0.040 ng/mL for the cTnI assay. Agreement of serum and plasma values for cTnI (r=0.99) was shown. Method comparison study results were acceptable. The Dimension Vista cTnI, CK-MB, MYO, NTproBNP, and hsCRP methods demonstrate acceptable performance characteristics for use as an aid in the diagnosis and risk assessment of patients presenting with suspected acute coronary syndromes.

  14. Topics in inference and decision-making with partial knowledge

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.

  15. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

A new approach to estimating the errors caused by imprecise data is proposed in the context of software engineering. An ideal solution would be a software device that, given an arbitrary program, computes the error of its result. The software engineering aspect of the problem is to describe such a device in software terms and then to provide the user with precise numbers accompanied by error estimates. The feasibility of a program capable of computing both a quantity and its error estimate over the range of possible measurement errors is demonstrated.
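
    One concrete way to realize the idea, sketched under the assumption that simple interval arithmetic suffices (the paper describes its "software device" only abstractly): each imprecise input carries lower and upper bounds, and arithmetic operations propagate them.

        # Naive interval arithmetic: results carry guaranteed error bounds.
        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)
            def __mul__(self, other):
                products = [a * b for a in (self.lo, self.hi)
                                  for b in (other.lo, other.hi)]
                return Interval(min(products), max(products))
            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        x = Interval(1.9, 2.1)   # a measurement of 2.0 +/- 0.1
        y = Interval(2.9, 3.1)   # a measurement of 3.0 +/- 0.1
        print(x * y + x)         # result together with its error bounds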

  16. In-beam experience with a highly granular DAQ and control network: TrbNet

    NASA Astrophysics Data System (ADS)

    Michel, J.; Korcyl, G.; Maier, L.; Traxler, M.

    2013-02-01

Virtually all data acquisition systems (DAQ) for nuclear and particle physics experiments use a large number of Field Programmable Gate Arrays (FPGAs) for data transport and for more complex tasks such as pattern recognition and data reduction. All the FPGAs in a large system have to share a common state, like a trigger number or an epoch counter, to keep the system synchronized for consistent event/epoch building. Additionally, the collected data have to be transported with high bandwidth, optionally via the ubiquitous Ethernet protocol. Furthermore, the FPGAs' internal states and configuration memories have to be accessed for control and monitoring purposes. Another requirement for a modern DAQ network is fault tolerance against intermittent data errors, in the form of automatic retransmission of faulty data. As FPGAs suffer from single-event effects when exposed to ionizing particles, the system also has to deal with failing FPGAs. The TrbNet protocol was developed taking all these requirements into account. Three virtual channels are merged on one physical medium: trigger/epoch information is transported with the highest priority, the data channel is second in priority, and the control channel is last. Combined with a small frame size of 80 bits, this guarantees low-latency data transport: a system with 100 front-ends can be built with a one-way latency of 2.2 µs. The TrbNet protocol was implemented in each of the 550 FPGAs of the HADES upgrade project and was successfully used during the Au+Au campaign in April 2012. With 2 × 10^6 Au ions per second and a 3% interaction ratio, the accepted trigger rate is 10 kHz while data is written to storage at 150 MBytes/s. Errors are reliably mitigated via the implemented packet retransmission and auto-shutdown of individual links. TrbNet was also used for full monitoring of the FEE status. The network stack is written in VHDL and has been successfully deployed on various Lattice and Xilinx devices. TrbNet is also used in other experiments, such as detector and electronics development systems for PANDA and CBM at FAIR. As a platform for such set-ups, e.g. for high-channel-count time measurement with 15 ps resolution, a generic FPGA platform (TRB3) has been developed.
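
    The fixed-priority merging of TrbNet's three virtual channels can be pictured with a toy model (an assumption for illustration, not the VHDL implementation):

        # Toy fixed-priority merge of three virtual channels onto one link:
        # trigger/epoch first, then data, then slow control.
        from collections import deque

        channels = {                       # priority 0 is highest
            0: deque(["trigger#42"]),
            1: deque(["event-frag-A", "event-frag-B"]),
            2: deque(["read reg 0x10"]),
        }

        def next_frame(channels):
            for prio in sorted(channels):
                if channels[prio]:
                    return channels[prio].popleft()
            return None

        while (frame := next_frame(channels)) is not None:
            print("send 80-bit frame:", frame)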

  17. Determination of serum calcium levels by 42Ca isotope dilution inductively coupled plasma mass spectrometry.

    PubMed

    Han, Bingqing; Ge, Menglei; Zhao, Haijian; Yan, Ying; Zeng, Jie; Zhang, Tianjiao; Zhou, Weiyan; Zhang, Jiangtao; Wang, Jing; Zhang, Chuanbao

    2017-11-27

Serum calcium level is an important clinical index that reflects pathophysiological states. However, detection accuracy in laboratory tests is not ideal; as such, a high-accuracy method is needed. We developed a reference method for measuring serum calcium levels by isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS), using 42Ca as the enriched isotope. Serum was digested with 69% ultrapure nitric acid and diluted to a suitable concentration. The 44Ca/42Ca ratio was measured in H2 mode; the spike concentration was calibrated by reverse IDMS using standard reference material (SRM) 3109a, and the sample concentration was determined by a bracketing procedure. We compared the performance of ID ICP-MS with those of three other reference methods in China using the same serum and aqueous samples. The relative expanded uncertainty of the sample concentration was 0.414% (k=2). The ranges of repeatability (within-run imprecision), intermediate imprecision (between-run imprecision), and intra-laboratory imprecision were 0.12%-0.19%, 0.07%-0.09%, and 0.16%-0.17%, respectively, for two of the serum samples. SRM909bI, SRM909bII, SRM909c, and GBW09152 were found to be within the certified value intervals, with mean relative bias values of 0.29%, -0.02%, 0.10%, and -0.19%, respectively. The range of recovery was 99.87%-100.37%. Results obtained by ID ICP-MS showed better accuracy than, and were highly correlated with, those of the other reference methods. ID ICP-MS is a simple and accurate candidate reference method for serum calcium measurement and can be used to establish and improve the serum calcium reference system in China.
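
    The bracketing step can be illustrated with a small calculation; the linear interpolation between two standards and all numbers below are assumptions for illustration, not values from the paper.

        # Assumed sketch of bracketing calibration: the sample signal is
        # interpolated linearly between standards measured before and after it.
        def bracket(signal_sample, signal_std, conc_std):
            """Linear interpolation between the two bracketing standards."""
            (s1, s2), (c1, c2) = signal_std, conc_std
            return c1 + (signal_sample - s1) * (c2 - c1) / (s2 - s1)

        # Hypothetical 44Ca/42Ca-derived signals for 2.20 and 2.40 mmol/L standards
        print(bracket(signal_sample=1.05, signal_std=(1.00, 1.10),
                      conc_std=(2.20, 2.40)))   # -> 2.30 mmol/L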

  18. Evaluation of a next generation direct whole blood enzymatic assay for hemoglobin A1c on the ARCHITECT c8000 chemistry system.

    PubMed

    Teodoro-Morrison, Tracy; Janssen, Marcel J W; Mols, Jasper; Hendrickx, Ben H E; Velmans, Mathieu H; Lotz, Johannes; Lackner, Karl; Lennartz, Lieselotte; Armbruster, David; Maine, Gregory; Yip, Paul M

    2015-01-01

    The utility of HbA1c for the diagnosis of type 2 diabetes requires an accurate, precise and robust test measurement system. Currently, immunoassay and HPLC are the most popular methods for HbA1c quantification, noting however the limitations associated with some platforms, such as imprecision or interference from common hemoglobin variants. Abbott Diagnostics has introduced a fully automated direct enzymatic method for the quantification of HbA1c from whole blood on the ARCHITECT chemistry system. Here we completed a method evaluation of the ARCHITECT HbA1c enzymatic assay for imprecision, accuracy, method comparison, interference from hemoglobin variants and specimen stability. This was completed at three independent clinical laboratories in North America and Europe. The total imprecision ranged from 0.5% to 2.2% CV with low and high level control materials. Around the diagnostic cut-off of 48 mmol/mol, the total imprecision was 0.6% CV. Mean bias using reference samples from IFCC and CAP ranged from -1.1 to 1.0 mmol/mol. The enzymatic assay also showed excellent agreement with HPLC methods, with slopes of 1.01 and correlation coefficients ranging from 0.984 to 0.996 compared to Menarini Adams HA-8160, Bio-Rad Variant II and Variant II Turbo instruments. Finally, no significant effect was observed for erythrocyte sedimentation or interference from common hemoglobin variants in patient samples containing heterozygous HbS, HbC, HbD, HbE, and up to 10% HbF. The ARCHITECT enzymatic assay for HbA1c is a robust and fully automated method that meets the performance requirements to support the diagnosis of type 2 diabetes.

  19. Importance of the efficiency of double-stranded DNA formation in cDNA synthesis for the imprecision of microarray expression analysis.

    PubMed

    Thormar, Hans G; Gudmundsson, Bjarki; Eiriksdottir, Freyja; Kil, Siyoen; Gunnarsson, Gudmundur H; Magnusson, Magnus Karl; Hsu, Jason C; Jonsson, Jon J

    2013-04-01

The causes of imprecision in microarray expression analysis are poorly understood, limiting the use of this technology in molecular diagnostics. Two-dimensional strandness-dependent electrophoresis (2D-SDE) separates nucleic acid molecules on the basis of length and strandness, i.e., double-stranded DNA (dsDNA), single-stranded DNA (ssDNA), and RNA·DNA hybrids. We used 2D-SDE to measure the efficiency of cDNA synthesis and its importance for the imprecision of an in vitro transcription-based microarray expression analysis. The relative amount of double-stranded cDNA formed in replicate experiments that used the same RNA sample template was highly variable, ranging between 0% and 72% of the total DNA. Microarray experiments showed an inverse relationship between the difference between sample pairs in probe variance and the relative amount of dsDNA. Approximately 15% of probes showed between-sample variation (P < 0.05) when the dsDNA percentage was between 12% and 35%. In contrast, only 3% of probes showed between-sample variation when the dsDNA percentage was 69% and 72%. Replication experiments of the 35% dsDNA and 72% dsDNA samples were used to separate sample variation from probe replication variation. The estimated SD of both the sample-to-sample variation and the probe replicates was lower in 72% dsDNA samples than in 35% dsDNA samples. Variation in the relative amount of double-stranded cDNA synthesized can be an important component of the imprecision in T7 RNA polymerase-based microarray expression analysis.

  20. Clinical phenotype-based gene prioritization: an initial study using semantic similarity and the human phenotype ontology.

    PubMed

    Masino, Aaron J; Dechene, Elizabeth T; Dulik, Matthew C; Wilkens, Alisha; Spinner, Nancy B; Krantz, Ian D; Pennington, Jeffrey W; Robinson, Peter N; White, Peter S

    2014-07-21

    Exome sequencing is a promising method for diagnosing patients with a complex phenotype. However, variant interpretation relative to patient phenotype can be challenging in some scenarios, particularly clinical assessment of rare complex phenotypes. Each patient's sequence reveals many possibly damaging variants that must be individually assessed to establish clear association with patient phenotype. To assist interpretation, we implemented an algorithm that ranks a given set of genes relative to patient phenotype. The algorithm orders genes by the semantic similarity computed between phenotypic descriptors associated with each gene and those describing the patient. Phenotypic descriptor terms are taken from the Human Phenotype Ontology (HPO) and semantic similarity is derived from each term's information content. Model validation was performed via simulation and with clinical data. We simulated 33 Mendelian diseases with 100 patients per disease. We modeled clinical conditions by adding noise and imprecision, i.e. phenotypic terms unrelated to the disease and terms less specific than the actual disease terms. We ranked the causative gene against all 2488 HPO annotated genes. The median causative gene rank was 1 for the optimal and noise cases, 12 for the imprecision case, and 60 for the imprecision with noise case. Additionally, we examined a clinical cohort of subjects with hearing impairment. The disease gene median rank was 22. However, when also considering the patient's exome data and filtering non-exomic and common variants, the median rank improved to 3. Semantic similarity can rank a causative gene highly within a gene list relative to patient phenotype characteristics, provided that imprecision is mitigated. The clinical case results suggest that phenotype rank combined with variant analysis provides significant improvement over the individual approaches. We expect that this combined prioritization approach may increase accuracy and decrease effort for clinical genetic diagnosis.
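
    A minimal sketch of information-content-based similarity in the spirit of the approach above; the toy ontology, the IC values and the averaging scheme are assumptions, not the published implementation.

        # Resnik-style similarity: IC of the most informative common ancestor,
        # averaged over best matches of the patient's terms against a gene's terms.
        ic = {"HP:A": 2.0, "HP:B": 4.5, "HP:C": 6.0}   # -log(term frequency), toy values
        ancestors = {"HP:A": {"HP:A"},
                     "HP:B": {"HP:B", "HP:A"},
                     "HP:C": {"HP:C", "HP:A"}}

        def term_sim(t1, t2):
            """IC of the most informative common ancestor of two terms."""
            common = ancestors[t1] & ancestors[t2]
            return max((ic[t] for t in common), default=0.0)

        def set_sim(patient_terms, gene_terms):
            """Average best-match similarity of patient terms against gene terms."""
            return sum(max(term_sim(p, g) for g in gene_terms)
                       for p in patient_terms) / len(patient_terms)

        print(set_sim({"HP:B", "HP:C"}, {"HP:B"}))   # genes would be ranked by this score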

  1. Effect of defuzzification method of fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known as "center of area" (COA) or "center of gravity" (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm depends on the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints: for any support value the sum of the reference membership function values equals one, and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all member sets of this family, the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets are defined for a given linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
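
    The following numerical sketch contrasts the two defuzzification schemes on one fuzzy output; the triangular reference sets and firing strengths are illustrative choices, not the paper's test cases.

        # COA (centroid of the clipped, aggregated membership function) versus
        # WACC (weighted average of the set centers) on a toy fuzzy output.
        import numpy as np

        x = np.linspace(0.0, 1.0, 1001)
        centers, half_width = [0.0, 0.5, 1.0], 0.5
        weights = [0.2, 0.8, 0.0]                  # firing strengths of the rules

        def tri(x, c, w):
            return np.clip(1 - np.abs(x - c) / w, 0, 1)

        mu = np.max([np.minimum(w, tri(x, c, half_width))
                     for c, w in zip(centers, weights)], axis=0)
        coa = np.trapz(mu * x, x) / np.trapz(mu, x)
        wacc = sum(w * c for c, w in zip(centers, weights)) / sum(weights)

        print(f"COA = {coa:.3f}, WACC = {wacc:.3f}")   # the two generally differ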

  2. Experimental Magnetohydrodynamic Energy Extraction from a Pulsed Detonation

    DTIC Science & Technology

    2015-03-01

    experimental data taken in this thesis will follow voltage profiles similar to Fig. 2. Notice the initial section in Fig. 2 shows exponential decay consistent...equal that time constant. The exponential curves in Fig. 2 show how changing the time constant can change the charge and/or discharge rate of the...see Fig. 1), at a sampling rate of 1 MHz. Shielded wire and a common ground were used throughout the DAQ system to avoid capacitive issues in the

  3. Man-Portable Vector EMI Sensor for Full UXO Characterization

    DTIC Science & Technology

    2012-03-01

    receivers (for survey in forest and/or in steep terrain). Left inset shows data acquisition (DAQ) and power unit mounted on a backpack frame. Right panel...survey list was created such as to minimize the overall travel distance to visit every anomaly. After the daily IVS survey field operators walked to...the red star at coordinates (0, 0), is generally offset from the signal peak. This observation motivated use of a conservative 3x3-point-grid survey

  4. 500 MHz narrowband beam position monitor electronics for electron synchrotrons

    NASA Astrophysics Data System (ADS)

    Mohos, I.; Dietrich, J.

    1998-12-01

    Narrowband beam position monitor electronics were developed in the Forschungszentrum Jülich-IKP for the orbit measurement equipment used at ELSA Bonn. The equipment uses 32 monitor chambers, each with four capacitive button electrodes. The monitor electronics, consisting of an rf signal processing module (BPM-RF) and a data acquisition and control module (BPM-DAQ), sequentially process and measure the monitor signals and deliver calculated horizontal and vertical beam position data via a serial network.

  5. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.

    PubMed

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-03-09

In this work a new, scalable and low-cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and is scalable in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the number of available Arduino input ports. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.
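
    The host side of such a monitoring system might look like the hedged Python sketch below; the serial line format ("cell_index,voltage") and the port name are assumptions, since the actual system drives the Arduino from NI LabVIEW.

        # Reading multiplexed cell voltages from an Arduino over a serial port.
        # Line format and port name are assumptions for illustration.
        import serial  # pyserial

        with serial.Serial("/dev/ttyACM0", 115200, timeout=1.0) as port:
            voltages = {}
            while len(voltages) < 120:              # up to 120 cells per stack
                line = port.readline().decode().strip()
                if not line:
                    break                           # timeout: stop collecting
                idx, volts = line.split(",")
                voltages[int(idx)] = float(volts)
            print(f"{len(voltages)} cell voltages read:", voltages)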

  6. Designing the detection system for the CORUS project

    NASA Astrophysics Data System (ADS)

    Kalogirou, A.

    2013-05-01

CORUS (Cosmic Rays in UK Schools) will be a network of muon detectors based in schools across the UK. Networks similar to CORUS already exist in other countries, such as the Netherlands and the USA. The main aim of the project is to teach high-school students about cosmic rays and experimental physics, as well as to motivate them to pursue studies in science. A set of muon detectors will be used for this purpose, and the objective of this study is to complete the design of the detectors, construct them and test their capabilities and limitations. The most important component of the muon detector is the electronic card used to collect, analyse and output data. A DAQ card used by QuarkNet, a network of detectors in the USA, has been used in the design of the CORUS detectors. Some readily available photomultiplier tubes have also been used, along with an interface board which connects them to the DAQ board. In this study, I tested whether these two components work well together by conducting a series of experiments, intended to be performed by the students, with the final detector set-up. The end result is that, although a number of improvements are needed before the detectors serve their purpose, this particular set-up does not impose any limitations on the experiments for which it is intended.

  7. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    DOE PAGES

    Claus, R.

    2015-10-23

The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system was upgraded during the LHC 2013-2015 shutdown to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high-bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new Xilinx Zynq series System on Chip, with a processor-centric architecture in which an ARM processor is embedded in FPGA fabric together with high-speed I/O resources and auxiliary memories, forming a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf, through software waveform feature extraction, to output 32 S-links. The full system was installed in September 2014. We present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  8. The TOTEM DAQ based on the Scalable Readout System (SRS)

    NASA Astrophysics Data System (ADS)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics programme approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
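
    The zero-suppression idea mentioned above can be shown in miniature; the sketch below is illustrative only (the real algorithm runs in SRS hardware), keeping just the channels whose amplitude exceeds pedestal plus threshold.

        # Toy zero suppression: keep only channels above pedestal + threshold,
        # so the recorded event shrinks without losing hit information.
        def zero_suppress(samples, pedestal, threshold):
            return {ch: a for ch, a in enumerate(samples) if a - pedestal > threshold}

        event = [3, 2, 45, 3, 1, 0, 78, 2]
        print(zero_suppress(event, pedestal=2, threshold=10))  # {2: 45, 6: 78}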

  9. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    PubMed Central

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

In this work a new, scalable and low-cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. The monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and is scalable in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature on PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the number of available Arduino input ports. Data storage and real-time monitoring are performed through an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  10. Using an ARM Processor to boost data acquisition rates

    NASA Astrophysics Data System (ADS)

    Brown, Anthony; Seaquest Collaboration

    2015-10-01

It has been proposed (Fermilab E-1067) to use the SeaQuest (E906/E1039/E1037) dimuon spectrometer to search for the dark photon and dark Higgs. The concept is to run in a parasitic mode with only minor upgrades to the spectrometer. Among the various upgrade requirements is an increase in the DAQ rate, and one minimal-cost approach to this will be discussed. The currently running SeaQuest (E906) experiment has modest rate requirements of around 1 kHz. Since the dark-particle search would involve recording particles originating in the first magnet, used as a beam dump, the data rate will be higher than when recording events from the target alone; the DAQ rate capability will therefore need to be increased to around 10 kHz. A very low cost solution is possible because the Academia Sinica-designed TDCs contain an ARM processor that was not needed to meet the original SeaQuest (E906) requirements. Since the 120 GeV beam from the Main Injector is delivered in a 4-second spill once per minute, and the ARM processor on the TDC has two dual-ported memory chips, these can be used to store data during each spill and then read the data out in the time between spills.

  11. Modular Integrated Stackable Layers (MISL) 1.1 Design Specification. Design Guideline Document

    NASA Technical Reports Server (NTRS)

    Yim, Hester J.

    2012-01-01

This document establishes the design guideline of the Modular Instrumentation Data Acquisition (MI-DAQ) system, drawing on several designs available in EV. The MI-DAQ provides options to customers depending on their system requirements, i.e., a 28 V interface power supply, a low-power battery-operated system, a low-power microcontroller, a higher-performance microcontroller, a USB interface, an Ethernet interface, wireless communication, various sensor interfaces, etc. Depending on the customer's requirements, the functional boards can be stacked up, from the power supply at the bottom of the stack to the user interfaces at higher levels. The stack-up of boards is accomplished by predefined and standardized power bus and data bus connections, which are included in this document along with other physical and electrical guidelines. This guideline also provides information for new design options. This specification is the product of a collaboration between NASA/JSC/EV and Texas A&M University. The goal of the collaboration is to open-source the specification and allow outside entities to design, build, and market modules that are compatible with the specification. NASA has designed and is using numerous modules that are compatible with this specification. A limited number of these modules will also be released as open-source designs to support the collaboration. The released designs are listed in the Applicable Documents.

  12. The Common Data Acquisition Platform in the Helmholtz Association

    NASA Astrophysics Data System (ADS)

    Kaever, P.; Balzer, M.; Kopmann, A.; Zimmer, M.; Rongen, H.

    2017-04-01

Various centres of the German Helmholtz Association (HGF) started in 2012 to develop a modular data acquisition (DAQ) platform, covering the entire range from detector readout to data transfer into parallel computing environments. This platform integrates generic hardware components like the multi-purpose HGF-Advanced Mezzanine Card or a smart scientific camera framework, adding user value with Linux drivers and board support packages. Technically, the scope comprises the DAQ chain from FPGA modules to computing servers, notably front-end electronics interfaces, microcontrollers and GPUs with their software, plus high-performance data transmission links. The core idea is a generic and component-based approach, enabling the implementation of specific experiment requirements with low effort. This so-called DTS platform will support standards like MTCA.4 in hardware and software to ensure compatibility with commercial components. Its capability to deploy on other crate standards or FPGA boards with PCI Express or Ethernet interfaces remains an essential feature. Competences of the participating centres are coordinated in order to provide a solid technological basis for both research topics in the Helmholtz Programme "Matter and Technology": "Detector Technology and Systems" and "Accelerator Research and Development". The DTS platform aims at reducing costs and development time and will ensure access to the latest technologies for the collaboration. Due to its flexible approach, it has the potential to be applied in other scientific programs.

  13. First upper limits on the radar cross section of cosmic-ray induced extensive air showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbasi, R. U.; Abe, M.; Abou Bakr Othman, M.

    TARA (Telescope Array Radar) is a cosmic-ray radar detection experiment co-located with Telescope Array, the conventional surface scintillation detector (SD) and fluorescence telescope detector (FD) near Delta, Utah, U.S.A. The TARA detector combines a 40 kW, 54.1 MHz VHF transmitter and high-gain transmitting antenna, which broadcasts the radar carrier over the SD array and within the FD field of view, with a 250 MS/s DAQ receiver. TARA has been collecting data since 2013 with the primary goal of observing the radar signatures of extensive air showers (EAS). Simulations indicate that echoes are expected to be short in duration (~10 µs) and to exhibit rapidly changing frequency, with rates on the order of 1 MHz/µs. The EAS radar cross-section (RCS) is currently unknown, although it has been the subject of over 70 years of speculation. A novel signal search technique is described in which the expected radar echo of a particular air shower is used as a matched-filter template and compared to waveforms obtained by triggering the radar DAQ with the Telescope Array fluorescence detector. No evidence for the scattering of radio-frequency radiation by EAS has been obtained to date. Finally, we report the first quantitative RCS upper limits using EAS that triggered the Telescope Array Fluorescence Detector.
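
    The matched-filter search described above can be illustrated with a minimal sketch: a synthetic chirp template (~10 µs duration, ~1 MHz/µs frequency slope, 250 MS/s sampling, numbers taken from the abstract) is cross-correlated with a noisy trace containing a weak injected echo. The start frequency and amplitudes are assumptions; this is not TARA's actual analysis code.

    ```python
    # Matched-filter search sketch: correlate a chirp template against a
    # noisy receiver trace and locate the correlation peak.
    import numpy as np

    FS = 250e6                           # 250 MS/s DAQ sampling rate
    t = np.arange(0, 10e-6, 1 / FS)      # ~10 us echo (2500 samples)
    f0, slope = 24.1e6, 1e12             # assumed start freq; ~1 MHz/us chirp
    template = np.sin(2 * np.pi * (f0 * t + 0.5 * slope * t**2))

    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 1.0, 50_000)                # noise-only trace
    inject_at = 30_000
    trace[inject_at:inject_at + t.size] += 0.5 * template   # weak echo

    # Matched filter: cross-correlate the trace with the expected template
    # and look for a peak standing out of the noise background.
    score = np.correlate(trace, template, mode="valid")
    print("peak at sample", int(np.argmax(np.abs(score))))   # ~30000
    ```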

  14. First upper limits on the radar cross section of cosmic-ray induced extensive air showers

    DOE PAGES

    Abbasi, R. U.; Abe, M.; Abou Bakr Othman, M.; ...

    2016-11-19

    TARA (Telescope Array Radar) is a cosmic-ray radar detection experiment co-located with Telescope Array, the conventional surface scintillation detector (SD) and fluorescence telescope detector (FD) near Delta, Utah, U.S.A. The TARA detector combines a 40 kW, 54.1 MHz VHF transmitter and high-gain transmitting antenna, which broadcasts the radar carrier over the SD array and within the FD field of view, with a 250 MS/s DAQ receiver. TARA has been collecting data since 2013 with the primary goal of observing the radar signatures of extensive air showers (EAS). Simulations indicate that echoes are expected to be short in duration (~10 µs) and to exhibit rapidly changing frequency, with rates on the order of 1 MHz/µs. The EAS radar cross-section (RCS) is currently unknown, although it has been the subject of over 70 years of speculation. A novel signal search technique is described in which the expected radar echo of a particular air shower is used as a matched-filter template and compared to waveforms obtained by triggering the radar DAQ with the Telescope Array fluorescence detector. No evidence for the scattering of radio-frequency radiation by EAS has been obtained to date. Finally, we report the first quantitative RCS upper limits using EAS that triggered the Telescope Array Fluorescence Detector.

  15. CAMAC driver for the RSX-11M V3 operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tippie, J. W.; Cannon, P. H.

    1977-01-01

    A driver for the Kinetic Systems 3911A dedicated crate controller and 3992 serial highway driver under RSX-11M is described. The implementation includes a modified UCB structure, with which multiple active I/O requests to a single controller are supported. The completion of an I/O request may be tied to external events via a WAIT-FOR-LAM command. Features of the driver include the ability to pass a list of FNAs in a single QIO call, serial-highway overhead that is transparent at the QIO level, and special control commands passed to the driver in the FNA list.

  16. Trigger and data acquisition system for the N-N̄ experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldo-Ceolin, M.; Bobisut, F.; Bonaiti, V.

    1991-04-01

    In this paper the trigger and data acquisition system of the N-N̄ experiment at the Institut Laue-Langevin in Grenoble is presented, together with CAMAC modules especially designed for this experiment. The trigger system is organized on three logical levels; it works in the presence of a high level of beam-induced noise, without beam-pulse synchronization, looking for a very rare signal. The data acquisition is based on a MicroVAX II computer in a cluster with four VAXstations, running the DAQP software developed at CERN. The system has been working for a year with high efficiency and reliability.

  17. Fermilab's DART DA system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pordes, R.; Anderson, J.; Berg, D.

    1994-04-01

    DART is the new data acquisition system designed and implemented for six Fermilab experiments by the Fermilab Computing Division and the experiments themselves. The complexity of the experiments varies greatly. Their data-taking throughput and event-filtering requirements range from a few (2-5) to tens (80) of CAMAC, FASTBUS and home-built front-end crates; from a few hundred kByte/sec to 160 MByte/sec front-end data collection rates; and from 0-3000 MIPS of level-3 processing. The authors report on the architecture and implementation of DART to date, and on the hardware and software components that are being developed and supported.

  18. Fermilab's DART DA system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pordes, R.; Anderson, J.; Berg, D.

    1994-12-31

    DART is the new data acquisition system designed and implemented for six Fermilab experiments by the Fermilab Computing Division and the experiments themselves. The complexity of the experiments varies greatly. Their data-taking throughput and event-filtering requirements range from a few (2-5) to tens (80) of CAMAC, FASTBUS and home-built front-end crates; from a few hundred kByte/sec to 160 MByte/sec front-end data collection rates; and from 0-3000 MIPS of level-3 processing. The authors report on the architecture and implementation of DART to date, and on the hardware and software components that are being developed and supported.

  19. Imprecise intron losses are less frequent than precise intron losses but are not rare in plants.

    PubMed

    Ma, Ming-Yue; Zhu, Tao; Li, Xue-Nan; Lan, Xin-Ran; Liu, Heng-Yuan; Yang, Yu-Fei; Niu, Deng-Ke

    2015-05-27

    In this study, we identified 19 intron losses, including 11 precise intron losses (PILs), six imprecise intron losses (IILs), one de-exonization, and one exon deletion in tomato and potato, and 17 IILs in Arabidopsis thaliana. Comparative analysis of related genomes confirmed that all of the IILs have been fixed during evolution. Consistent with previous studies, our results indicate that PILs are a major type of intron loss. However, at least in plants, IILs are unlikely to be as rare as previously reported. This article was reviewed by Jun Yu and Zhang Zhang. For complete reviews, see the Reviewers' Reports section.

  20. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    PubMed

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
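
    A toy version of the open-then-closed-loop idea can be sketched in a few lines: a gradient-based optimizer is run on an imperfect model, and its output seeds a gradient-free (Nelder-Mead) refinement against the "real" system. The two-parameter quadratic fidelity landscape and the detuning value are invented; the actual Ad-HOC protocol optimizes full control pulses against measured gate fidelities.

    ```python
    # Hybrid open/closed-loop optimization sketch under invented assumptions.
    import numpy as np
    from scipy.optimize import minimize

    TRUE_DETUNING = 0.13          # unknown to the model: the imprecise parameter

    def model_infidelity(x):      # open loop: gradient search on the simulation
        return (x[0] - 1.0) ** 2 + (x[1] - 0.5) ** 2

    def measured_infidelity(x):   # closed loop: the "experiment" differs slightly
        return (x[0] - 1.0 - TRUE_DETUNING) ** 2 + (x[1] - 0.5) ** 2

    # Step 1: gradient-based pulse optimization on the (imprecise) model.
    x_model = minimize(model_infidelity, x0=np.zeros(2), method="BFGS").x

    # Step 2: gradient-free calibration against measured fidelities.
    x_final = minimize(measured_infidelity, x0=x_model, method="Nelder-Mead").x
    print("model optimum:", x_model.round(3), "-> calibrated:", x_final.round(3))
    ```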

  1. The proposed terminology 'A(1c)-derived average glucose' is inherently imprecise and should not be adopted.

    PubMed

    Bloomgarden, Z T; Inzucchi, S E; Karnieli, E; Le Roith, D

    2008-07-01

    The proposed use of a more precise standard for glycated (A(1c)) and non-glycated haemoglobin would lead to an A(1c) value, when expressed as a percentage, that is lower than that currently in use. One approach advocated to address the potential confusion that would ensue is to replace 'HbA(1c)' with a new term, 'A(1c)-derived average glucose.' We review evidence from several sources suggesting that A(1c) is, in fact, inherently imprecise as a measure of average glucose, so that the proposed terminology should not be adopted.

  2. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    NASA Astrophysics Data System (ADS)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

    With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a limited number of experimental data points and expert judgments, we divide reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the open function of a cabin door affected by imprecise judgment corresponding to the distribution hypothesis.

  3. Application of fuzzy set and Dempster-Shafer theory to organic geochemistry interpretation

    NASA Technical Reports Server (NTRS)

    Kim, C. S.; Isaksen, G. H.

    1993-01-01

    An application of fuzzy sets and Dempster-Shafer theory (DST) to modeling the interpretational process of organic geochemistry data for predicting the maturity levels of oil and source-rock samples is presented. This was accomplished by (1) representing linguistic imprecision and imprecision associated with experience by fuzzy set theory, (2) capturing the probabilistic nature of imperfect evidence by DST, and (3) combining multiple pieces of evidence by utilizing John Yen's generalized Dempster-Shafer theory (GDST), which allows DST to deal with fuzzy information. The current prototype provides collective beliefs on the predicted levels of maturity by combining multiple pieces of evidence through GDST's rule of combination.
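
    Dempster's rule of combination, on which the GDST rule used above is built, can be stated compactly in code. The maturity hypotheses and mass values below are invented for illustration; Yen's generalized (fuzzy) rule is more involved than this crisp version.

    ```python
    # Crisp Dempster's rule of combination for two evidence sources.
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions given as {frozenset: mass} dicts."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass falling on the empty set
        # Renormalize by the non-conflicting mass.
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    IMM, MAT = frozenset({"immature"}), frozenset({"mature"})
    EITHER = IMM | MAT
    evidence_biomarker = {MAT: 0.6, EITHER: 0.4}          # invented masses
    evidence_pyrolysis = {MAT: 0.5, IMM: 0.2, EITHER: 0.3}
    print(dempster_combine(evidence_biomarker, evidence_pyrolysis))
    ```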

  4. Reducing preference reversals: The role of preference imprecision and nontransparent methods.

    PubMed

    Pinto-Prades, José Luis; Sánchez-Martínez, Fernando Ignacio; Abellán-Perpiñán, José María; Martínez-Pérez, Jorge E

    2018-05-16

    Preferences elicited with matching and choice usually diverge (as characterised by preference reversals), violating a basic rationality requirement, namely, procedure invariance. We report the results of an experiment that shows that preference reversals between matching (Standard Gamble in our case) and choice are reduced when the matching task is conducted using nontransparent methods. Our results suggest that techniques based on nontransparent methods are less influenced by biases (i.e., compatibility effects) than transparent methods. We also observe that imprecision of preferences influences the degree of preference reversals. The preference reversal phenomenon is less strong in subjects with more precise preferences. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Improving the estimation of flavonoid intake for study of health outcomes

    PubMed Central

    Dwyer, Johanna T.; Jacques, Paul F.; McCullough, Marjorie L.

    2015-01-01

    Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing conditions and processing, the challenges in laboratory quantification of flavonoids in foods, the incompleteness of flavonoid food composition tables, and the lack of adequate dietary assessment instruments. Steps to improve databases of bioactive compounds and to increase the accuracy and precision of the estimation of bioactive compound intakes in studies of health benefits and outcomes are suggested. PMID:26084477

  6. Do new concepts for deriving permissible limits for analytical imprecision and bias have any advantages over existing consensus?

    PubMed

    Petersen, Per Hyltoft; Sandberg, Sverre; Fraser, Callum G

    2011-04-01

    The Stockholm conference held in 1999 on "Strategies to set global analytical quality specifications (AQS) in laboratory medicine" reached a consensus and advocated the ubiquitous application of a hierarchical structure of approaches to setting AQS. This approach has been widely used over the last decade, although several issues remain unanswered. A number of new suggestions have been recently proposed for setting AQS. One of these recommendations is described by Haeckel and Wosniok in this issue of Clinical Chemistry and Laboratory Medicine. Their concept is to estimate the increase in false-positive results using conventional population-based reference intervals, the delta false-positive rate due to analytical imprecision and bias, and relate the results directly to the current analytical quality attained. Thus, the actual estimates in the laboratory for imprecision and bias are compared to the AQS. These values are classified in a ranking system according to the closeness to the AQS, and this combination is the new idea of the proposal. Other new ideas have been proposed recently. We wait, with great interest, as should others, to see if these newer approaches become widely used and worthy of incorporation into the hierarchy.

  7. The Design of a Chemical Virtual Instrument Based on LabVIEW for Determining Temperatures and Pressures.

    PubMed

    Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun

    2007-01-01

    A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as a PCI-DAQ card or the serial-port method, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules for data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research.

  8. The Design of a Chemical Virtual Instrument Based on LabVIEW for Determining Temperatures and Pressures

    PubMed Central

    Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun

    2007-01-01

    A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as a PCI-DAQ card or the serial-port method, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules for data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research. PMID:17671611

  9. Understanding the Physiological, Biomechanical, and Performance Effects of Body Armor Use

    DTIC Science & Technology

    2008-12-01

    ...force plates were collected through a single data acquisition (DAQ) system and were time-synchronized. ... For analysis purposes, it was scaled to the volunteer's body mass (ml/kg/min). For walking trials, the force-plate treadmill was set at a speed of ... familiarized with walking and running on the force-plate treadmill at these speeds. For familiarization, a volunteer first walked at 1.34 m/s without any ...

  10. Frequency analysis of uncertain structures using imprecise probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modares, Mehdi; Bergerson, Joshua

    2015-01-01

    Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods, along with discussions on their computational efficiency.
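
    The flavour of Interval Monte-Carlo Frequency Analysis can be conveyed with a single-degree-of-freedom stand-in for the finite element model: probabilistic variables are sampled, interval-valued (imprecise) variables are pushed to their bounds, and the result is an interval of probabilities rather than a single number. All quantities below are invented for illustration.

    ```python
    # Interval Monte Carlo sketch: random mass, interval-valued stiffness.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 10_000
    mass = rng.normal(2.0, 0.1, N)        # aleatory: mass ~ N(2.0 kg, 0.1 kg)
    k_lo, k_hi = 900.0, 1100.0            # epistemic: stiffness interval, N/m

    # Natural frequency f = sqrt(k/m) / (2*pi) is monotone in k, so the
    # interval endpoints give lower/upper frequency envelopes.
    f_at_klo = np.sqrt(k_lo / mass) / (2 * np.pi)
    f_at_khi = np.sqrt(k_hi / mass) / (2 * np.pi)

    # Bounds on P(f < 3.5 Hz): an interval of probabilities, not a point value.
    p_lower = (f_at_khi < 3.5).mean()     # fewest realisations below threshold
    p_upper = (f_at_klo < 3.5).mean()     # most realisations below threshold
    print(f"P(f < 3.5 Hz) lies in [{p_lower:.3f}, {p_upper:.3f}]")
    ```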

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, S.H.; Oxoby, G.J.; Trang, Q.H.

    The advent of the personal microcomputer provides a new tool for the debugging, calibration and monitoring of small-scale physics apparatus; e.g., a single detector being developed for a larger physics apparatus. With an appropriate interface these microcomputer systems provide a low-cost (1/3 the cost of a comparable minicomputer system), convenient, dedicated, portable system which can be used in a fashion similar to that of portable oscilloscopes. Here we describe an interface between the Apple computer and CAMAC which is now being used to study the detector for a Cerenkov ring-imaging device. The Apple is particularly well-suited to this application because of its ease of use, hi-resolution graphics, peripheral bus and documentation support.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxoby, G.J.; Trang, Q.H.; Williams, S.H.

    The advent of the personal microcomputer provides a new tool for the debugging, calibration and monitoring of small scale physics apparatus, e.g., a single detector being developed for a larger physics apparatus. With an appropriate interface these microcomputer systems provide a low cost (1/3 the cost of a comparable minicomputer system), convenient, dedicated, portable system which can be used in a fashion similar to that of portable oscilloscopes. Here, an interface between the Apple computer and CAMAC which is now being used to study the detector for a Cerenkov ring-imaging device is described. The Apple is particularly well-suited to this application because of its ease of use, hi-resolution graphics, peripheral bus and documentation support.

  13. Software handlers for process interfaces

    NASA Technical Reports Server (NTRS)

    Bercaw, R. W.

    1976-01-01

    The principles involved in the development of software handlers for custom interfacing problems are discussed. Handlers for the CAMAC standard are examined in detail. The types of transactions that must be supported have been established by standards groups, eliminating conflicting requirements arising out of different design philosophies and applications. Implementation of the standard handlers has been facilitated by standardization of hardware. The necessary local processors can be placed in the handler when it is written or at run time by means of input/output directives, or they can be built into a high-performance input/output processor. The full benefits of these process interfaces will only be realized when software requirements are incorporated uniformly into the hardware.

  14. SYRMEP front-end and read-out electronics

    NASA Astrophysics Data System (ADS)

    Arfelli, F.; Bonvicini, V.; Bravin, A.; Cantatore, G.; Castelli, E.; Cristaudo, P.; Di Michiel, M.; Longo, R.; Olivo, A.; Pani, S.; Pontoni, D.; Poropat, P.; Prest, M.; Rashevsky, A.; Tomasini, F.; Tromba, G.; Vacchi, A.; Vallazza, E.

    1998-02-01

    The SYRMEP approach to digital mammography implies the use of a monochromatic X-ray beam from a synchrotron source and a slot of superimposed silicon microstrip detectors as a scanning image receptor. The microstrips are read by 32-channel chips mounted on 7-layer hybrid circuits which receive control signals and operating voltages from a MASTER-SLAVE configuration of cards. The MASTER card is driven by the CIRM, a dedicated CAMAC module whose timing function can easily be excluded to obtain data-storage-only units connected to different MASTERs: this second-level modular expansion capability fully meets the needs of an electronics system able to follow the SYRMEP detector's growth to its final size of seven thousand channels.

  15. "Data Acquisition Systems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unterweger, Michael; Costrell, Louis

    2009-07-07

    This project involved support for Lou Costrell and myself in the development of IEEE and IEC standards for nuclear counting and data acquisition systems. Over the years, as a result of this support, Lou and I were able to attend standards meetings of IEEE and IEC, which led directly to the publication of many standards for NIM systems, FastBus and CAMAC. We also chaired several writing committees as well as ANSI N42 (Nuclear instrumentation), IEEE NIM (NIM standard), IEEE NID (NPSS nuclear instruments and detector) and IEC TC45 WG9 (Nuclear instrumentation). Through this support we were able to assure that the interests of the US and DOE were expressed and implemented in the various standards.

  16. The KLOE-2 high energy taggers

    NASA Astrophysics Data System (ADS)

    Curciarello, F.

    2017-06-01

    The precision measurement of the π0 → γγ width provides insight into low-energy QCD dynamics. A way to achieve the precision needed (1%) to test theoretical predictions is to study π0 production through γγ fusion in the e+e- → e+e-γ*γ* → e+e-π0 reaction. The KLOE-2 experiment, currently running at the DAΦNE facility in Frascati, aims to perform this measurement. For this purpose, new detectors that tag the final-state leptons have been installed along the DAΦNE beam line to reduce the background from phi-meson decays. The High Energy Tagger (HET) detector measures the deviation of leptons from their main orbit by determining their position and timing. The HET detectors are placed in Roman pots just at the exit of the DAΦNE dipole magnets, 11 m from the IP, on both the positron and electron sides. The HET sensitive area is made up of a set of 28 plastic scintillators. A dedicated DAQ electronic board, based on a Xilinx Virtex-5 FPGA, has been developed for this detector. It provides a MultiHit TDC with a time resolution of 550(1) ps and the ability to clearly identify the correct bunch crossing (ΔTbunch ~ 2.7 ns). The most relevant features of the KLOE-2 tagging system's operation, such as timing performance and stability, will be presented, together with the techniques used to determine the time overlap between the asynchronous KLOE and HET DAQs.
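
    The bunch-crossing identification quoted above (550 ps TDC resolution against ~2.7 ns bunch spacing) amounts to snapping hit times onto a bunch clock. A toy version, with a hypothetical harmonic number and invented hit times, might look like the sketch below; the actual HET firmware logic is more involved.

    ```python
    # Toy bunch-crossing identification from TDC hit times.
    BUNCH_SPACING_NS = 2.7        # approximate bunch spacing from the text

    def bunch_crossing(hit_time_ns, t0_ns=0.0, n_bunches=120):
        """Map a TDC hit time onto a bunch-crossing index. The harmonic
        number n_bunches is a hypothetical value, not taken from the paper."""
        return round((hit_time_ns - t0_ns) / BUNCH_SPACING_NS) % n_bunches

    for t in (0.1, 2.6, 5.5, 13.4):      # invented hit times in ns
        print(f"hit at {t:5.1f} ns -> bunch {bunch_crossing(t)}")
    ```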

  17. Instrumentation and optimization of intra-cavity fiber laser gas absorption sensing system

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Liu, Tiegen; Jiang, Junfeng; Liang, Xiao; Zhang, Yimo

    2011-11-01

    Detection of polluting, flammable, and explosive gases such as methane, acetylene, and carbon monoxide is very important in many areas, including the environmental, mining, and petrochemical industries. The intra-cavity gas absorption sensing technique (ICGAST), based on an erbium-doped fiber ring laser (EDFRL), is a novel method for detecting trace gases with high precision, and it has attracted considerable attention from many research institutes. The instrumentation and optimization of an ICGAST system are reported in this paper. The system consists of five parts: a variable-gain module, an intelligent frequency-selection module, a gas cell, a DAQ module, and a computer. The variable-gain module and the intelligent frequency-selection module are combined to establish the intra-cavity section of the ring laser. The gas cell is used as the gas sensor, the DAQ module acquires data synchronously, and gas demodulation is performed on the computer. The system was optimized by adjusting the sequence of the components. In an experimental simulation, the gas absorptance was increased fivefold after optimization, and the sensitivity enhancement factor can reach more than twenty. By using a Fabry-Perot (F-P) etalon, the absorption wavelength of the detected gas can be obtained with an error of less than 20 pm. The spectrum of the detected gas can be swept continuously to obtain several absorption lines in one loop. The coefficient of variation (CV), used to characterize the repeatability of gas concentration detection, can be less than 0.014.

  18. A Hardware-in-the-Loop Simulation Platform for the Verification and Validation of Safety Control Systems

    NASA Astrophysics Data System (ADS)

    Rankin, Drew J.; Jiang, Jin

    2011-04-01

    Verification and validation (V&V) of safety control system quality and performance is required prior to installing control system hardware within nuclear power plants (NPPs). Thus, the objective of the hardware-in-the-loop (HIL) platform introduced in this paper is to verify the functionality of these safety control systems. The developed platform provides a flexible simulated testing environment which enables synchronized coupling between the real and simulated world. Within the platform, National Instruments (NI) data acquisition (DAQ) hardware provides an interface between a programmable electronic system under test (SUT) and a simulation computer. Further, NI LabVIEW resides on this remote DAQ workstation for signal conversion and routing between Ethernet and standard industrial signals as well as for user interface. The platform is applied to the testing of a simplified implementation of Canadian Deuterium Uranium (CANDU) shutdown system no. 1 (SDS1) which monitors only the steam generator level of the simulated NPP. CANDU NPP simulation is performed on a Darlington NPP desktop training simulator provided by Ontario Power Generation (OPG). Simplified SDS1 logic is implemented on an Invensys Tricon v9 programmable logic controller (PLC) to test the performance of both the safety controller and the implemented logic. Prior to HIL simulation, platform availability of over 95% is achieved for the configuration used during the V&V of the PLC. Comparison of HIL simulation results to benchmark simulations shows good operational performance of the PLC following a postulated initiating event (PIE).

  19. iSANLA: intelligent sensor and actuator network for life science applications.

    PubMed

    Schloesser, Mario; Schnitzer, Andreas; Ying, Hong; Silex, Carmen; Schiek, Michael

    2008-01-01

    In the fields of neurological rehabilitation and neurophysiological research there is a strong need for miniaturized, multi-channel, battery-driven, wirelessly networked DAQ systems enabling real-time digital signal processing and feedback experiments. For the scientific investigation of the passive auditory-based 3D orientation of barn owls and for research on the vegetative locomotor coordination of Parkinson's disease patients during rehabilitation, we developed our 'intelligent Sensor and Actuator Network for Life science Application' (iSANLA) system. Implemented on the ultra-low-power MSP430 microcontroller, sample rates of up to 96 kHz have been realised for single-channel DAQ. The system includes lossless local data storage of up to 4 GB. With outer dimensions of 20 mm per edge and a weight of less than 15 g including the lithium-ion battery, our modularly designed sensor node is capable of recording up to eight channels at an 8 kHz sample rate each, and it provides sufficient computational power for digital signal processing, ready to start our first mobile experiments. For wireless mobility, a compact communication protocol based on the IEEE 802.15.4 wireless standard, with net data rates of up to 141 kbit/s, has been implemented. To merge the losslessly acquired data of the distributed iNODEs, a causality-preserving time-synchronization protocol has been developed. Hence the necessary time-synchronous start of data acquisition across a network of multiple sensors, with a precision better than one sample interval at the highest sample rate, has been realized.

  20. What are the bias, imprecision, and limits of agreement for finding the flexion-extension plane of the knee with five tibial reference lines?

    PubMed

    Brar, Abheetinder S; Howell, Stephen M; Hull, Maury L

    2016-06-01

    Internal-external (I-E) malrotation of the tibial component is associated with poor function after total knee arthroplasty (TKA). Kinematically aligned (KA) TKA uses a functionally defined flexion-extension (F-E) tibial reference line, which is parallel to the F-E plane of the extended knee, to set I-E rotation of the tibial component. Sixty-two three-dimensional bone models of normal knees were analyzed. We computed the bias (mean), imprecision (±standard deviation), and limits of agreement (mean ± 2 standard deviations) of the angle between five anatomically defined tibial reference lines used in mechanically aligned (MA) TKA and the F-E tibial reference line (+external). The following are the bias, imprecision, and limits of agreement of the angle between the F-E tibial reference line and 1) the tibial reference lines connecting the medial border (-2°±6°, -14° to 10°), medial 1/3 (6°±6°, -6° to 18°), and the most anterior point of the tibial tubercle (9°±4°, -1° to 17°) with the center of the posterior cruciate ligament, and 2) the tibial reference lines perpendicular to the posterior condylar axis of the tibia (-3°±4°, -11° to 5°), and a line connecting the centers of the tibial condyles (1°±4°, -7° to 9°). Based on these in vitro findings, it might be prudent to reconsider setting the I-E rotation of the tibial component to tibial reference lines whose bias, imprecision, and limits of agreement fall outside the -7° to 10° range associated with high function after KA TKA. Copyright © 2016 Elsevier B.V. All rights reserved.
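
    For readers unfamiliar with the three summary statistics used above, the following sketch computes bias (mean), imprecision (±1 standard deviation) and limits of agreement (mean ± 2 standard deviations) from a sample of angles; the data values are invented.

    ```python
    # Bias, imprecision, and limits of agreement from a sample of angles.
    import numpy as np

    angles = np.array([-2.0, 4.5, -7.1, 0.3, 3.8, -1.2, 5.0, -3.4])  # degrees

    bias = angles.mean()                     # mean difference
    imprecision = angles.std(ddof=1)         # +/- one sample standard deviation
    loa = (bias - 2 * imprecision, bias + 2 * imprecision)  # limits of agreement

    print(f"bias {bias:+.1f} deg, imprecision +/-{imprecision:.1f} deg, "
          f"limits of agreement {loa[0]:+.1f} to {loa[1]:+.1f} deg")
    ```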

  1. Near-Field Integration of a SiN Nanobeam and a SiO2 Microcavity for Heisenberg-Limited Displacement Sensing

    NASA Astrophysics Data System (ADS)

    Schilling, R.; Schütz, H.; Ghadimi, A. H.; Sudhir, V.; Wilson, D. J.; Kippenberg, T. J.

    2016-05-01

    Placing a nanomechanical object in the evanescent near field of a high-Q optical microcavity gives access to strong gradient forces and quantum-limited displacement readout, offering an attractive platform for both precision sensing technology and basic quantum optics research. Robustly implementing this platform is challenging, however, as it requires integrating optically smooth surfaces separated by ≲ λ/10. Here we describe an exceptionally high-cooperativity, single-chip optonanomechanical transducer based on a high-stress Si3N4 nanobeam monolithically integrated into the evanescent near field of a SiO2 microdisk cavity. Employing a vertical integration technique based on planarized sacrificial layers, we realize beam-disk gaps as small as 25 nm while maintaining mechanical Q·f products > 10^12 Hz and intrinsic optical Q ~ 10^7. The combination of low loss, small gap, and parallel-plane geometry results in radio-frequency flexural modes with vacuum optomechanical coupling rates of 100 kHz, single-photon cooperativities in excess of unity, and large zero-point frequency (displacement) noise amplitudes of 10 kHz (fm)/√Hz. In conjunction with the high power-handling capacity of SiO2 and low extraneous substrate noise, the transducer performs particularly well as a sensor, with recent deployment in a 4-K cryostat realizing a displacement imprecision 40 dB below that at the standard quantum limit (SQL) and an imprecision-backaction product < 5ℏ [Wilson et al., Nature (London) 524, 325 (2015)]. In this report, we provide a comprehensive description of device design, fabrication, and characterization, with an emphasis on extending Heisenberg-limited readout to room temperature. Towards this end, we describe a room-temperature experiment in which a displacement imprecision 32 dB below that at the SQL and an imprecision-backaction product < 60ℏ are achieved. Our results extend the outlook for measurement-based quantum control of nanomechanical oscillators and suggest an alternative platform for functionally integrated "hybrid" quantum optomechanics.
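
    For reference, the "imprecision-backaction products" quoted above are conventionally compared against the Heisenberg bound on continuous linear position measurement. The relation below is the standard textbook statement, not an equation reproduced from the paper:

    ```latex
    % Heisenberg bound on continuous linear position measurement:
    % the imprecision noise S_x^imp and backaction force noise S_F^ba satisfy
    \[
      \sqrt{S_x^{\mathrm{imp}}\, S_F^{\mathrm{ba}}} \;\ge\; \frac{\hbar}{2},
    \]
    % so a quoted product of 5\hbar sits a factor of 10 above the minimum
    % \hbar/2, and 60\hbar a factor of 120 above it.
    ```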

  2. Uncertainty and Risk Management in Cyber Situational Awareness

    NASA Astrophysics Data System (ADS)

    Li, Jason; Ou, Xinming; Rajagopalan, Raj

    Handling cyber threats unavoidably needs to deal with both uncertain and imprecise information. What we can observe as potential malicious activity can seldom give us 100% confidence on important questions we care about, e.g. which machines are compromised and what damage has been incurred. In security planning, we need information on how likely it is that a vulnerability can lead to a successful compromise, to better balance security against functionality, performance, and ease of use. This information is at best qualitative and is often vague and imprecise. In cyber situational awareness, we have to rely on such imperfect information to detect real attacks and to prevent an attack from happening through appropriate risk management. This chapter surveys existing technologies for handling uncertainty and risk management in cyber situational awareness.

  3. Transonic Wind Tunnel Modernization for Experimental Investigation of Dynamic Stall in a Wide Range of Mach Numbers by Plasma Actuators with Combined Energy/Momentum Action

    DTIC Science & Technology

    2015-01-02

    The wind tunnel is fitted with large windows for extended optical access to permit various non-intrusive and minimally intrusive diagnostics ... as well as new dielectric and semiconducting surface structures. The tunnel test section is built with dielectric walls to avoid electromagnetic ... [Figure-caption fragments: 14 - DAQ transducer cable; 15 - Pitot tube and hot-wire sensors, free-stream velocity data; Figure 3 - new test section, 250×360×600 mm³.]

  4. The BELLE DAQ system

    NASA Astrophysics Data System (ADS)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-10-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with an average trigger rate of up to 500 Hz at a typical event size of 30 kB. The system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel-shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to mass storage. The system has been in operation for physics data taking since June 1999 without serious problems.

  5. The DZERO Level 3 Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Angstadt, R.; Brooijmans, G.; Chapin, D.; Clements, M.; Cutts, D.; Haas, A.; Hauser, R.; Johnson, M.; Kulyavtsev, A.; Mattingly, S. E. K.; Mulders, M.; Padley, P.; Petravick, D.; Rechenmacher, R.; Snyder, S.; Watts, G.

    2004-06-01

    The DZERO experiment began Run II data-taking operation at Fermilab in spring 2001. The physics program of the experiment requires the Level 3 data acquisition (DAQ) system to handle average event sizes of 250 kilobytes at a rate of 1 kHz. The system routes and transfers event fragments of approximately 1-20 kilobytes from 63 VME crate sources to any of approximately 100 processing nodes. It is built upon a Cisco 6509 Ethernet switch, standard PCs, and commodity VME single board computers (SBCs). The system has been in full operation since spring 2002.
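
    The core routing problem such an event builder solves can be caricatured in a few lines: fragments from all 63 sources must converge on the single farm node assigned to each event. The round-robin assignment below is an invented stand-in for the real DZERO routing logic.

    ```python
    # Toy fragment routing: every source sends its piece of event N to the
    # node chosen for N; the event is complete when all fragments arrive.
    N_SOURCES, N_NODES = 63, 100

    def node_for_event(event_id: int) -> int:
        return event_id % N_NODES       # every source computes the same answer

    farm = {}                           # node -> {event_id: [fragments]}
    for event_id in range(4):
        node = node_for_event(event_id)
        for source in range(N_SOURCES):  # each VME crate ships its fragment
            farm.setdefault(node, {}).setdefault(event_id, []).append(source)
        # The event is complete once all 63 fragments reach one node.
        assert len(farm[node][event_id]) == N_SOURCES

    print({node: sorted(events) for node, events in farm.items()})
    ```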

  6. Significant figures.

    PubMed

    Badrick, Tony; Hickman, Peter E

    2008-08-01

    For consistency of reporting, the same number of significant figures should be used for results and reference intervals. The choice of the reporting interval should be based on analytical imprecision (measurement uncertainty).
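
    One hedged way to make the second point concrete is to derive the reporting step from the analytical standard deviation; the factor-of-4 granularity below is an illustrative choice, not a rule taken from the article.

    ```python
    # Derive a power-of-ten reporting step from analytical imprecision.
    import math

    def reporting_step(result, cv_percent, fraction=4):
        sd = result * cv_percent / 100.0
        return 10 ** round(math.log10(sd / fraction))  # nearest power of ten

    glucose, cv = 5.3, 2.5                 # mmol/L result, analytical CV in %
    step = reporting_step(glucose, cv)
    print(f"step {step} mmol/L -> report {round(glucose / step) * step:g} mmol/L")
    ```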

  7. Fusion of multi-tracer PET images for dose painting.

    PubMed

    Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle

    2014-10-01

    PET imaging with the FluoroDeoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazole (FMiso), have been proposed; they provide complementary information for the definition of BTVs. Our aim is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to noise and the partial volume effect, which lead, respectively, to uncertainty and imprecision in PET images, the segmentation and fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTVs from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels to deal with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are, respectively, reduced using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performance is evaluated on PET phantoms and on patient data with lung cancer. Quantitative results show good performance of our method compared with other methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Plasma-equivalent glucose at the point-of-care: evaluation of Roche Accu-Chek Inform and Abbott Precision PCx glucose meters.

    PubMed

    Ghys, Timothy; Goedhuys, Wim; Spincemaille, Katrien; Gorus, Frans; Gerlo, Erik

    2007-01-01

    Glucose testing at the bedside has become an integral part of the management strategy in diabetes and of the careful maintenance of normoglycemia in all patients in intensive care units. We evaluated two point-of-care glucometers for the determination of plasma-equivalent blood glucose. The Precision PCx and the Accu-Chek Inform glucometers were evaluated. Imprecision and bias relative to the Vitros 950 system were determined using protocols of the Clinical Laboratory Standards Institute (CLSI). The effects of low, normal, and high hematocrit levels were investigated. Interference by maltose was also studied. Within-run precision for both instruments ranged from 2-5%. Total imprecision was less than 5% except for the Accu-Chek Inform at the low level (2.9 mmol/L). Both instruments correlated well with the comparison instrument and showed excellent recovery and linearity. Both systems reported at least 95% of their values within zone A of the Clarke Error Grid, and both fulfilled the CLSI quality criteria. The more stringent goals of the American Diabetes Association, however, were not reached. Both systems showed negative bias at high hematocrit levels. Maltose interfered with the glucose measurements on the Accu-Chek Inform but not on the Precision PCx. Both systems showed satisfactory imprecision and were reliable in reporting plasma-equivalent glucose concentrations. The most stringent performance goals were however not met.

  9. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    NASA Astrophysics Data System (ADS)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory. It is about finding a path of minimum length between a specified pair of vertices. In any network, the weight of each edge is usually represented as a crisp real number, and these weights are then used in shortest-path calculations by deterministic algorithms. In practice, however, uncertainty is always encountered, and the weights of the network's edges may be uncertain and imprecise. In this paper, a modified algorithm which combines a heuristic shortest path method with a fuzzy approach is proposed for solving a network with imprecise arc lengths. Interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is compared with the total distance obtained from the traditional nearest neighbour heuristic algorithm. The results show that the modified algorithm not only provides a sequence of visited cities similar to the traditional approach, but also yields a smaller total distance than that calculated using the traditional approach. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
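
    A minimal sketch of a nearest-neighbour pass over triangular fuzzy arc lengths, ranked here by centroid defuzzification ((a+b+c)/3), is given below. The distance table is invented, and the paper's actual ranking of fuzzy numbers may differ.

    ```python
    # Nearest-neighbour TSP over triangular fuzzy arc lengths (a, b, c),
    # compared via centroid defuzzification.
    def centroid(tfn):
        a, b, c = tfn
        return (a + b + c) / 3.0

    # Symmetric fuzzy distance table between 4 cities; values invented.
    D = {
        (0, 1): (4, 5, 7), (0, 2): (8, 9, 10), (0, 3): (5, 6, 8),
        (1, 2): (2, 3, 4), (1, 3): (6, 7, 9),  (2, 3): (1, 2, 3),
    }
    def dist(i, j):
        return D[(min(i, j), max(i, j))]

    def nearest_neighbour(start, n=4):
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            nxt = min(unvisited, key=lambda j: centroid(dist(tour[-1], j)))
            tour.append(nxt)
            unvisited.remove(nxt)
        total = sum(centroid(dist(a, b))
                    for a, b in zip(tour, tour[1:] + [start]))
        return tour + [start], total

    print(nearest_neighbour(0))   # tour and its defuzzified total length
    ```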

  10. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    NASA Astrophysics Data System (ADS)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We give ways in which the introduced model can be applied to decision problems.

  11. Predator cognition permits imperfect coral snake mimicry.

    PubMed

    Kikuchi, David W; Pfennig, David W

    2010-12-01

    Batesian mimicry is often imprecise. An underexplored explanation for imperfect mimicry is that predators might not be able to use all dimensions of prey phenotype to distinguish mimics from models and thus permit imperfect mimicry to persist. We conducted a field experiment to test whether or not predators can distinguish deadly coral snakes (Micrurus fulvius) from nonvenomous scarlet kingsnakes (Lampropeltis elapsoides). Although the two species closely resemble one another, the order of colored rings that encircle their bodies differs. Despite this imprecise mimicry, we found that L. elapsoides that match coral snakes in other respects are not under selection to match the ring order of their model. We suggest that L. elapsoides have evolved only those signals necessary to deceive predators. Generally, imperfect mimicry might suffice if it exploits limitations in predator cognitive abilities.

  12. Movement planning reflects skill level and age changes in toddlers

    PubMed Central

    Chen, Yu-ping; Keen, Rachel; Rosander, Kerstin; von Hofsten, Claes

    2010-01-01

    Kinematic measures of children’s reaching were found to reflect stable differences in skill level for planning for future actions. Thirty-five toddlers (18–21 months) were engaged in building block towers (precise task) and in placing blocks into an open container (imprecise task). Sixteen children were re-tested on the same tasks a year later. Longer deceleration as the hand approached the block for pickup was found in the tower task compared to the imprecise task, indicating planning for the second movement. More skillful toddlers who could build high towers had a longer deceleration phase when placing blocks on the tower than toddlers who built low towers. Kinematic differences between the groups remained a year later when all children could build high towers. PMID:21077868

  13. Effect of chemical kinetics uncertainties on calculated constituents in a tropospheric photochemical model

    NASA Technical Reports Server (NTRS)

    Thompson, Anne M.; Stewart, Richard W.

    1991-01-01

    Random photochemical reaction rates are employed in a 1D photochemical model to examine uncertainties in tropospheric concentrations and thereby determine critical kinetic processes and significant correlations. Monte Carlo computations are used to simulate different chemical environments and their related imprecisions. The most critical processes are the primary photodissociation of O3 (which initiates ozone destruction) and NO2 (which initiates ozone formation), and the OH/methane reaction is significant. Several correlations and anticorrelations between species are discussed, and the ozone/transient OH correlation is examined in detail. One important result of the modeling is that estimates of global OH are generally about 25 percent uncertain, limiting the precision of photochemical models. Techniques for reducing the imprecision are discussed which emphasize the use of species and radical species measurements.

  14. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since tasks in hard real-time systems have stringent timing constraints, redundancy and masking techniques are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
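
    The imprecise computation model referred to above splits each task into a mandatory part, which must always complete, and an optional part that only improves result quality and can be shed under overload. A toy, non-periodic version with an invented task set:

    ```python
    # Imprecise-computation sketch: mandatory parts always run; optional
    # parts are granted time only from leftover slack.
    tasks = [  # (name, mandatory_ms, optional_ms), all values invented
        ("sensor_fusion", 3, 4),
        ("guidance",      2, 6),
        ("telemetry",     1, 2),
    ]
    budget_ms = 10                       # processor time available this frame

    mandatory = sum(m for _, m, _ in tasks)
    assert mandatory <= budget_ms, "mandatory parts must always fit"
    slack = budget_ms - mandatory

    schedule = []
    for name, m, opt in tasks:           # greedily grant optional time
        granted = min(opt, slack)
        slack -= granted
        schedule.append((name, m + granted))

    print(schedule)   # quality degrades gracefully; no mandatory part dropped
    ```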

  15. The CMS Data Acquisition - Architectures for the Phase-2 Upgrade

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. F.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Zejdl, P.

    2017-10-01

    The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1 (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for the phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for the phase-2, taking into account the projected evolution of high speed network fabrics for event building and distribution, and the anticipated performance of general purpose CPUs. Implications on hardware and infrastructure requirements for the DAQ “data center” are analysed. Emerging technologies for data reduction are considered. Novel possible approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied.

  16. Real-Time Processing System for the JET Hard X-Ray and Gamma-Ray Profile Monitor Enhancement

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana M.; Pereira, Rita C.; Neto, André; Valcárcel, Daniel F.; Alves, Diogo; Sousa, Jorge; Carvalho, Bernardo B.; Kiptily, Vasily; Syme, Brian; Blanchard, Patrick; Murari, Andrea; Correia, Carlos M. B. A.; Varandas, Carlos A. F.; Gonçalves, Bruno

    2014-06-01

    The Joint European Torus (JET) is currently undertaking an enhancement program which includes tests of relevant diagnostics with real-time processing capabilities for the International Thermonuclear Experimental Reactor (ITER). Accordingly, a new real-time processing system was developed and installed at JET for the gamma-ray and hard X-ray profile monitor diagnostic. The new system is connected to 19 CsI(Tl) photodiodes in order to obtain the line-integrated profiles of the gamma-ray and hard X-ray emissions. Moreover, it was designed to overcome the former data acquisition (DAQ) limitations while exploiting the required real-time features. The new DAQ hardware, based on the Advanced Telecommunication Computer Architecture (ATCA) standard, includes reconfigurable digitizer modules with embedded field-programmable gate array (FPGA) devices capable of acquiring and simultaneously processing data in real time from the 19 detectors. A suitable algorithm was developed and implemented in the FPGAs, which are able to deliver the corresponding energy of the acquired pulses. The processed data are sent periodically, during the discharge, through the JET real-time network and stored in the JET scientific databases at the end of the pulse. The interface between the ATCA digitizers, the JET control and data acquisition system (CODAS), and the JET real-time network is provided by the Multithreaded Application Real-Time executor (MARTe). The work developed allowed attaining two of the major milestones required by next-generation fusion devices: the ability to process, and simultaneously supply, high-volume data rates in real time.

  17. The CMS Data Acquisition - Architectures for the Phase-2 Upgrade

    DOE PAGES

    Andre, J-M; Behrens, U.; Branson, J.; ...

    2017-10-01

    The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1 (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for the phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for the phase-2, taking into account the projected evolution of high speed network fabrics for event building and distribution, and the anticipated performance of general purpose CPUs. Implications on hardware and infrastructure requirements for the DAQ “data center” are analysed. Emerging technologies for data reduction are considered. Novel possible approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied.

  18. Joint analysis of epistemic and aleatory uncertainty in stability analysis for geo-hazard assessments

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Verdel, Thierry

    2017-04-01

    Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF: if SF is below some specified threshold, failure is possible. The objective of the stability analysis is then to estimate the failure probability P for SF to be below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced; however, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by, e.g., increasing the number of tests (lab tests or site surveys), improving the measurement methods, evaluating the calculation procedure with model tests, or confronting more information sources (expert opinions, data from the literature, etc.). Uncertainty treatment in stability analysis is usually restricted to the probabilistic framework to represent both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach can be debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations with a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool is then investigated, namely possibility distributions (e.g., Baudrit et al., 2007), for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively, the extraction ratio and the cliff geometry). References: Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.

  19. Results from 15 years of quality surveillance for a National Indigenous Point-of-Care Testing Program for diabetes.

    PubMed

    Shephard, Mark; Shephard, Anne; McAteer, Bridgit; Regnier, Tamika; Barancek, Kristina

    2017-12-01

    Diabetes is a major health problem for Australia's Aboriginal and Torres Strait Islander peoples. Point-of-care testing for haemoglobin A1c (HbA1c) has been the cornerstone of a long-standing program (QAAMS) to manage glycaemic control in Indigenous people with diabetes and, recently, to diagnose diabetes. The QAAMS quality management framework includes monthly testing of quality control (QC) and external quality assurance (EQA) samples. Key performance indicators of quality include imprecision (coefficient of variation [CV%]) and percentage of acceptable results. This paper reports on the past 15 years of quality testing in QAAMS and examines the performance of HbA1c POC testing at the 6.5% cut-off recommended for diagnosis. The total number of HbA1c EQA results submitted from 2002 to 2016 was 29,093. The median imprecision for EQA testing by QAAMS device operators averaged 2.81% (SD 0.50; range 2.2 to 3.9%) from 2002 to 2016 and 2.44% (SD 0.22; range 2.2 to 2.9%) from 2009 to 2016. No significant difference was observed between the median imprecision achieved in QAAMS and by Australasian laboratories from 2002 to 2016 (p=0.05; two-tailed paired t-test) and from 2009 to 2016 (p=0.17; two-tailed paired t-test). For QC testing from 2009 to 2016, imprecision averaged 2.5% and 3.0% for the two levels of QC tested. The percentage of acceptable results averaged 90% for QA testing from 2002 to 2016 and 96% for QC testing from 2009 to 2016. The DCA Vantage was able to measure a patient sample and an EQA sample with an HbA1c value close to 6.5% both accurately and precisely. HbA1c POC testing in QAAMS has remained analytically sound, matched the quality achieved by Australasian laboratories and met profession-derived analytical goals for 15 years. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  20. Evaluation of the Dacos 3.0 analyser.

    PubMed

    Pons, J F; Alumá, A; Antoja, F; Biosca, C; Alsina, M J; Galimany, R

    1990-01-01

    The selective multitest Coulter Dacos 3.0 analyser was evaluated according to the guidelines of the Comisión de Instrumentación de la Sociedad Española de Química Clínica and of the European Committee for Clinical Laboratory Standards. The evaluation was performed in four steps: examination of the analytical units; evaluation of routine working; study of interferences; and assessment of practicability. The evaluation included a photometric study. The inaccuracy is acceptable at 340 nm and 420 nm, and the imprecision at absorbances from 0.05 to 2.00 ranged from 0.06 to 0.28% at 340 nm and from 0.06 to 0.08% at 420 nm. The linearity showed some dispersion at low absorbance for PNP at 420 nm, and the drift was negligible. The imprecision of the pipette delivery system, the temperature control system and the washing system was satisfactory. In routine work conditions, seven analytical methods were studied: glucose, creatinine, iron, total protein, AST, ALP and calcium. Within-run imprecision ranged, at low concentrations, from 0.9% (CV) for glucose to 7.6% (CV) for iron; at medium concentrations, from 0.7% (CV) for total protein to 5.2% (CV) for creatinine; and at high concentrations, from 0.6% (CV) for glucose to 3.9% (CV) for ALP. Between-run imprecision at low concentrations ranged from 1.4% (CV) for glucose to 15.1% (CV) for iron; at medium concentrations, from 1.2% (CV) for total protein to 6.7% (CV) for iron; and at high concentrations, from 1.2% (CV) for AST to 5.7% (CV) for iron. No contamination was found in the sample carry-over study. Some contamination was found in the reagent carry-over study (total protein due to iron and calcium reagents). Relative inaccuracy is good for all the constituents assayed. Only LDH (high and low levels) and urate (low level) showed a weak, negative interference caused by turbidity, and gamma-GT (high level) and amylase, bilirubin and ALP (two levels) showed a negative interference caused by haemolysis.

  1. Smart pillow for heart-rate monitoring using a fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Chen, Zhihao; Teo, Ju Teng; Ng, Soon Huat; Yim, Huiqing

    2011-03-01

    In this paper, we propose and demonstrate a new method to monitor heart rate using a fiber-optic microbending-based sensor for in-bed, non-intrusive monitoring. The sensing system consists of a transmitter, a receiver, a sensor mat, a National Instruments (NI) data acquisition (DAQ) card and a computer for signal processing. The sensor mat is embedded inside a commercial pillow. The heart rate measurement system shows an accuracy of ±2 beats, which has been successfully demonstrated in a field trial. The key technological advantage of our system is its ability to measure heart rate with no preparation and minimal compliance by the patient.
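
    Once the DAQ card has digitized the sensor output, heart-rate extraction is essentially peak detection on the waveform. A minimal sketch (the sampling rate, the synthetic signal standing in for the microbend-sensor output, and the peak-detection thresholds are all assumptions):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                                   # assumed DAQ sampling rate (Hz)
    t = np.arange(0, 30, 1 / fs)
    # Synthetic stand-in for the optical-intensity signal (~72 bpm plus noise):
    signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

    # Require beats to be at least 0.4 s apart (i.e. below 150 bpm).
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs), prominence=0.5)
    heart_rate_bpm = 60.0 / np.diff(peaks / fs).mean()
    print(f"Estimated heart rate: {heart_rate_bpm:.1f} bpm")
    ```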

  2. β-Decay Studies at TRIUMF and Future Opportunities with GRIFFIN

    NASA Astrophysics Data System (ADS)

    Garnsworthy, A. B.; Ball, G. C.; Bender, P. C.; Churchman, R.; Close, A.; Glister, J.; Hackman, G.; Ketelhut, S.; Krücken, R.; Sjue, S. K. L.; Tardiff, E.; Garrett, P. E.; Demand, G. A.; Dunlop, R.; Finlay, P.; Hadinia, B.; Leach, K.; Michetti-Wilson, J.; Rand, E. T.; Svensson, C. E.; Andreoiu, C.; Ashley, R.; Chester, A.; Cross, D.; Starosta, K.; Wang, Z.; Zganjar, E. F.

    2013-03-01

    The 8π spectrometer at TRIUMF-ISAC-I and a powerful suite of ancillary detectors support a wide program of research in the fields of nuclear structure, nuclear astrophysics and fundamental symmetries with low-energy radioactive beams. Work is underway to upgrade the Ge detectors and DAQ aspects of the facility to a new state-of-the-art γ-ray spectrometer, GRIFFIN, which will become operational in 2014. GRIFFIN will constitute an increase in the γ-γ efficiency of close to a factor of 300 over the current setup and extend the capabilities for investigations of exotic nuclei produced at ISAC.

  3. Readout and trigger for the AFP detector at ATLAS experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocian, M.

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016, two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running ArchLinux. In this paper we give an overview of the AFP detector and the commissioning steps taken to integrate it with the ATLAS TDAQ. First performance results are also presented.

  4. ALICE inner tracking system readout electronics prototype testing with the CERN "Giga Bit Transceiver"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schambach, Joachim; Rossewij, M. J.; Sielewicz, K. M.

    The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon-pixel-based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA-based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. Furthermore, this contribution describes laboratory and radiation testing results with this prototype board set.

  5. Readout and trigger for the AFP detector at ATLAS experiment

    DOE PAGES

    Kocian, M.

    2017-01-25

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016, two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running ArchLinux. In this paper we give an overview of the AFP detector and the commissioning steps taken to integrate it with the ATLAS TDAQ. First performance results are also presented.

  6. ALICE inner tracking system readout electronics prototype testing with the CERN "Giga Bit Transceiver"

    DOE PAGES

    Schambach, Joachim; Rossewij, M. J.; Sielewicz, K. M.; ...

    2016-12-28

    The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon-pixel-based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA-based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. Furthermore, this contribution describes laboratory and radiation testing results with this prototype board set.

  7. Design of the ANTARES LCM-DAQ board test bench using a FPGA-based system-on-chip approach

    NASA Astrophysics Data System (ADS)

    Anvar, S.; Kestener, P.; Le Provost, H.

    2006-11-01

    The System-on-Chip (SoC) approach consists in using state-of-the-art FPGA devices with embedded RISC processor cores, high-speed differential LVDS links and ready-to-use multi-gigabit transceivers, allowing the development of compact systems with a substantial number of I/O channels. The required performance is obtained through a subtle separation of tasks between closely cooperating programmable hardware logic and a user-friendly software environment. We report on our experience in using the SoC approach for designing the production test bench of the off-shore readout system for the ANTARES neutrino experiment.

  8. ALICE inner tracking system readout electronics prototype testing with the CERN ``Giga Bit Transceiver''

    NASA Astrophysics Data System (ADS)

    Schambach, J.; Rossewij, M. J.; Sielewicz, K. M.; Aglieri Rinella, G.; Bonora, M.; Ferencei, J.; Giubilato, P.; Vanat, T.

    2016-12-01

    The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon-pixel-based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA-based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. This contribution describes laboratory and radiation testing results with this prototype board set.

  9. Neurochemical evidence that cocaine- and amphetamine-regulated transcript (CART) 55-102 peptide modulates the dopaminergic reward system by decreasing the dopamine release in the mouse nucleus accumbens.

    PubMed

    Rakovska, Angelina; Baranyi, Maria; Windisch, Katalin; Petkova-Kirova, Polina; Gagov, Hristo; Kalfin, Reni

    2017-09-01

    CART (Cocaine- and Amphetamine-Regulated Transcript) peptide is a neurotransmitter naturally occurring in the CNS and found mostly in the nucleus accumbens, ventral tegmental area, ventral pallidum, amygdalae and striatum, brain regions associated with drug addiction. In the nucleus accumbens, known for its significant role in motivation, pleasure, reward and reinforcement learning, CART peptide inhibits cocaine- and amphetamine-induced dopamine-mediated increases in locomotor activity and behavior, suggesting a CART peptide interaction with the dopaminergic system. Thus, in the present study, we examined the effect of CART (55-102) peptide on the basal, electrical field stimulation-evoked (EFS-evoked) (30 V, 2 Hz, 120 shocks) and returning basal dopamine (DA) release and on the release of the DA metabolites 3,4-dihydroxyphenyl acetaldehyde (DOPAL), 3,4-dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA), 3,4-dihydroxyphenylethanol (DOPET) and 3-methoxytyramine (3-MT), as well as on norepinephrine (NE) and dopamine-o-quinone (Daq), in the isolated mouse nucleus accumbens, in a preparation in which any CART peptide effects on the dendrites or soma of ventral tegmental projection neurons have been excluded. We further extended our study to assess the effect of CART (55-102) peptide on the basal cocaine-induced release of dopamine and its metabolites DOPAL, DOPAC, HVA, DOPET and 3-MT, as well as on NE and Daq. To analyze the amount of [3H]dopamine, dopamine metabolites, Daq and NE in the nucleus accumbens superfusate, high-pressure liquid chromatography (HPLC) coupled with electrochemical, UV and radiochemical detection was used. CART (55-102) peptide, 0.1μM, added alone, exerted: (i) a significant decrease in the basal and EFS-evoked levels of extracellular dopamine; (ii) a significant increase in the EFS-evoked and returning basal levels of the dopamine metabolites DOPAC and HVA, major products of dopamine degradation; and (iii) a significant decrease in the returning basal levels of DOPET. At the same concentration, 0.1μM, CART (55-102) peptide did not have any effect on the release of noradrenaline. In the presence of CART (55-102) peptide, 0.1μM, the effect of cocaine, 30μM, on the basal dopamine release was inhibited and the effect on the basal DOPAC release substantially increased. To our knowledge, our findings are the first to show direct neurochemical evidence that CART (55-102) peptide plays a neuromodulatory role in the dopaminergic reward system by decreasing dopamine in the mouse nucleus accumbens and by attenuating cocaine-induced effects on dopamine release. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Advanced Operating System Technologies

    NASA Astrophysics Data System (ADS)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the data acquisition and control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and current research in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, video on demand and distributed multimedia applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering areas. The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and control system suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, like POSIX, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system, suitable for simulation and large-scale verification.

  11. COMMUNICATING PROBABILISTIC RISK OUTCOMES TO RISK MANAGERS

    EPA Science Inventory

    Increasingly, risk assessors are moving away from simple deterministic assessments to probabilistic approaches that explicitly incorporate ecological variability, measurement imprecision, and lack of knowledge (collectively termed "uncertainty"). While the new methods provide an...

  12. Knowledge representation in fuzzy logic

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1989-01-01

    The author presents a summary of the basic concepts and techniques underlying the application of fuzzy logic to knowledge representation. He then describes a number of examples relating to its use as a computational system for dealing with uncertainty and imprecision in the context of knowledge, meaning, and inference. It is noted that one of the basic aims of fuzzy logic is to provide a computational framework for knowledge representation and inference in an environment of uncertainty and imprecision. In such environments, fuzzy logic is effective when the solutions need not be precise and/or it is acceptable for a conclusion to have a dispositional rather than categorical validity. The importance of fuzzy logic derives from the fact that there are many real-world applications which fit these conditions, especially in the realm of knowledge-based systems for decision-making and control.
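
    As a concrete illustration of the ideas summarized above, a minimal fuzzy-set sketch (the linguistic variable, set parameters and rule are invented for illustration and are not from the paper):

    ```python
    def tri(x, a, b, c):
        """Membership of x in a triangular fuzzy set with support [a, c] and core at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Fuzzy sets for the linguistic variable "temperature" (assumed parameters).
    def cool(x): return tri(x, 10, 18, 24)
    def warm(x): return tri(x, 20, 27, 34)

    x = 22.0
    print(f"cool({x}) = {cool(x):.2f}, warm({x}) = {warm(x):.2f}")
    # A rule "IF temperature is warm THEN fan is fast" fires with strength warm(x);
    # max-min inference weights each rule's consequent by such degrees, so a
    # conclusion holds to a degree (dispositionally) rather than categorically.
    ```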

  13. Self-calibration for lensless color microscopy.

    PubMed

    Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry

    2017-05-01

    Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. Because such setups target cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of its dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.

  14. Three-quarter views are subjectively good because object orientation is uncertain.

    PubMed

    Niimi, Ryosuke; Yokosawa, Kazuhiko

    2009-04-01

    Because the objects that surround us are three-dimensional, their appearance and our visual perception of them change depending on an object's orientation relative to a viewpoint. One of the most remarkable effects of object orientation is that viewers prefer three-quarter views over others, such as front and back, but the exact source of this preference has not been firmly established. We show that object orientation perception of the three-quarter view is relatively imprecise and that this imprecision is related to preference for this view. Human vision is largely insensitive to variations among different three-quarter views (e.g., 45 degrees vs. 50 degrees); therefore, the three-quarter view is perceived as if it corresponds to a wide range of orientations. In other words, it functions as the typical representation of the object.

  15. Grey fuzzy optimization model for water quality management of a river system

    NASA Astrophysics Data System (ADS)

    Karmakar, Subhankar; Mujumdar, P. P.

    2006-07-01

    A grey fuzzy optimization model is developed for the water quality management of a river system to address the uncertainty involved in fixing the membership functions for the different goals of the Pollution Control Agency (PCA) and the dischargers. The present model, the Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of the PCA and the dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels is modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of the membership functions, the membership functions themselves are treated as fuzzy in the model and the membership parameters are expressed as interval grey numbers, i.e. closed and bounded intervals with known lower and upper bounds but unknown distribution information. The model provides flexibility for the PCA and the dischargers to specify their aspirations independently, as the membership parameters for the different membership functions, specified for the different imprecise goals, are interval grey numbers in place of deterministic real numbers. In the final solution, optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers (a toy illustration of interval grey arithmetic follows below). This enhances the flexibility and applicability in decision-making, as the decision-maker gets a range of optimal solutions for fixing the final decision scheme, considering the technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with a case study of the Tunga-Bhadra river system in India.
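
    An interval grey number, as used above, is just a closed interval whose inner distribution is unknown; its arithmetic keeps the bounds explicit throughout a computation. A toy sketch (not the GFWLAM formulation; the removal levels are invented):

    ```python
    class Grey:
        """Interval grey number [lo, hi]: bounds known, distribution unknown."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Grey(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            p = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
            return Grey(min(p), max(p))
        def __repr__(self):
            return f"[{self.lo:g}, {self.hi:g}]"

    # Hypothetical optimal fractional removal levels for two dischargers:
    x1, x2 = Grey(0.35, 0.55), Grey(0.60, 0.80)
    print(x1 + x2, x1 * x2)   # results remain intervals, preserving the imprecision
    ```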

  16. High skill in low-frequency climate response through fluctuation dissipation theorems despite structural instability.

    PubMed

    Majda, Andrew J; Abramov, Rafail; Gershgorin, Boris

    2010-01-12

    Climate change science focuses on predicting the coarse-grained, planetary-scale, long-time changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. The predictions of climate change science are carried out through comprehensive computational atmospheric and oceanic simulation models, which necessarily parameterize physical features such as clouds, sea ice cover, etc. Recently, it has been suggested that there is irreducible imprecision in such climate models that manifests itself as structural instability in climate statistics and which can significantly hamper the skill of computer models for climate change. A systematic approach to deal with this irreducible imprecision is advocated through algorithms based on the Fluctuation Dissipation Theorem (FDT). There are important practical and computational advantages for climate change science when a skillful FDT algorithm is established: the FDT response operator can be utilized directly for multiple climate change scenarios, multiple changes in forcing and other parameters such as damping, and for inverse modelling, without the need to run the complex climate model in each individual case. The high skill of FDT in predicting climate change, despite structural instability, is developed in an unambiguous fashion using mathematical theory as guidelines in three different test models: a generic class of analytical models mimicking the dynamical core of the computer climate models, reduced stochastic models for low-frequency variability, and models with a significant new type of irreducible imprecision involving many fast, unstable modes.
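
    Schematically, the theorem invoked above expresses the linear response of a climate statistic to a small forcing purely in terms of unperturbed equilibrium statistics; in its standard textbook form (stated here from the general literature, not quoted from this paper):

    ```latex
    % Linear response of an observable A to a small forcing perturbation \delta f:
    \delta\langle A\rangle(t) = \int_0^{t} R(t-s)\,\delta f(s)\,\mathrm{d}s,
    \qquad
    R(t) = \left\langle A(u_t)\left[-\nabla_u \ln p_{\mathrm{eq}}(u_0)\right]^{\top}\right\rangle,
    % which, for a Gaussian equilibrium density with zero mean and covariance C,
    % reduces to the quasi-Gaussian response operator
    R(t) = \left\langle A(u_t)\,u_0^{\top}\right\rangle C^{-1}.
    ```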

  17. Virtual tape measure for the operating microscope: system specifications and performance evaluation.

    PubMed

    Kim, M Y; Drake, J M; Milgram, P

    2000-01-01

    The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.

  18. An Expert System for Identification of Minerals in Thin Section.

    ERIC Educational Resources Information Center

    Donahoe, James Louis; And Others

    1989-01-01

    Discusses a computer database which includes optical properties of 142 minerals. Uses fuzzy logic to identify minerals from incomplete and imprecise information. Written in Turbo PASCAL for MS-DOS with 128K. (MVL)

  19. Digitally controlled twelve-pulse firing generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berde, D.; Ferrara, A.A.

    1981-01-01

    Control System Studies for the Tokamak Fusion Test Reactor (TFTR) indicate that accurate thyristor firing in the AC-to-DC conversion system is required in order to achieve good regulation of the various field currents. Rapid update and exact firing angle control are required to avoid instabilities, large eddy currents, or parasitic oscillations. The Prototype Firing Generator was designed to satisfy these requirements. To achieve the required ±0.77° firing accuracy, a three-phase phase-locked loop reference was designed; otherwise, the Firing Generator employs digital circuitry. The unit, housed in a standard CAMAC crate, operates under microcomputer control. Functions are performed under program control, which resides in nonvolatile read-only memory. Communication with the CICADA control system is provided via an 11-bit parallel interface.

  20. A real-time data-acquisition and analysis system with distributed UNIX workstations

    NASA Astrophysics Data System (ADS)

    Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.

    1996-02-01

    A compact data-acquisition system using three RISC/UNIX™ workstations (SUN™/SPARCstation™) with real-time capabilities of monitoring and analysis has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) with a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte × 300 Hz. Another workstation, which has real-time capability for run monitoring, gets the data through a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the event. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two-month-long experiment.

  1. Data acquisition and processing system for the HT-6M tokamak fusion experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Y.T.; Liu, G.C.; Pang, J.Q.

    1987-08-01

    This paper describes a high-speed data acquisition and processing system which has been successfully operated on the HT-6M tokamak fusion experimental device. The system collects, archives and analyzes up to 512 kilobytes of data from each shot of the experiment. A shot lasts 50-150 milliseconds and occurs every 5-10 minutes. The system consists of two PDP-11/24 computer systems. One PDP-11/24 is used for real-time data taking and on-line data analysis; it is based upon five CAMAC crates organized into a parallel branch. The other PDP-11/24 is used for off-line data processing. Both the data acquisition software RSX-DAS and the data processing software RSX-DAP have modular, multi-tasking and concurrent processing features.

  2. A data transmission method for particle physics experiments based on Ethernet physical layer

    NASA Astrophysics Data System (ADS)

    Huang, Xi-Ru; Cao, Ping; Zheng, Jia-Jun

    2015-11-01

    Due to its advantages of universality, flexibility and high performance, fast Ethernet is widely used in readout system design for modern particle physics experiments. However, Ethernet is usually used together with the TCP/IP protocol stack, which makes it difficult to implement readout systems because designers have to use an operating system to process this protocol. Furthermore, TCP/IP degrades the transmission efficiency and real-time performance. To maximize the performance of Ethernet in physics experiment applications, a data readout method based on the physical layer (PHY) is proposed. In this method, TCP/IP is replaced with a customized and simple protocol, which makes it easier to implement. On each readout module, data from the front-end electronics is first fed into an FPGA for protocol processing and then sent out to a PHY chip controlled by this FPGA for transmission. This data path is fully implemented in hardware. On the side of the data acquisition system (DAQ), however, the absence of a standard protocol causes problems for network-related applications. To solve this problem, in the operating system kernel space, data received by the network interface card is redirected from the traditional flow to a specified memory space by a customized program. This memory space can easily be accessed by applications in user space. For the purpose of verification, a prototype system has been designed and implemented. Preliminary test results show that this method can meet the requirements of data transmission from the readout module to the DAQ in an efficient and simple manner. Supported by National Natural Science Foundation of China (11005107) and Independent Projects of State Key Laboratory of Particle Detection and Electronics (201301)
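
    The kernel-space redirection described above is platform-specific, but the core idea of receiving frames outside the TCP/IP stack can be illustrated on Linux with an AF_PACKET raw socket (requires root; the interface name and the framing of the custom payload are assumptions):

    ```python
    import socket

    ETH_P_ALL = 0x0003  # receive frames of every Ethernet protocol type

    # Layer-2 raw socket: frames arrive before any TCP/IP processing (Linux only).
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind(("eth0", 0))  # hypothetical interface wired to the readout module

    for _ in range(10):     # grab a few frames for illustration
        frame = sock.recv(65535)
        dst, src, ethertype = frame[0:6], frame[6:12], frame[12:14]
        payload = frame[14:]            # custom front-end protocol payload
        print(f"{src.hex(':')} -> {dst.hex(':')} type=0x{ethertype.hex()} len={len(payload)}")
    ```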

  3. Development and validation of the Overlap Muon Track Finder for the CMS experiment

    NASA Astrophysics Data System (ADS)

    Dobosz, J.; Mietki, P.; Zawistowski, K.; Żarnecki, G.

    2016-09-01

    This article describes the authors' contribution to the upgrade and performance analysis of the Level-1 Muon Trigger of the CMS experiment. The authors are students of the University of Warsaw and the Gdansk University of Technology, collaborating with the CMS Warsaw Group. The article summarises the students' work presented during the student session of the Workshop XXXVIII-th IEEE-SPIE Joint Symposium Wilga 2016. In the first section the CMS experiment is briefly described and the importance of the trigger system is explained; the basic difference between the old muon trigger strategy and the upgraded one is also shown. The second section is devoted to the Overlap Muon Track Finder (OMTF), one of the crucial components of the Level-1 Muon Trigger, and describes its algorithm. The third section discusses one aspect of event selection: the cut on the muon transverse momentum pT. Sometimes a physical muon with pT above a certain threshold is unnecessarily cut while a physical muon with lower pT survives; to improve the pT selection, a modified algorithm was proposed and its performance was studied. One of the features of the OMTF is that one physical muon often results in several muon candidates. The Ghost-Buster algorithm is designed to eliminate the surplus candidates (a toy version of this deduplication idea is sketched below); the fourth section discusses this algorithm and its performance on different data samples. In the fifth section the Local Data Acquisition System (Local DAQ), which supports initial system commissioning, is briefly described, together with the tests done with the OMTF Local DAQ. The sixth section describes the development of a web application used for the control and monitoring of CMS electronics; the application provides access to a graphical user interface for manual control and the connection to the CMS hierarchical Run Control.
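
    The firmware ghost-busting logic itself is not spelled out in the abstract; as a rough illustration of the idea (keep the best-quality candidate among candidates that are close in the trigger coordinates), a toy sketch with invented candidate tuples:

    ```python
    # Hypothetical muon candidates: (phi index, eta index, quality, pT code).
    candidates = [
        (52, 10, 12, 31),
        (53, 10,  8, 28),   # likely a ghost of the first candidate
        (90, -4, 10, 15),
    ]

    def ghost_bust(cands, phi_window=4):
        """Keep the highest-quality candidate within each phi window (toy logic)."""
        survivors = []
        for c in sorted(cands, key=lambda c: -c[2]):      # best quality first
            if all(abs(c[0] - s[0]) > phi_window for s in survivors):
                survivors.append(c)
        return survivors

    print(ghost_bust(candidates))   # the candidate at phi=53 is removed as a ghost
    ```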

  4. Non-adiabatic holonomic quantum computation in linear system-bath coupling

    PubMed Central

    Sun, Chunfang; Wang, Gangcheng; Wu, Chunfeng; Liu, Haodi; Feng, Xun-Li; Chen, Jing-Ling; Xue, Kang

    2016-01-01

    Non-adiabatic holonomic quantum computation in decoherence-free subspaces protects quantum information from control imprecisions and decoherence. For the non-collective decoherence that each qubit has its own bath, we show the implementations of two non-commutable holonomic single-qubit gates and one holonomic nontrivial two-qubit gate that compose a universal set of non-adiabatic holonomic quantum gates in decoherence-free-subspaces of the decoupling group, with an encoding rate of (N - 2)/N. The proposed scheme is robust against control imprecisions and the non-collective decoherence, and its non-adiabatic property ensures less operation time. We demonstrate that our proposed scheme can be realized by utilizing only two-qubit interactions rather than many-qubit interactions. Our results reduce the complexity of practical implementation of holonomic quantum computation in experiments. We also discuss the physical implementation of our scheme in coupled microcavities. PMID:26846444

  5. Non-adiabatic holonomic quantum computation in linear system-bath coupling.

    PubMed

    Sun, Chunfang; Wang, Gangcheng; Wu, Chunfeng; Liu, Haodi; Feng, Xun-Li; Chen, Jing-Ling; Xue, Kang

    2016-02-05

    Non-adiabatic holonomic quantum computation in decoherence-free subspaces protects quantum information from control imprecisions and decoherence. For the non-collective decoherence that each qubit has its own bath, we show the implementations of two non-commutable holonomic single-qubit gates and one holonomic nontrivial two-qubit gate that compose a universal set of non-adiabatic holonomic quantum gates in decoherence-free-subspaces of the decoupling group, with an encoding rate of (N - 2)/N. The proposed scheme is robust against control imprecisions and the non-collective decoherence, and its non-adiabatic property ensures less operation time. We demonstrate that our proposed scheme can be realized by utilizing only two-qubit interactions rather than many-qubit interactions. Our results reduce the complexity of practical implementation of holonomic quantum computation in experiments. We also discuss the physical implementation of our scheme in coupled microcavities.

  6. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on the pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
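
    The exponential spatial covariance mentioned above is straightforward to build; a minimal sketch (the variance, e-folding length and coordinates are illustrative assumptions):

    ```python
    import numpy as np

    def exp_covariance(coords, sigma2=1.0, lam=20.0):
        """C_ij = sigma^2 * exp(-d_ij / lambda), with d_ij the distance between
        pixels i and j. coords: (n, 2) pixel positions; lam in the same units."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return sigma2 * np.exp(-d / lam)

    coords = np.random.default_rng(1).uniform(0, 100, size=(5, 2))
    C = exp_covariance(coords)
    print(np.round(C, 3))   # symmetric, with sigma^2 on the diagonal (d = 0)
    ```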

  7. ELICIT: An alternative imprecise weight elicitation technique for use in multi-criteria decision analysis for healthcare.

    PubMed

    Diaby, Vakaramoko; Sanogo, Vassiki; Moussa, Kouame Richard

    2016-01-01

    In this paper, the readers are introduced to ELICIT, an imprecise weight elicitation technique for multicriteria decision analysis for healthcare. The application of ELICIT consists of two steps: the rank ordering of evaluation criteria based on decision-makers' (DMs) preferences using principal component analysis; and the estimation of criteria weights and their descriptive statistics using variable interdependent analysis and the Monte Carlo method. The application of ELICIT is illustrated with a hypothetical case study involving the elicitation of weights for five criteria used to select the best device for eye surgery. The criteria were ranked from 1 to 5, based on a strict preference relationship established by the DMs. For each criterion, the deterministic weight was estimated, as well as the standard deviation and 95% credibility interval. ELICIT is appropriate in situations where only ordinal DMs' preferences are available to elicit decision criteria weights.
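
    ELICIT's own estimation step uses principal component analysis and variable interdependent analysis; as a generic stand-in, here is a Monte Carlo sketch of turning a strict ranking into weight statistics (this is not the authors' algorithm, and all numbers are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_criteria, n_draws = 5, 100_000

    # Sample weights uniformly from the simplex; sorting each draw in descending
    # order conditions on the strict ranking w1 > w2 > ... > w5.
    w = np.sort(rng.dirichlet(np.ones(n_criteria), size=n_draws), axis=1)[:, ::-1]

    mean = w.mean(axis=0)
    ci = np.percentile(w, [2.5, 97.5], axis=0)
    for i in range(n_criteria):
        print(f"criterion {i+1}: weight {mean[i]:.3f}, "
              f"95% interval [{ci[0, i]:.3f}, {ci[1, i]:.3f}]")
    ```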

  8. Coupled oscillators in identification of nonlinear damping of a real parametric pendulum

    NASA Astrophysics Data System (ADS)

    Olejnik, Paweł; Awrejcewicz, Jan

    2018-01-01

    A damped parametric pendulum with friction is identified twice, by means of a precise and an imprecise mathematical model. A laboratory test stand designed for experimental investigations of nonlinear effects determined by viscous resistance and the stick-slip phenomenon serves as the model mechanical system. The influence of the accuracy of mathematical modeling on the time variability of the nonlinear damping coefficient of the oscillator is demonstrated. The free decay response of a precisely and an imprecisely modeled physical pendulum depends on two different time-varying coefficients of damping. The coefficients of the analyzed parametric oscillator are identified with the use of a new semi-empirical method based on a coupled oscillators approach, utilizing the fractional-order derivative of the discrete measurement series treated as an input to the numerical model. Results of the application of the proposed method for identifying the nonlinear coefficients of the damped parametric oscillator are illustrated and extensively discussed.

  9. Does Imprecision in The Waggle Dance Fit Patterns Predicted by The Tuned-Error Hypothesis?

    PubMed

    Tanner, David A; Visscher, P Kirk

    2010-05-01

    The waggle dance of the honey bee is used to recruit nest mates to a resource, though the direction indicated for a resource may vary greatly within a single dance. Some authors suggest that this variation exists as an adaptation to distribute recruits across a patch of flowers, and that, due to the variation's inverse relationship with distance, the shape of the recruit distribution will remain constant for resources at different distances. In this study, we test this hypothesis by examining how variation in the indication of direction and distance changes with respect to distance. We find that imprecision in the communication of direction does not diminish rapidly enough to accommodate an adaptive-error hypothesis, and we also find that variation in the indication of distance has a positive relationship with the distance of a resource from the hive.

  10. Does Imprecision in The Waggle Dance Fit Patterns Predicted by The Tuned-Error Hypothesis?

    PubMed Central

    Visscher, P. Kirk

    2010-01-01

    The waggle dance of the honey bee is used to recruit nest mates to a resource, though the direction indicated for a resource may vary greatly within a single dance. Some authors suggest that this variation exists as an adaptation to distribute recruits across a patch of flowers, and that, due to the variation's inverse relationship with distance, the shape of the recruit distribution will remain constant for resources at different distances. In this study, we test this hypothesis by examining how variation in the indication of direction and distance changes with respect to distance. We find that imprecision in the communication of direction does not diminish rapidly enough to accommodate an adaptive-error hypothesis, and we also find that variation in the indication of distance has a positive relationship with the distance of a resource from the hive. PMID:20414338

  11. Mammographic mass classification based on possibility theory

    NASA Astrophysics Data System (ADS)

    Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana

    2017-03-01

    Shape and margin features are very important for differentiating between benign and malignant masses in mammographic images. In fact, benign masses are usually round or oval and have smooth contours, whereas malignant tumors generally have irregular shapes and appear lobulated or spiculated in their margins. This knowledge suffers from imprecision and ambiguity. Therefore, this paper deals with the problem of mass classification using shape and margin features while taking into account the uncertainty linked to the degree of truth of the available information and the imprecision related to its content. Thus, in this work, we propose a novel mass classification approach which provides a possibility-based representation of the extracted shape features and builds a possibility knowledge basis in order to evaluate the possibility degree of malignancy and benignity for each mass. For experimentation, the MIAS database was used, and the classification results show the good performance of our approach in spite of using simple features.

  12. Extraction of decision rules via imprecise probabilities

    NASA Astrophysics Data System (ADS)

    Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.

    2017-05-01

    Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.

  13. Fuzzy logic based expert system for the treatment of mobile tooth.

    PubMed

    Mago, Vijay Kumar; Mago, Anjali; Sharma, Poonam; Mago, Jagmohan

    2011-01-01

    The aim of this research work is to design an expert system to assist dentists in treating the mobile tooth. There is a lack of consistency among dentists in choosing the treatment plan. Moreover, no expert system is currently available to verify and support such decision making in dentistry. A Fuzzy Logic based expert system has been designed to accept imprecise and vague values of dental sign-symptoms related to the mobile tooth, and the system suggests treatment plan(s). A comparison of the predictions made by the system with those of the dentist was conducted. A chi-square test of homogeneity was performed, and it was found that the system is capable of predicting accurate results. With this system, a dentist feels more confident when planning the treatment of a mobile tooth, as the decision can be verified against the expert system. The authors also argue that Fuzzy Logic provides an appropriate mechanism for handling imprecise values in the dental domain.

  14. Assessing the likely value of gravity and drawdown measurements to constrain estimates of hydraulic conductivity and specific yield during unconfined aquifer testing

    USGS Publications Warehouse

    Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.

    2007-01-01

    Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.

  15. Advanced Water Vapor Lidar Detection System

    NASA Technical Reports Server (NTRS)

    Elsayed-Ali, Hani

    1998-01-01

    In the present water vapor lidar system, the detected signal is sent over long cables to a waveform digitizer in a CAMAC crate. This has the disadvantage of transmitting analog signals over a relatively long distance, where they are subject to pickup noise, leading to a decrease in the signal-to-noise ratio. Generally, errors in the measurement of water vapor with the DIAL method arise from both random and systematic sources. Systematic errors in DIAL measurements are caused by both atmospheric and instrumentation effects. The selection of an on-line alexandrite laser with a narrow linewidth, suitable intensity and high spectral purity, and its operation at the center of the water vapor lines, ensures minimal influence of the laser spectral distribution on the DIAL measurement and avoids system overloads. Random errors are caused by noise in the detected signal. Variability of the photon statistics in the lidar return signal, noise resulting from detector dark current, and noise in the background signal are the main sources of random error. This type of error can be minimized by maximizing the signal-to-noise ratio, which can be achieved in several ways. One is to increase the laser pulse energy, by increasing its amplitude or the pulse repetition rate. Another is to use a detector system with higher quantum efficiency and lower noise; the selection of a narrow-band optical filter that rejects most of the daytime background light while retaining high optical efficiency is also an important issue. Following acquisition of the lidar data, random errors in the DIAL measurement can be minimized by averaging the data, but this reduces the vertical and horizontal resolution, so a trade-off is necessary to balance spatial resolution against measurement precision. Therefore, the main goal of this research effort is to increase the signal-to-noise ratio by a factor of 10 over the current system, using a newly evaluated, very low-noise avalanche photodiode detector and constructing a 10 MHz waveform digitizer which will replace the current CAMAC system.
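
    The averaging trade-off described above follows from photon (shot-noise) statistics; schematically (a textbook relation, not a formula quoted from this report):

    ```latex
    % Shot-noise-limited signal-to-noise ratio with N detected photons per bin:
    \mathrm{SNR} \propto \sqrt{N},
    \qquad
    \mathrm{SNR}_{M\,\text{shots}} = \sqrt{M}\,\mathrm{SNR}_{1\,\text{shot}},
    % so averaging M shots (or merging range bins) improves precision by sqrt(M)
    % at the cost of temporal (or vertical) resolution.
    ```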

  16. Decomposition of Sources of Errors in Seasonal Streamflow Forecasts in a Rainfall-Runoff Dominated Basin

    NASA Astrophysics Data System (ADS)

    Sinha, T.; Arumugam, S.

    2012-12-01

    Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized in updating water management plans and optimizing the generation of hydroelectric power. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow dominated basins, where initial hydrological conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to implement land surface models, which typically run on finer spatial and temporal scales. Consequently, errors from multiple sources are introduced at various stages in predicting seasonal streamflow. Therefore, in this study, we address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to various sources - (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? 2) How do monthly streamflow forecast errors propagate with lead time over various seasons? In this study, the Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL in the southeastern US and implemented with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate relative mean square errors due to: a) temporal disaggregation, b) spatial downscaling, c) Reverse Ensemble Streamflow Prediction (imprecise IHCs), d) ESP (no forecasts), and e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed for different lead times over different seasons.

  17. Verification of examination procedures in clinical laboratory for imprecision, trueness and diagnostic accuracy according to ISO 15189:2012: a pragmatic approach.

    PubMed

    Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-08-28

    Background: The International Standard ISO 15189 is recognized as a valuable guide in ensuring high quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practice model for implementing procedures employed for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement in Section 5.5.1.2. Methods: In order to identify the operative procedure to be used, approved documents were identified, together with the definition of the performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Then, operative flow charts were identified to compare the laboratory performance characteristics with those declared by manufacturers. Results: The choice of performance characteristics for verification was based on the approved documents used as guidance and on the specific purpose of the tests undertaken, with consideration made of: imprecision and trueness for quantitative methods; diagnostic accuracy for qualitative methods; and imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions: The described approach, balancing technological possibilities, risks and costs and assuring the compliance of the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with the ISO 15189 requirements.

  18. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

    QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
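
    Both rule families named above are easy to state in code; a minimal sketch (the control values, stable mean/SD and significance level are illustrative assumptions, not the paper's parameterization):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def rule_1ks(controls, mean, sd, k=3.0):
        """1_ks rule: reject the run if any control lies outside mean +/- k*SD."""
        z = (np.asarray(controls, dtype=float) - mean) / sd
        return bool(np.any(np.abs(z) > k))

    def rule_chi2(controls, sd_stable, alpha=0.01):
        """Chi-square check for increased random error: test the observed
        within-run variance against the stable SD (one-sided)."""
        x = np.asarray(controls, dtype=float)
        stat = (x.size - 1) * x.var(ddof=1) / sd_stable**2
        return bool(stat > chi2.ppf(1 - alpha, df=x.size - 1))

    controls = [101.2, 96.8, 104.9, 93.5]                # one QC event
    print(rule_1ks(controls, mean=100.0, sd=2.0))        # True: 93.5 is 3.25 SD low
    print(rule_chi2(controls, sd_stable=2.0))            # True: observed SD ~5.0 vs 2.0
    ```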

  19. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    PubMed

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirement that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: a) the identification of quantitative tests; b) the classification of tests in relation to their clinical purpose; and c) the identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQA) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs only imprecision was used to estimate MU; among the remaining MPs, the bias component was not estimable for 22 MPs because the EQA results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that the uncertainty of bias is a minor factor contributing to MU, the bias component being the most relevant contributor in all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool for monitoring the analytical quality of test results, since it is calculated using a combination of long-term IQC imprecision and bias based on EQA results.
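
    The kind of combination behind such estimates is typically "top-down" (e.g. the Nordtest approach): long-term IQC imprecision and an EQA-derived bias component are added in quadrature. A sketch with made-up numbers and a simplified single-round bias treatment (not necessarily the exact weighting used in this study):

    ```python
    import math

    u_rw  = 2.1   # long-term within-laboratory imprecision from IQC, as CV%
    bias  = 1.3   # mean deviation from EQA target values, %
    u_ref = 0.8   # uncertainty of the EQA assigned values, %

    u_bias = math.sqrt(bias**2 + u_ref**2)    # bias component
    u_c = math.sqrt(u_rw**2 + u_bias**2)      # combined standard uncertainty
    U = 2 * u_c                               # expanded uncertainty (k=2, ~95%)
    print(f"u(bias) = {u_bias:.2f}%, u_c = {u_c:.2f}%, U = {U:.2f}%")
    ```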

  20. Therapeutic drug monitoring of infliximab: performance evaluation of three commercial ELISA kits.

    PubMed

    Schmitz, Ellen M H; van de Kerkhof, Daan; Hamann, Dörte; van Dongen, Joost L J; Kuijper, Philip H M; Brunsveld, Luc; Scharnhorst, Volkher; Broeren, Maarten A C

    2016-07-01

    Therapeutic drug monitoring (TDM) of infliximab (IFX, Remicade®) can help optimize therapy efficacy. Many assays are available for this purpose; however, a reference standard is lacking. Therefore, we evaluated the analytical performance, agreement and clinically relevant differences of three commercially available IFX ELISA kits on an automated processing system. The kits of Theradiag (Lisa Tracker Infliximab), Progenika (Promonitor IFX) and apDia (Infliximab ELISA) were implemented on an automated processing system. Imprecision was determined by triplicate measurements of patient samples on five days. Agreement was evaluated by analysis of 30 patient samples and four spiked samples by the selected ELISA kits and the in-house IFX ELISA of Sanquin Diagnostics (Amsterdam, The Netherlands). Therapeutic consequences were evaluated by dividing patients into four treatment groups using cut-off levels of 1, 3 and 7 μg/mL and determining assay concordance. Within-run and between-run imprecision were acceptable (≤12% and ≤17%, respectively) within the quantification range of the selected ELISA kits. The apDia assay had the best precision and agreement with target values. Statistically significant differences were found between all assays except between Sanquin Diagnostics and the Lisa Tracker assay. The Promonitor assay measured the lowest IFX concentrations, the apDia assay the highest. When patients were classified into four treatment categories, 70% concordance was achieved. Although all assays are suitable for TDM, significant differences were observed in both imprecision and agreement. Therapeutic consequences were acceptable when patients were divided into treatment categories, but this could be improved by assay standardization.

  1. Expressing analytical performance from multi-sample evaluation in laboratory EQA.

    PubMed

    Thelen, Marc H M; Jansen, Rob T P; Weykamp, Cas W; Steigstra, Herman; Meijer, Ron; Cobbaert, Christa M

    2017-08-28

    To provide its participants with an external quality assessment system (EQAS) that can be used to check trueness, the Dutch EQAS organizer, the Organization for Quality Assessment of Laboratory Diagnostics (SKML), has innovated its general chemistry scheme over the last decade by introducing fresh-frozen commutable samples whose values were assigned by Joint Committee for Traceability in Laboratory Medicine (JCTLM)-listed reference laboratories using reference methods where possible. Here we present some important innovations in our feedback reports that allow participants to judge whether their trueness and imprecision meet predefined analytical performance specifications. Sigma metrics are used to calculate performance indicators named 'sigma values'. Tolerance intervals are based both on total error allowable (TEa) according to biological variation data and on the state of the art (SA), in line with the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Milan consensus. The existing SKML feedback reports, which express trueness as the agreement between the regression line through the results of the last 12 months and the values obtained from reference laboratories, and which calculate imprecision from the residuals of the regression line, are now enriched with sigma values calculated from the degree to which the combination of trueness and imprecision is within the tolerance limits. The information, condensed into a simple two-point scoring system, is also represented graphically in addition to the existing difference plot. By adding sigma metrics-based performance evaluation in relation to both TEa and SA tolerance intervals to its EQAS schemes, SKML provides its participants with a powerful and actionable check on accuracy.
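
    The sigma value itself is a one-line computation; a sketch using the conventional sigma-metric formula (the numbers are invented; SKML's exact regression-based inputs are described in the abstract above):

    ```python
    def sigma_value(tea_pct, bias_pct, cv_pct):
        """Conventional sigma metric: how many SDs of imprecision fit between the
        observed bias and the total allowable error (TEa)."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical analyte: TEa 10%, bias 1.5%, CV 2.0%  ->  sigma = 4.25
    print(sigma_value(10.0, 1.5, 2.0))
    ```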

  2. Slow climate velocities of mountain streams portend their role as refugia

    Science.gov Websites

    result in imprecise estimates of network extent and flow dynamics (41).

  3. Rosuvastatin for cardiovascular prevention: too many uncertainties.

    PubMed

    2009-08-01

    A randomised trial showed that rosuvastatin had some efficacy in preventing a first cardiovascular event, but there was an increased risk of diabetes. The article describing this study is too imprecise to recommend the use of rosuvastatin in this setting.

  4. Approximate Reasoning: Past, Present, Future

    DTIC Science & Technology

    1990-06-27

    This note presents a personal view of the state of the art in the representation and manipulation of imprecise and uncertain information by automated ... processing systems. To contrast their objectives and characteristics with the sound deductive procedures of classical logic, methodologies developed

  5. Traffic flow forecasting using approximate nearest neighbor nonparametric regression

    DOT National Transportation Integrated Search

    2000-12-01

    The purpose of this research is to enhance nonparametric regression (NPR) for use in real-time systems by first reducing execution time using advanced data structures and imprecise computations and then developing a methodology for applying NPR. Due ...

  6. BioDAQ--a simple biosignal acquisition system for didactic use.

    PubMed

    Csaky, Z; Mihalas, G I; Focsa, M

    2002-01-01

    A simple, inexpensive device for biosignal acquisition is presented. It mainly meets the requirements for didactic purposes specific to medical informatics laboratory classes. The system has two main types of devices: the 'student unit', the simplest one, used during lessons on real signals, and the 'demo unit', which can also be used in medical practice or for collecting biological signals. It is able to record: optical pulse, sphygmogram, ECG (1-4 leads), EEG or EMG (1-4 channels). For didactic purposes it has a wide range of recording options: variable sampling rate, gain and filtering. It can also be used for tele-acquisition via the Internet.

  7. Front-end electronics and DAQ for the EURITRACK tagged neutron inspection system

    NASA Astrophysics Data System (ADS)

    Lunardon, M.; Bottosso, C.; Fabris, D.; Moretto, S.; Nebbia, G.; Pesente, S.; Viesti, G.; Bigongiari, A.; Colonna, A.; Tintori, C.; Valkovic, V.; Sudac, D.; Peerani, P.; Sequeira, V.; Salvato, M.

    2007-08-01

    The EURopean Illicit TRAfficking Countermeasures Kit (EURITRACK) Front-End and Data Acquisition System is a compact set of VME boards interfaced with a standard PC. The system is part of a cargo container inspection portal based on the tagged neutron technique. The front-end processes all detector signals and checks coincidences between any of the 64 pixels of the alpha particle detector and any gamma-ray signals in the 22 NaI(Tl) scintillators. The system is capable of handling the data flow at neutron fluxes up to the portal limiting value of 10^8 neutrons/second. Some typical applications are presented.
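
    The pixel-gamma coincidence logic can be sketched offline as a time-window match. The Python below is illustrative only; the 20 ns window and the data layout are assumptions, not EURITRACK firmware parameters:

        WINDOW_NS = 20.0  # assumed coincidence window, for illustration only

        def coincidences(alpha_hits, gamma_hits, window=WINDOW_NS):
            # each hit is (time_ns, channel); both lists sorted by time
            out, j = [], 0
            for t_a, pixel in alpha_hits:
                while j < len(gamma_hits) and gamma_hits[j][0] < t_a - window:
                    j += 1
                k = j
                while k < len(gamma_hits) and gamma_hits[k][0] <= t_a + window:
                    out.append((pixel, gamma_hits[k][1], gamma_hits[k][0] - t_a))
                    k += 1
            return out

        alphas = [(100.0, 12), (450.0, 33)]             # (time_ns, pixel 0..63)
        gammas = [(108.0, 5), (300.0, 2), (460.0, 21)]  # (time_ns, NaI 0..21)
        print(coincidences(alphas, gammas))  # pixel-gamma pairs within window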

  8. The TOTEM T1 read out card motherboard

    NASA Astrophysics Data System (ADS)

    Minutoli, S.; Lo Vetere, M.; Robutti, E.

    2010-12-01

    This article describes the Read Out Card (ROC) motherboard, which is the main component of the T1 forward telescope front-end electronic system. The ROC's main objectives are to acquire tracking data and trigger information from the detector. It performs data conversion from electrical to optical format and transfers the data streams to the next level of the system, and it implements Slow Control modules that are able to receive, decode and distribute the LHC machine low-jitter clock and fast commands. The ROC also provides a spy mezzanine connection based on a programmable FPGA and USB 2.0 for laboratory and portable DAQ debugging systems.

  9. PERSPECTIVES ON SETTING SUCCESS CRITERIA FOR WETLAND RESTORATION

    EPA Science Inventory

    The task of determining the success of wetland restoration has long been challenging and sometimes contentious because success is an imprecise term that means different things in different situations and to different people. Compliance success is determined by evaluating complian...

  10. Empathy: The Charismatic Chimera.

    ERIC Educational Resources Information Center

    Macarov, David

    1978-01-01

    Three major meanings of "empathy" are discussed with reference to the widespread imprecision in its use. It is suggested that undifferentiated use of the term in social work education is dangerous, particularly in view of the positive valence associated with being empathetic. (Author/BH)

  11. DIGITAL IMAGE ANALYSIS OF ZOSTERA MARINA LEAF INJURY

    EPA Science Inventory

    Current methods for assessing leaf injury in Zostera marina (eelgrass) utilize subjective indexes for desiccation injury and wasting disease. Because of the subjective nature of these measures, they are inherently imprecise, making them difficult to use in quantifying complex leaf...

  12. Nuclear Science Symposium, 21st, Scintillation and Semiconductor Counter Symposium, 14th, and Nuclear Power Systems Symposium, 6th, Washington, D.C., December 11-13, 1974, Proceedings

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers are presented dealing with latest advances in the design of scintillation counters, semiconductor radiation detectors, gas and position sensitive radiation detectors, and the application of these detectors in biomedicine, satellite instrumentation, and environmental and reactor instrumentation. Some of the topics covered include entopistic scintillators, neutron spectrometry by diamond detector for nuclear radiation, the spherical drift chamber for X-ray imaging applications, CdTe detectors in radioimmunoassay analysis, CAMAC and NIM systems in the space program, a closed loop threshold calibrator for pulse height discriminators, an oriented graphite X-ray diffraction telescope, design of a continuous digital-output environmental radon monitor, and the optimization of nanosecond fission ion chambers for reactor physics. Individual items are announced in this issue.

  13. User's manual for the CDC-1 digitizer controller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferron, J.R.

    1994-09-01

    A detailed description of how to use the CDC-1 digitizer controller is given. The CDC-1 is used with the CAMAC format digitizer models in the TRAQ series (manufactured by DSP Technology Inc.), the DAD-1 data acquisition daughter board, and the Intel i860-based SuperCard-2 (manufactured by CSP Inc.) to form a high speed data acquisition and real time analysis system. Data can be transferred to the memory on the SuperCard-2 at a rate as high as 40 million 14-bit samples per second. Depending on the model of TRAQ digitizer in use, digitizing rates up to 3.33 MHz are supported (with eight data channels), or, for instance, at a sample rate of 100 kHz, 384 data channels can be acquired.

  14. Apparatus for controlling the scan width of a scanning laser beam

    DOEpatents

    Johnson, Gary W.

    1996-01-01

    Swept-wavelength lasers are often used in absorption spectroscopy applications. In experiments where high accuracy is required, it is desirable to continuously monitor and control the range of wavelengths scanned (the scan width). A system has been demonstrated whereby the scan width of a swept ring-dye laser, or semiconductor diode laser, can be measured and controlled in real-time with a resolution better than 0.1%. Scan linearity, or conformity to a nonlinear scan waveform, can be measured and controlled. The system of the invention consists of a Fabry-Perot interferometer, three CAMAC interface modules, and a microcomputer running a simple analysis and proportional-integral control algorithm. With additional modules, multiple lasers can be simultaneously controlled. The invention also includes an embodiment implemented on an ordinary PC with a multifunction plug-in board.
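
    The control loop lends itself to a compact illustration. The Python sketch below shows a generic proportional-integral update of the kind described, with hypothetical gains and setpoint; it is not the patented implementation:

        class ScanWidthPI:
            """Generic PI controller for a measured laser scan width."""
            def __init__(self, kp=0.5, ki=0.1, setpoint=30.0):
                # gains and setpoint are hypothetical, not the patent's values
                self.kp, self.ki, self.setpoint = kp, ki, setpoint
                self.integral = 0.0

            def update(self, measured_width, dt):
                # returns a correction to the sweep-amplitude control signal
                error = self.setpoint - measured_width
                self.integral += error * dt
                return self.kp * error + self.ki * self.integral

        # Each scan: width = fringe count x Fabry-Perot free spectral range.
        ctl = ScanWidthPI()
        for width in (29.2, 29.6, 29.9, 30.05):
            print(ctl.update(width, dt=0.1))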

  15. Apparatus for controlling the scan width of a scanning laser beam

    DOEpatents

    Johnson, G.W.

    1996-10-22

    Swept-wavelength lasers are often used in absorption spectroscopy applications. In experiments where high accuracy is required, it is desirable to continuously monitor and control the range of wavelengths scanned (the scan width). A system has been demonstrated whereby the scan width of a swept ring-dye laser, or semiconductor diode laser, can be measured and controlled in real-time with a resolution better than 0.1%. Scan linearity, or conformity to a nonlinear scan waveform, can be measured and controlled. The system of the invention consists of a Fabry-Perot interferometer, three CAMAC interface modules, and a microcomputer running a simple analysis and proportional-integral control algorithm. With additional modules, multiple lasers can be simultaneously controlled. The invention also includes an embodiment implemented on an ordinary PC with a multifunction plug-in board. 8 figs.

  16. Digital Low Level RF Systems for Fermilab Main Ring and Tevatron

    NASA Astrophysics Data System (ADS)

    Chase, B.; Barnes, B.; Meisner, K.

    1997-05-01

    At Fermilab, a new Low Level RF system is successfully installed and operating in the Main Ring. Installation is proceeding for a Tevatron system. This upgrade replaces aging CAMAC/NIM components for an increase in accuracy, reliability, and flexibility. These VXI systems are based on a custom three-channel direct digital synthesizer (DDS) module. Each synthesizer channel is capable of independent or ganged operation for both frequency and phase modulation. New frequency and phase values are computed at a 100 kHz rate on the module's Analog Devices ADSP21062 (SHARC) digital signal processor. The DSP concurrently handles feedforward, feedback, and beam manipulations. Higher-level state machines and the control system interface are handled at the crate level using the VxWorks operating system. This paper discusses the hardware, software and operational aspects of these LLRF systems.
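
    At the heart of such a module is a phase accumulator whose per-clock increment sets the output frequency. A minimal Python sketch with illustrative parameters (not the Fermilab module's actual clock rate or word size):

        import math

        ACC_BITS = 32       # assumed accumulator width
        F_CLOCK = 50e6      # assumed DDS clock frequency

        def tuning_word(f_out):
            # phase increment per clock tick for a desired output frequency
            return round(f_out * 2**ACC_BITS / F_CLOCK)

        phase, word, samples = 0, tuning_word(1e6), []  # 1 MHz output
        for _ in range(8):
            phase = (phase + word) % 2**ACC_BITS
            samples.append(math.sin(2 * math.pi * phase / 2**ACC_BITS))
        print(samples)  # successive output samples of the synthesized tone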

  17. The auto-tuned land data assimilation system (ATLAS)

    USDA-ARS?s Scientific Manuscript database

    Land data assimilation systems are tasked with merging remotely sensed soil moisture retrievals with information derived from a soil water balance model driven (principally) by observed rainfall. The performance of such systems is frequently degraded by the imprecise specification of parameters ...

  18. Automating Web Collection and Validation of GPS data for Longitudinal Urban Travel Studies

    DOT National Transportation Integrated Search

    2012-08-01

    Traditional paper and phone travel surveys are expensive, time consuming, and have problems of missing trips, illogical trip sequences, and imprecise travel time. GPS-based travel surveys can avoid many of these problems and are becoming increasing...

  19. Reassessment of the Access Testosterone chemiluminescence assay and comparison with LC-MS method.

    PubMed

    Dittadi, Ruggero; Matteucci, Mara; Meneghetti, Elisa; Ndreu, Rudina

    2018-03-01

    To reassess the imprecision and Limit of Quantitation, to evaluate the cross-reaction with dehydroepiandrosterone-sulfate (DHEAS), the accuracy toward liquid chromatography-mass spectrometry (LC-MS) and the reference interval of the Access Testosterone method, performed on the DxI immunoassay platform (Beckman Coulter). Imprecision was evaluated by testing six pool samples assayed in 20 different runs using two reagent lots. The cross-reaction with DHEAS was studied both by a displacement curve and by spiking DHEAS standard into two serum samples with known amounts of testosterone. The comparison with LC-MS was evaluated by Passing-Bablok analysis in 21 routine serum samples and 19 control samples from an External Quality Assurance (EQA) scheme. The reference interval was verified by an indirect estimation on 2445 male and 2838 female outpatients. The imprecision study showed a coefficient of variation (CV) between 2.7% and 34.7% for serum pools from 16.3 down to 0.27 nmol/L. The Limit of Quantitation at 20% CV was 0.53 nmol/L. DHEAS showed a cross-reaction of 0.0074%. The comparison with LC-MS showed a trend toward a slight underestimation by the immunoassay vs LC-MS (Passing-Bablok equations: DxI = -0.24 + 0.906 LCMS in serum samples and DxI = -0.299 + 0.981 LCMS in EQA samples). The verification of the reference interval showed a 2.5th-97.5th percentile distribution of 6.6-24.3 nmol/L for males over 14 years and <0.5-2.78 nmol/L for females, in accordance with the reference intervals reported by the manufacturer. The Access Testosterone method can be considered an adequately reliable tool for testosterone measurement. © 2017 Wiley Periodicals, Inc.
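
    A simplified version of the Passing-Bablok comparison can be sketched in a few lines. The Python below uses the median of pairwise slopes and omits the exact offset-adjusted median and tie handling of the full procedure; the paired values are invented:

        from statistics import median

        def passing_bablok(x, y):
            # simplified: median of pairwise slopes, then median intercept
            slopes, n = [], len(x)
            for i in range(n):
                for j in range(i + 1, n):
                    if x[j] != x[i]:
                        s = (y[j] - y[i]) / (x[j] - x[i])
                        if s != -1:      # full method also shifts the median
                            slopes.append(s)
            slope = median(slopes)
            intercept = median(yi - slope * xi for xi, yi in zip(x, y))
            return intercept, slope

        # Hypothetical paired results (LC-MS as x, immunoassay as y), nmol/L:
        lcms = [2.0, 5.0, 8.0, 12.0, 16.0]
        dxi = [1.6, 4.4, 7.1, 10.7, 14.4]
        print(passing_bablok(lcms, dxi))  # expect intercept < 0, slope < 1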

  20. Status of internal quality control for thyroid hormones immunoassays from 2011 to 2016 in China.

    PubMed

    Zhang, Shishi; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo

    2018-01-01

    Internal quality control (IQC) plays a key role in the evaluation of precision performance in clinical laboratories. This report aims to present the precision status of thyroid hormone immunoassays from 2011 to 2016 in China. Through the Clinet-EQA reporting system, IQC information on triiodothyronine and thyroxine in their free and total forms (FT3, TT3, FT4, TT4), as well as thyroid stimulating hormone (TSH), was collected from participant laboratories submitting IQC data each February from 2011 to 2016. For each analyte, current CVs were compared among different years and measurement systems. Percentages of laboratories meeting five allowable imprecision specifications (pass rates) were also calculated. An analysis of IQC practice was conducted to constitute a complete report. Current CVs decreased significantly over the six years, but pass rates increased only for FT3. FT3, TT3, FT4, and TT4 had their highest pass rates against the 1/3 TEa imprecision specification, whereas TSH had its highest pass rate against the minimum imprecision specification derived from biological variation. Constituent ratios of the four mainstream measurement systems changed insignificantly. In 2016, the precision performance of the Abbott and Roche systems was better than that of the Beckman and Siemens systems for all analytes except FT3, for which Siemens was also better than Beckman. Analysis of IQC practice demonstrated wide variation and great progress in IQC rules and control frequency. Despite changes in IQC practice, only FT3 showed improved precision performance over the six years, and the precision status of the five analytes in China remains unsatisfactory. Ongoing investigation and improvement of IQC are still needed. © 2017 Wiley Periodicals, Inc.

  1. Biological variation of vitamins in blood of healthy individuals.

    PubMed

    Talwar, Dinesh K; Azharuddin, Mohammed K; Williamson, Cathy; Teoh, Yee Ping; McMillan, Donald C; St J O'Reilly, Denis

    2005-11-01

    Components of biological variation can be used to define objective quality specifications (imprecision, bias, and total error), to assess the usefulness of reference values [index of individuality (II)], and to evaluate significance of changes in serial results from an individual [reference change value (RCV)]. However, biological variation data on vitamins in blood are limited. The aims of the present study were to determine the intra- and interindividual biological variation of vitamins A, E, B(1), B(2), B(6), C, and K and carotenoids in plasma, whole blood, or erythrocytes from apparently healthy persons and to define quality specifications for vitamin measurements based on their biology. Fasting plasma, whole blood, and erythrocytes were collected from 14 healthy volunteers at regular weekly intervals over 22 weeks. Vitamins were measured by HPLC. From the data generated, the intra- (CV(I)) and interindividual (CV(G)) biological CVs were estimated for each vitamin. Derived quality specifications, II, and RCV were calculated from CV(I) and CV(G). CV(I) was 4.8%-38% and CV(G) was 10%-65% for the vitamins measured. The CV(I)s for vitamins A, E, B(1), and B(2) were lower (4.8%-7.6%) than for the other vitamins in blood. For all vitamins, CV(G) was higher than CV(I), with II <1.0 (range, 0.36-0.95). The RCVs for vitamins were high (15.8%-108%). Apart from vitamins A, B(1), and erythrocyte B(2), the imprecision of our methods for measurement of vitamins in blood was within the desirable goal. For most vitamin measurements in plasma, whole blood, or erythrocytes, the desirable imprecision goals based on biological variation are obtainable by current methodologies. Population reference intervals for vitamins are of limited value in demonstrating deficiency or excess.
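
    The derived statistics follow directly from the component CVs. A minimal sketch using the standard formulas II = CV(I)/CV(G) and RCV = 2^0.5 × z × (CV(A)^2 + CV(I)^2)^0.5, with hypothetical inputs:

        def index_of_individuality(cvi, cvg):
            # II < 1 implies population reference intervals are of limited use
            return cvi / cvg

        def reference_change_value(cva, cvi, z=1.96):
            # RCV: change between serial results significant at p < 0.05
            return 2**0.5 * z * (cva**2 + cvi**2)**0.5

        # Hypothetical vitamin: analytical CV 3%, within-person CV 5%,
        # between-person CV 12% (all in percent)
        print(index_of_individuality(5.0, 12.0))  # ~0.42 -> II < 1
        print(reference_change_value(3.0, 5.0))   # ~16% change is significant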

  2. The microINR portable coagulometer: analytical quality and user-friendliness of a PT (INR) point-of-care instrument.

    PubMed

    Larsen, Pia Bükmann; Storjord, Elin; Bakke, Åsne; Bukve, Tone; Christensen, Mikael; Eikeland, Joakim; Haugen, Vegar Engeland; Husby, Kristin; McGrail, Rie; Mikaelsen, Solveig Meier; Monsen, Grete; Møller, Mette Fogh; Nybo, Jan; Revsholm, Jesper; Risøy, Aslaug Johanne; Skålsvik, Unni Marie; Strand, Heidi; Teruel, Reyes Serrano; Theodorsson, Elvar

    2017-04-01

    Regular measurement of prothrombin time as an international normalized ratio, PT (INR), is mandatory for optimal and safe use of warfarin. The Scandinavian evaluation of laboratory equipment for primary health care (SKUP) evaluated the microINR® portable coagulometer (iLine Microsystems S.L., Spain) for measurement of PT (INR). Analytical quality and user-friendliness were evaluated under optimal conditions at an accredited hospital laboratory and at two primary health care centres (PHCCs). Patients were recruited at the outpatient clinic of the Laboratory of Medical Biochemistry, St Olav's University Hospital, Trondheim, Norway (n = 98) and from two PHCCs (n = 88). Venous blood samples were analyzed under optimal conditions on the STA-R® Evolution with STA-SPA+ reagent (Stago, France) (Owren method), and the results were compared to capillary measurements on the microINR®. The imprecision of the microINR® was 6% (90% CI: 5.3-7.0%) and 6.3% (90% CI: 5.1-8.3%) in the outpatient clinic and PHCC2, respectively, for INR ≥2.5. The microINR® did not meet the SKUP quality requirement of imprecision ≤5.0%. For INR <2.5 at PHCC2 and at both levels at PHCC1, CV% was ≤5.0. The accuracy fulfilled the SKUP quality goal at both the outpatient clinic and the PHCCs. User-friendliness of the operation manual was rated as intermediate, defined by SKUP as neutral ratings assessed as neither good nor bad. The operation facilities were rated unsatisfactory, and the time factors satisfactory. In conclusion, the quality requirements for imprecision were not met. The SKUP criterion for accuracy was fulfilled both at the hospital and at the PHCCs. The user-friendliness was rated intermediate.

  3. Establishment and validation of analytical reference panels for the standardization of quantitative BCR-ABL1 measurements on the international scale.

    PubMed

    White, Helen E; Hedges, John; Bendit, Israel; Branford, Susan; Colomer, Dolors; Hochhaus, Andreas; Hughes, Timothy; Kamel-Reid, Suzanne; Kim, Dong-Wook; Modur, Vijay; Müller, Martin C; Pagnano, Katia B; Pane, Fabrizio; Radich, Jerry; Cross, Nicholas C P; Labourier, Emmanuel

    2013-06-01

    Current guidelines for managing Philadelphia-positive chronic myeloid leukemia include monitoring the expression of the BCR-ABL1 (breakpoint cluster region/c-abl oncogene 1, non-receptor tyrosine kinase) fusion gene by quantitative reverse-transcription PCR (RT-qPCR). Our goal was to establish and validate reference panels to mitigate the interlaboratory imprecision of quantitative BCR-ABL1 measurements and to facilitate global standardization on the international scale (IS). Four-level secondary reference panels were manufactured under controlled and validated processes with synthetic Armored RNA Quant molecules (Asuragen) calibrated to reference standards from the WHO and the NIST. Performance was evaluated in IS reference laboratories and with non-IS-standardized RT-qPCR methods. For most methods, percent ratios for BCR-ABL1 e13a2 and e14a2 relative to ABL1 or BCR were robust at 4 different levels and linear over 3 logarithms, from 10% to 0.01% on the IS. The intraassay and interassay imprecision was <2-fold overall. Performance was stable across 3 consecutive lots, in multiple laboratories, and over a period of 18 months to date. International field trials demonstrated the commutability of the reagents and their accurate alignment to the IS within the intra- and interlaboratory imprecision of IS-standardized methods. The synthetic calibrator panels are robust, reproducibly manufactured, analytically calibrated to the WHO primary standards, and compatible with most BCR-ABL1 RT-qPCR assay designs. The broad availability of secondary reference reagents will further facilitate interlaboratory comparative studies and independent quality assessment programs, which are of paramount importance for worldwide standardization of BCR-ABL1 monitoring results and the optimization of current and new therapeutic approaches for chronic myeloid leukemia. © 2013 American Association for Clinical Chemistry.
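
    Alignment to the IS is, at its core, a multiplicative correction. The sketch below shows the standard percent-ratio and conversion-factor arithmetic with invented copy numbers and CF; laboratory CFs are derived empirically against reference materials such as these panels:

        def percent_ratio(bcr_abl1_copies, control_copies):
            # BCR-ABL1 expressed as a percentage of a control gene (e.g. ABL1)
            return 100.0 * bcr_abl1_copies / control_copies

        def to_is(raw_percent_ratio, cf):
            # align a raw ratio to the international scale via the lab's CF
            return raw_percent_ratio * cf

        raw = percent_ratio(bcr_abl1_copies=52, control_copies=41000)
        print(to_is(raw, cf=0.8))  # ~0.1% IS, close to a 3-log reduction (MR3)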

  4. National continuous surveys on internal quality control for HbA1c in 306 clinical laboratories of China from 2012 to 2016: Continual improvement.

    PubMed

    Li, Tingting; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo

    2017-09-01

    This study aimed to evaluate whether the quality performance of clinical laboratories in China has been greatly improved and whether Internal Quality Control (IQC) practice of HbA1c has also been changed since National Center for Clinical Laboratories (NCCL) of China organized laboratories to report IQC data for HbA1c in 2012. Internal Quality Control information of 306 External Quality Assessment (EQA) participant laboratories which kept reporting IQC data in February from 2012 to 2016 were collected by Web-based EQA system. Then percentages of laboratories meeting four different imprecision specifications for current coefficient of variations (CVs) of HbA1c measurements were calculated. Finally, we comprehensively analyzed analytical systems and IQC practice of HbA1c measurements. The current CVs of HbA1c tests have decreased significantly from 2012 to 2016. And percentages of laboratories meeting four imprecision specifications for CVs all showed the increasing tendency year by year. As for analytical system, 52.1% (159/306) laboratories changed their systems with the change in principle of assay. And many laboratories began to use cation exchange high-performance liquid chromatography (CE-HPLC) instead of Immunoturbidimetry, because CE-HPLC owed a lower intra-laboratory CVs. The data of IQC practice, such as IQC rules and frequency, also showed significant variability among years with overall tendency of meeting requirements. The imprecision performance of HbA1c tests has been improved in these 5 years with the change in IQC practice, but it is still disappointing in China. Therefore, laboratories should actively find existing problems and take action to promote performance of HbA1c measurements. © 2016 Wiley Periodicals, Inc.

  5. Stability of 35 biochemical and immunological routine tests after 10 hours storage and transport of human whole blood at 21°C.

    PubMed

    Henriksen, Linda O; Faber, Nina R; Moller, Mette F; Nexo, Ebba; Hansen, Annebirthe B

    2014-10-01

    Suitable procedures for transport of blood samples from general practitioners to hospital laboratories are needed. Here we explore routine testing on samples stored and transported as whole blood in lithium-heparin or serum tubes. Blood samples were collected from 106 hospitalized patients and analyzed on Architect c8000 or Advia Centaur XP for 35 analytes at baseline, and after storage and transport of whole blood in lithium-heparin or serum tubes at 21 ± 1°C for 10 h. Bias and imprecision (representing variation from analysis and storage) were calculated from values at baseline and after storage, and differences were tested by paired t-tests. Results were compared to goals set by the laboratory. We observed no statistically significant bias, and results within the goal for imprecision, between baseline samples and 10-h samples for albumin, alkaline phosphatase, antitrypsin, bilirubin, creatinine, free triiodothyronine, γ-glutamyl transferase, haptoglobin, immunoglobulin G, lactate dehydrogenase, prostate specific antigen, total carbon dioxide, and urea. Alanine aminotransferase, amylase, C-reactive protein, calcium, cholesterol, creatine kinase, ferritin, free thyroxine, immunoglobulin A, immunoglobulin M, orosomucoid, sodium, transferrin, and triglycerides met goals for imprecision, though they showed a minor, but statistically significant, bias in results after storage. Cobalamin, folate, HDL-cholesterol, iron, phosphate, potassium, thyroid stimulating hormone and urate warranted concern, but only folate and phosphate showed deviations of clinical importance. We conclude that whole blood in lithium-heparin or serum tubes stored for 10 h at 21 ± 1°C may be used for routine analysis without restrictions for all investigated analytes but folate and phosphate.
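
    The baseline-versus-stored comparison reduces to a paired test per analyte. A minimal Python sketch with invented values, using SciPy's paired t-test:

        from scipy import stats

        baseline = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9]
        stored = [4.0, 3.9, 4.9, 4.5, 4.1, 3.9]  # same patients after 10 h

        t, p = stats.ttest_rel(baseline, stored)
        bias = sum(s - b for b, s in zip(baseline, stored)) / len(baseline)
        print(f"mean bias = {bias:+.3f}, p = {p:.3f}")  # significant if p < 0.05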

  6. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxymethamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2 M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane, followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min, and after optimization of the UHPLC-MS/MS parameters, validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time-of-flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method is suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point of intake of a small dose of amphetamine can be estimated, which might be useful when drug-facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Sketching Curves for Normal Distributions--Geometric Connections

    ERIC Educational Resources Information Center

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…

  8. Detection and measurement of plant disease symptoms using visible-wavelength photography and image analysis

    USDA-ARS?s Scientific Manuscript database

    Disease assessment is required for many purposes including predicting yield loss, monitoring and forecasting epidemics, judging host resistance, and for studying fundamental biological host-pathogen processes. Inaccurate and/or imprecise assessments can result in incorrect conclusions or actions. Im...

  9. Phonological Awareness Deficits in Developmental Dyslexia and the Phonological Representations Hypothesis.

    ERIC Educational Resources Information Center

    Swan, Denise; Goswami, Usha

    1997-01-01

    Used picture-naming task to identify accurate/inaccurate phonological representations by dyslexic and control children; compared performance on phonological measures for words with precise/imprecise representations. Found that frequency effects in phonological tasks disappeared after considering representational quality, and that availability of…

  10. ELICIT: An alternative imprecise weight elicitation technique for use in multi-criteria decision analysis for healthcare

    PubMed Central

    Diaby, Vakaramoko; Sanogo, Vassiki; Moussa, Kouame Richard

    2015-01-01

    Objective In this paper, the readers are introduced to ELICIT, an imprecise weight elicitation technique for multicriteria decision analysis for healthcare. Methods The application of ELICIT consists of two steps: the rank ordering of evaluation criteria based on decision-makers’ (DMs) preferences using the principal component analysis; and the estimation of criteria weights and their descriptive statistics using the variable interdependent analysis and the Monte Carlo method. The application of ELICIT is illustrated with a hypothetical case study involving the elicitation of weights for five criteria used to select the best device for eye surgery. Results The criteria were ranked from 1–5, based on a strict preference relationship established by the DMs. For each criterion, the deterministic weight was estimated as well as the standard deviation and 95% credibility interval. Conclusions ELICIT is appropriate in situations where only ordinal DMs’ preferences are available to elicit decision criteria weights. PMID:26361235
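
    The Monte Carlo step admits a simple illustration. The Python sketch below is a generic rank-constrained weight simulation, not ELICIT's own algorithm (which uses principal component analysis and variable interdependent analysis): it samples weight vectors uniformly from the simplex, sorts them to respect a strict ranking of five criteria, and summarizes each weight's distribution; the means come out close to the classical rank-order-centroid weights.

        import math
        import random
        import statistics

        def ordered_simplex_sample(n):
            # Dirichlet(1,...,1) sample (uniform on the simplex), sorted so
            # the weights respect the strict ranking criterion 1 > ... > n
            e = [-math.log(1.0 - random.random()) for _ in range(n)]
            total = sum(e)
            return sorted((x / total for x in e), reverse=True)

        samples = [ordered_simplex_sample(5) for _ in range(10000)]
        for i in range(5):
            ws = [s[i] for s in samples]
            print(f"criterion {i + 1}: mean = {statistics.mean(ws):.3f}, "
                  f"sd = {statistics.stdev(ws):.3f}")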

  11. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
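
    As a flavor of the approach, the following Python sketch (with invented parameters, not the paper's notebook) bootstraps a robotic serial dilution: each transfer volume carries an assumed systematic bias and random imprecision, and the replicates show how error compounds down the series.

        import random

        def simulate_dilution(c0=100.0, steps=8, factor=2.0,
                              vol_cv=0.02, vol_bias=0.01):
            # one bootstrap replicate of a serial dilution; each transfer is
            # perturbed by an assumed bias and random pipetting error
            conc, out = c0, []
            for _ in range(steps):
                eff = factor * (1 + vol_bias) * (1 + random.gauss(0, vol_cv))
                conc /= eff
                out.append(conc)
            return out

        reps = [simulate_dilution() for _ in range(1000)]
        last = [r[-1] for r in reps]
        print(min(last), max(last))  # spread grows toward the last step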

  12. Extrinsic cognitive load impairs low-level speech perception.

    PubMed

    Mattys, Sven L; Barden, Katharine; Samuel, Arthur G

    2014-06-01

    Recent research has suggested that the extrinsic cognitive load generated by performing a nonlinguistic visual task while perceiving speech increases listeners' reliance on lexical knowledge and decreases their capacity to perceive phonetic detail. In the present study, we asked whether this effect is accounted for better at a lexical or a sublexical level. The former would imply that cognitive load directly affects lexical activation but not perceptual sensitivity; the latter would imply that increased lexical reliance under cognitive load is only a secondary consequence of imprecise or incomplete phonetic encoding. Using the phoneme restoration paradigm, we showed that perceptual sensitivity decreases (i.e., phoneme restoration increases) almost linearly with the effort involved in the concurrent visual task. However, cognitive load had only a minimal effect on the contribution of lexical information to phoneme restoration. We concluded that the locus of extrinsic cognitive load on the speech system is perceptual rather than lexical. Mechanisms by which cognitive load increases tolerance to acoustic imprecision and broadens phonemic categories were discussed.

  13. A fuzzy-theory-based behavioral model for studying pedestrian evacuation from a single-exit room

    NASA Astrophysics Data System (ADS)

    Fu, Libi; Song, Weiguo; Lo, Siuming

    2016-08-01

    Many mass events in recent years have highlighted the importance of research on pedestrian evacuation dynamics. A number of models have been developed to analyze crowd behavior under evacuation situations. However, few focus on pedestrians' decision-making with respect to uncertainty, vagueness and imprecision. In this paper, a discrete evacuation model defined on the cellular space is proposed according to fuzzy theory, which is able to describe imprecise and subjective information. Pedestrians' perceived information and various characteristics are treated as fuzzy input. Fuzzy inference systems with rule bases, which resemble human reasoning, are then established to obtain fuzzy output that decides pedestrians' movement direction. This model is tested in two scenarios, namely a single-exit room with and without obstacles. Simulation results reproduce some classic dynamical phenomena discovered in real building evacuation situations, and are consistent with those in other models and experiments. It is hoped that this study will enrich the movement rules and approaches in traditional cellular automaton models for evacuation dynamics.
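
    The following toy Python sketch illustrates the fuzzify-infer-defuzzify pattern such a model relies on; the membership functions, rules and output values are invented for illustration and are not the paper's rule base:

        def tri(x, a, b, c):
            # triangular membership function peaking at b
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def desire_to_advance(dist_m, density):
            near = tri(dist_m, -1, 0, 5)           # memberships are assumptions
            far = tri(dist_m, 3, 10, 100)
            crowded = tri(density, 0.5, 1.0, 1.5)  # persons/cell, illustrative
            # Rule 1: near exit -> strong desire; Rule 2: far and crowded -> weak
            strong, weak = near, min(far, crowded)
            # weighted-average defuzzification over output levels 1.0 and 0.2
            total = strong + weak
            return (strong * 1.0 + weak * 0.2) / total if total else 0.5

        print(desire_to_advance(2.0, 1.2))  # closer to 1 -> move toward exit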

  14. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  15. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  16. Electronics for a highly segmented electromagnetic calorimeter prototype

    NASA Astrophysics Data System (ADS)

    Fehlker, D.; Alme, J.; van den Brink, A.; de Haas, A. P.; Nooren, G.-J.; Reicher, M.; Röhrich, D.; Rossewij, M.; Ullaland, K.; Yang, S.

    2013-03-01

    A prototype of a highly segmented electromagnetic calorimeter has been developed. The detector tower is made of 24 layers of PHASE2/MIMOSA23 silicon sensors sandwiched between tungsten plates, with 4 sensors per layer, a total of 96 MIMOSA sensors, resulting in 39 MPixels for the complete prototype detector tower. The paper focuses on the electronics of this calorimeter prototype. Two detector readout and control systems are used, each containing two Spartan 6 and one Virtex 6 FPGA, running embedded Linux, each system serving 12 detector layers. In 550 ms a total of 4 Gbytes of data is read from the detector, stored in memory on the electronics and then shipped to the DAQ system via Gigabit ethernet.

  17. An FPGA-based trigger for the phase II of the MEG experiment

    NASA Astrophysics Data System (ADS)

    Baldini, A.; Bemporad, C.; Cei, F.; Galli, L.; Grassi, M.; Morsani, F.; Nicolò, D.; Ritt, S.; Venturini, M.

    2016-07-01

    For phase II of MEG, we are going to develop a combined trigger and DAQ system. Here we focus on the trigger side, which performs on-line reconstruction of detector signals and event selection within 450 μs of event occurrence. Trigger concentrator boards (TCB) are under development to gather data from different crates, each connected to a set of detector channels, and to run higher-level algorithms that issue a trigger in the case of a candidate signal event. We describe the major features of the new system, in comparison with phase I, as well as its performance in terms of selection efficiency and background rejection.

  18. A UNIX SVR4-OS 9 distributed data acquisition for high energy physics

    NASA Astrophysics Data System (ADS)

    Drouhin, F.; Schwaller, B.; Fontaine, J. C.; Charles, F.; Pallares, A.; Huss, D.

    1998-08-01

    The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The UNIX system manages a list of OS9 front-end systems with a synchronisation protocol running over a TCP/IP layer.

  19. What Is Religiosity?

    ERIC Educational Resources Information Center

    Holdcroft, Barbara

    2006-01-01

    Religiosity is a complex concept and difficult to define for at least two reasons. The first reason is the uncertainty and imprecise nature of the English language. Colloquially, in "Roget's Thesaurus" (Lewis, 1978), religiosity is found to be synonymous with such terms as religiousness, orthodoxy, faith, belief, piousness, devotion, and holiness.…

  20. Cost effective, high performance transient recorder systems, utilizing the latest ADC's, S/H's, memories and PLA's; Final report, May 14, 1987--February 14, 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joerger, F. A.

    1990-02-01

    This project was to develop five transient recorder modules of various speeds and features. Four of the modules (the TR200 200 MHz recorder, the TR2/25 dual 25 MHz recorder, the TR1012 10 MHz 12-bit recorder, and the ADC3216 32-channel 16-bit recorder) were developed in the international CAMAC standard. The fifth unit, the VTR1 25 MHz recorder, was packaged in the VME standard. Three of the modules, Models TR200, TR2/25 and VTR1, are already in Phase 3. The ADC3216 has been prototyped and successfully evaluated by a number of customers. The last module, Model TR1012, has been completely designed and the artwork completed. This module will undergo tests shortly. 4 figs.

  1. MECDAS: A distributed data acquisition system for experiments at MAMI

    NASA Astrophysics Data System (ADS)

    Krygier, K. W.; Merle, K.

    1994-02-01

    For the coincidence experiments with the three spectrometer setup at MAMI an experiment control and data acquisition system has been built and was put successfully into final operation in 1992. MECDAS is designed as a distributed system using communication via Ethernet and optical links. As the front end, VME bus systems are used for real time purposes and direct hardware access via CAMAC, Fastbus or VMEbus. RISC workstations running UNIX are used for monitoring, data archiving and online and offline analysis of the experiment. MECDAS consists of several fixed programs and libraries, but large parts of readout and analysis can be configured by the user. Experiment specific configuration files are used to generate efficient and powerful code well adapted to special problems without additional programming. The experiment description is added to the raw collection of partially analyzed data to get self-descriptive data files.

  2. A modular multiple use system for precise time and frequency measurement and distribution

    NASA Technical Reports Server (NTRS)

    Reinhardt, V. S.; Adams, W. S.; Lee, G. M.; Bush, R. L.

    1978-01-01

    A modular CAMAC based system is described which was developed to meet a variety of precise time and frequency measurement and distribution needs. The system is based on a generalization of the dual mixer concept. By using a 16-channel 100 ns event clock, the system can intercompare the phases of 16 frequency standards with subpicosecond resolution. The system has a noise floor of 26 fs and a long term stability on the order of 1 ps or better. The system also uses a digitally controlled crystal oscillator in a control loop to provide an offsettable 5 MHz output with subpicosecond phase tracking capability. A detailed description of the system is given, including theory of operation and performance. A method to improve the performance of the dual mixer technique when phase balancing of the two input ports cannot be accomplished is also discussed.

  3. A Plasma Diagnostic Set for the Study of a Variable Specific Impulse Magnetoplasma Rocket

    NASA Astrophysics Data System (ADS)

    Squire, J. P.; Chang-Diaz, F. R.; Bengtson Bussell, R., Jr.; Jacobson, V. T.; Wootton, A. J.; Bering, E. A.; Jack, T.; Rabeau, A.

    1997-11-01

    The Advanced Space Propulsion Laboratory (ASPL) is developing a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) using an RF heated magnetic mirror operated asymmetrically. We will describe the initial set of plasma diagnostics and data acquisition system being developed and installed on the VASIMR experiment. A U.T. Austin team is installing two fast reciprocating probes: a quadruple Langmuir and a Mach probe. These measure electron density and temperature profiles, electrostatic plasma fluctuations, and plasma flow profiles. The University of Houston is developing an array of 20 highly directional Retarding Potential Analyzers (RPA) for measuring ion energy distribution function profiles in the rocket plume, giving a measurement of total thrust. We have also developed a CAMAC based data acquisition system using LabView running on a Power Macintosh communicating through a 2 MB/s serial highway. We will present data from initial plasma operations and discuss future diagnostic development.

  4. The School's Democratic Mission and Conflict Resolution: Voices of Swedish Educators

    ERIC Educational Resources Information Center

    Hakvoort, Ilse; Olsson, Elizabeth

    2014-01-01

    Swedish educational policy mandates have given schools a double mission: the development of content-based knowledge as well as the promotion of democratic values and competencies. While detailed learning outcomes are specified for content domains, the democratic mission is imprecisely described and unsupported by practical measures. This leaves…

  5. The Imprecise Art of Roommate Matching: Sorting out Pre-Med Majors, Unicorn Collectors, and "Fluff Chicks."

    ERIC Educational Resources Information Center

    Dodge, Susan

    1989-01-01

    Most large public institutions rely on computers to pair roommates, but officials at Ohio State make the matches after considering students' habits, hobbies, and academic interests. Students' relationships with their roommates frequently determine whether they are happy at the university. (MLW)

  6. (Non)native Speakered: Rethinking (Non)nativeness and Teacher Identity in TESOL Teacher Education

    ERIC Educational Resources Information Center

    Aneja, Geeta A.

    2016-01-01

    Despite its imprecision, the native-nonnative dichotomy has become the dominant paradigm for examining language teacher identity development. The nonnative English speaking teacher (NNEST) movement in particular has considered the impact of deficit framings of nonnativeness on "NNEST" preservice teachers. Although these efforts have…

  7. Automated Water Chemistry Control at University of Virginia Pools.

    ERIC Educational Resources Information Center

    Krone, Dan

    1997-01-01

    Describes the technologically advanced aquatic and fitness center at the University of Virginia. Discusses the imprecise water chemistry control at the former facility and its intensive monitoring requirements. Details the new chemistry control standards initiated in the new center, which ensure constant chlorine and pH levels. (RJM)

  8. Collinearity in Least-Squares Analysis

    ERIC Educational Resources Information Center

    de Levie, Robert

    2012-01-01

    How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

  9. Be a Cage-Buster

    ERIC Educational Resources Information Center

    Hess, Frederick M.

    2013-01-01

    "A cage-buster can't settle for ambiguity, banalities, or imprecision," writes well-known educator and author Rick Hess. "These things provide dark corners where all manners of ineptitude and excuse-making can hide." Hess suggests that leaders need to clearly define the problems they're trying to solve and open…

  10. Applying An Aptitude-Treatment Interaction Approach to Competency Based Teacher Education.

    ERIC Educational Resources Information Center

    McNergney, Robert

    Aptitude treatment interaction (ATI), as applied to education, measures the interaction of personality factors and experimentally manipulated teaching strategies. ATI research has had disappointingly inconclusive results so far, but proponents argue that this has been due to imprecise methods, which can be rectified. They believe that…

  11. Excellence and Education: Rhetoric and Reality

    ERIC Educational Resources Information Center

    Gillies, Donald

    2007-01-01

    "Excellence" has been a prevalent term in New Labour rhetoric on education, most notably in the stated goal of "excellence for all" in education. Despite that, the meaning of the term has remained imprecise, and the implications of universal excellence unclear. In this paper, three distinct definitions of excellence are…

  12. A Case for the Use of Conceptual Analysis in Science Education Research

    ERIC Educational Resources Information Center

    Kahn, Sami; Zeidler, Dana L.

    2017-01-01

    Imprecise constructs abound in science education research in part due to reliance on stipulative definitions that neglect fine distinctions between closely related constructs and overlook important meanings and hidden values embedded in language. Lack of conceptual clarity threatens construct validity, hampers theory development, and prevents…

  13. SCRAP TIRE RECYCLING: CONVINCING BUSINESSES TO INTEGRATE INEXPENSIVE, CUTTING-EDGE TECHNOLOGY TO CONVERT TIRES INTO VARIOUS CONSTRUCTION MATERIALS

    EPA Science Inventory

    Scrap tires cause serious environmental pollution and health problems. Although worldwide figures are imprecise, it is known that one-fourth of the 283 million tires scrapped in the United States were landfilled last year. Hundreds of millions more tires ar...

  14. On the Desirability of an Interpretive Science of Organizational Communication.

    ERIC Educational Resources Information Center

    Tompkins, Phillip K.

    Concerned with imprecision in researchers' use of the word, "interpretive," this report draws from the work of Max Weber to describe the characteristics of an interpretive science of organizational communication and then briefly lists some advantages of following the interpretive approach. First examining the role of subjective meaning…

  15. Remote sensing from the desktop up, a students's personal stairway to space (Invited)

    NASA Astrophysics Data System (ADS)

    Church, W.

    2013-12-01

    Doing science with real-time quantitative experiments is becoming more and more affordable and accessible. Because lab equipment is more affordable and accessible, many universities are using lab class models wherein students conduct their experiments in informal settings such as the dorm, outside, or other places throughout the campus. Students do real-time measurement homework outside of class. The hope is that liberating experiments from dedicated facilities gives students more opportunities for experimental science. The challenge is support. In lab settings, instructors and peers can help students if they have trouble with the steps of assembling their experimental set-up, configuring the data acquisition software, conducting the real-time measurement and doing the analysis. Students working on their own in a dorm do not benefit from this support. Furthermore, when students are given the open-ended experimental task of designing their own measurement system, they may need more guidance. In this poster presentation, I will articulate a triangle model to support students through the task of finding the necessary resources to design and build a mission to space. In the triangle model, students have access to base-layer concept and skill resources to help them build their experiment. They then have access to middle-layer mini-experiments to help them configure and test their experimental set-up. Finally, they have a motivating real-time experiment. As an example of this type of resource used in practice, I will present a balloon science remote sensing project as a stand-in for a balloon mission to 100,000 feet. I will use an Arduino-based DAQ system and XBee modules for wireless data transmission to a LabVIEW front panel. I will attach the DAQ to a tethered balloon to conduct a real-time microclimate experiment in the Moscone Center. Expanded microclimate studies can be the capstone project or can be a stepping-stone to space wherein students prepare a sensor package for a weather balloon launch to 100,000 feet.

  16. Development of Search-Coil Magnetometer for Ultra Low Frequency (ULF) Wave Observations at Jang Bogo Station in Antarctica

    NASA Astrophysics Data System (ADS)

    Lee, J. K.; Shin, J.; Kim, K. H.; Jin, H.; Kim, H.; Kwon, J.; Lee, S.; Jee, G.; Lessard, M.

    2016-12-01

    A ground-based bi-axial search-coil magnetometer (SCM) has been developed for observation of time-varying magnetic fields (dB/dt) in the Ultra Low Frequency (ULF) range (a few mHz up to 5 Hz) to understand magnetosphere-ionosphere coupling processes. The SCM consists of magnetic sensors, analog electronics, cables and a data acquisition system (DAQ). The bi-axial magnetic sensor has coils of wire wound around mu-metal cores, each of which measures magnetic field pulsations in one of the horizontal components, geomagnetic north-south and east-west, respectively. The analog electronics is designed to control the cut-off frequency of the instrument and to amplify detected signals. The DAQ has a 16-bit analog-to-digital converter (ADC) sampling at the user-defined rate of 10 Hz. It is also equipped with the Global Positioning System (GPS) and Network Time Protocol (NTP) for time synchronization and accuracy. We have carried out in-lab performance tests (e.g., frequency response, noise level, etc.) using a magnetically shielded case and a field test in a magnetically quiet location in South Korea. During the field test, a ULF Pi 2 event was observed clearly. We also confirmed that it was a substorm activity from fluxgate magnetometer data at Mineyama (35°57.3'N, 135°05'E, geographic). The SCM will be installed and operated at Jang Bogo Antarctic Research Station (74°37.4'S, 164°13.7'E, geographic) in Dec. 2016. The geomagnetic latitude of the station is similar to that of the US McMurdo station (77°51'S, 166°40'E, geographic), both of which are typically near the cusp region. Thus, we expect that the SCM can provide useful information to understand ULF wave propagation characteristics.

  17. Development of new data acquisition system for COMPASS experiment

    NASA Astrophysics Data System (ADS)

    Bodlak, M.; Frolov, V.; Jary, V.; Huber, S.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Virius, M.

    2016-04-01

    This paper presents the development and recent status of the new data acquisition system of the COMPASS experiment at CERN, with up to a 50 kHz trigger rate and a 36 kB average event size during a 10 second period with beam followed by an approximately 40 second period without beam. In the original DAQ, the event building is performed by software deployed on a switched computer network, and the data readout is based on deprecated PCI technology; the new system replaces the event building network with custom FPGA-based hardware. The custom cards are introduced and the advantages of FPGA technology for DAQ-related tasks are discussed. In this paper, we focus on the software part, which is mainly responsible for control and monitoring. Most of the system runs as slow control; only the readout process has real-time requirements. The design of the software is built on state machines that are implemented using the Qt framework; communication between the remote nodes that form the software architecture is based on the DIM library and IPBus technology. Furthermore, the PHP and JS languages are used to maintain the system configuration; the MySQL database was selected as storage for both the configuration of the system and system messages. The system has been designed for a maximum throughput of 1500 MB/s, with a large buffering capability used to spread the load on readout computers over a longer period of time. Great emphasis is put on data latency, data consistency, and timing checks, which are performed at each stage of event assembly. The system collects the results of these checks, which, together with a special data format, allow the software to localize the origin of problems in the data transmission process. A prototype version of the system has already been developed and tested; it fulfills all given requirements. It is expected that the full-scale version of the system will be finalized in June 2014 and deployed in September, provided that tests with a cosmic run succeed.
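
    The state-machine pattern at the core of the control software can be sketched compactly. The real system uses Qt state machines with DIM messaging; the standalone Python analogue below, with invented states and commands, only shows the guarded-transition idea:

        TRANSITIONS = {
            ("idle", "configure"): "configured",
            ("configured", "start"): "running",
            ("running", "stop"): "configured",
            ("configured", "reset"): "idle",
        }

        class DaqNode:
            """Toy control node: commands trigger guarded state transitions."""
            def __init__(self):
                self.state = "idle"

            def handle(self, command):
                nxt = TRANSITIONS.get((self.state, command))
                if nxt is None:
                    raise RuntimeError(
                        f"'{command}' not allowed in state '{self.state}'")
                self.state = nxt

        node = DaqNode()
        for cmd in ("configure", "start", "stop"):
            node.handle(cmd)
            print(node.state)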

  18. MANTA--an open-source, high density electrophysiology recording suite for MATLAB.

    PubMed

    Englitz, B; David, S V; Sorenson, M D; Shamma, S A

    2013-01-01

    The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.

  19. Combined Cycle Engine Large-Scale Inlet for Mode Transition Experiments: System Identification Rack Hardware Design

    NASA Technical Reports Server (NTRS)

    Thomas, Randy; Stueber, Thomas J.

    2013-01-01

    The System Identification (SysID) Rack is a real-time hardware-in-the-loop data acquisition (DAQ) and control instrument rack that was designed and built to support inlet testing in the NASA Glenn Research Center 10- by 10-Foot Supersonic Wind Tunnel. This instrument rack is used to support experiments on the Combined-Cycle Engine Large-Scale Inlet for Mode Transition Experiment (CCE-LIMX). The CCE-LIMX is a testbed for an integrated dual flow-path inlet configuration with the two flow paths in an over-and-under arrangement such that the high-speed flow path is located below the low-speed flow path. The CCE-LIMX includes multiple actuators that are designed to redirect airflow from one flow path to the other; this action is referred to as "inlet mode transition." Multiple phases of experiments have been planned to support research that investigates inlet mode transition: inlet characterization (Phase-1) and system identification (Phase-2). The SysID Rack hardware design met the following requirements to support Phase-1 and Phase-2 experiments: safely and effectively move multiple actuators individually or synchronously; sample and save effector control and position sensor feedback signals; automate control of actuator positioning based on a mode transition schedule; sample and save pressure sensor signals; and perform DAQ and control processes operating at 2.5 kHz. This document describes the hardware components used to build the SysID Rack, including their function, specifications, and system interface. Furthermore, this document provides a SysID Rack effector signal list (signal flow); the system identification experiment setup; illustrations of a typical SysID Rack experiment; and a SysID Rack performance overview for Phase-1 and Phase-2 experiments. The SysID Rack described in this document was a useful tool for meeting the project objectives.

  20. The Austrian radiation monitoring network ARAD - best practice and added value

    NASA Astrophysics Data System (ADS)

    Olefs, Marc; Baumgartner, Dietmar; Obleitner, Friedrich; Bichler, Christoph; Foelsche, Ulrich; Pietsch, Helga; Rieder, Harald; Weihs, Philipp; Geyer, Florian; Haiden, Thomas; Schöner, Wolfgang

    2016-04-01

    The Austrian RADiation monitoring network (ARAD) has been established to advance the national climate monitoring and to support satellite retrieval, atmospheric modelling and solar energy techniques development. Measurements cover the downwelling solar and thermal infrared radiation using instruments according to Baseline Surface Radiation Network (BSRN) standards. A unique feature of ARAD is its vertical dimension of five stations, covering an air column between about 200 m a.s.l. (Vienna) and 3100 m a.s.l. (BSRN site Sonnblick). The contribution outlines the aims and scopes of ARAD, its measurement and calibration standards, methods, strategies and station locations. ARAD network operation uses innovative data processing for quality assurance and quality control, applying manual and automated control algorithms. A combined uncertainty estimate for the broadband shortwave radiation fluxes at all five ARAD stations indicates that accuracies range from 1.5 to 23 %. If a directional response error of the pyranometers and the temperature response of the instruments and the data acquisition system (DAQ) is corrected, this expanded uncertainty reduces to 1.4 to 5.2 %. Thus, for large signals (global: 1000 W m-2, diffuse: 500 W m-2) BSRN target accuracies are met or closely met for 70 % of valid measurements at the ARAD stations after this correction. For small signals (50 W m-2), the targets are not achieved as a result of uncertainties associated with the DAQ or the instrument sensitivities. Additional accuracy gains can be achieved in future by additional measurements and corrections. However, for the measurement of direct solar radiation improved instrument accuracy is needed. ARAD could serve as a powerful example for establishing state-of-the-art radiation monitoring at the national level with a multiple-purpose approach. Instrumentation, guidelines and tools (such as the data quality control) developed within ARAD are best practices which could be adopted in other regions, thus saving high development costs.

  1. The Austrian radiation monitoring network ARAD - best practice and added value

    NASA Astrophysics Data System (ADS)

    Olefs, M.; Baumgartner, D. J.; Obleitner, F.; Bichler, C.; Foelsche, U.; Pietsch, H.; Rieder, H. E.; Weihs, P.; Geyer, F.; Haiden, T.; Schöner, W.

    2015-10-01

    The Austrian RADiation monitoring network (ARAD) has been established to advance the national climate monitoring and to support satellite retrieval, atmospheric modelling and solar energy techniques development. Measurements cover the downwelling solar and thermal infrared radiation using instruments according to Baseline Surface Radiation Network (BSRN) standards. A unique feature of ARAD is its vertical dimension of five stations, covering an air column between about 200 m a.s.l. (Vienna) and 3100 m a.s.l. (BSRN site Sonnblick). The paper outlines the aims and scopes of ARAD, its measurement and calibration standards, methods, strategies and station locations. ARAD network operation uses innovative data processing for quality assurance and quality control, applying manual and automated control algorithms. A combined uncertainty estimate for the broadband shortwave radiation fluxes at all five ARAD stations indicates that accuracies range from 1.5 to 23 %. If a directional response error of the pyranometers and the temperature response of the instruments and the data acquisition system (DAQ) is corrected, this expanded uncertainty reduces to 1.4 to 5.2 %. Thus, for large signals (global: 1000 W m-2, diffuse: 500 W m-2) BSRN target accuracies are met or closely met for 70 % of valid measurements at the ARAD stations after this correction. For small signals (50 W m-2), the targets are not achieved as a result of uncertainties associated with the DAQ or the instrument sensitivities. Additional accuracy gains can be achieved in future by additional measurements and corrections. However, for the measurement of direct solar radiation improved instrument accuracy is needed. ARAD could serve as a powerful example for establishing state-of-the-art radiation monitoring at the national level with a multiple-purpose approach. Instrumentation, guidelines and tools (such as the data quality control) developed within ARAD are best practices which could be adopted in other regions, thus saving high development costs.
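
    The combined and expanded uncertainties quoted above follow the usual root-sum-square (GUM-style) combination of independent components with a coverage factor. The sketch below illustrates that arithmetic in Python; the component values are invented placeholders, not ARAD's actual uncertainty budget.

        # GUM-style combined/expanded uncertainty; component values are assumed.
        import math

        def expanded_uncertainty(components, k=2.0):
            """Root-sum-square of independent standard uncertainties,
            expanded with coverage factor k (k = 2 is roughly 95 %)."""
            u_c = math.sqrt(sum(u * u for u in components))
            return k * u_c

        # Relative standard uncertainties (fractions of reading), e.g. for
        # calibration, directional response and DAQ/temperature effects:
        components = [0.008, 0.010, 0.005]
        print(f"expanded uncertainty: {100 * expanded_uncertainty(components):.1f} % of reading")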

  2. The DAQ needle in the big-data haystack

    NASA Astrophysics Data System (ADS)

    Meschi, E.

    2015-12-01

    In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned the monolithic architectures of the nineties in favor of a distributed approach, leveraging the appearance of high-speed switched networks developed for digital telecommunication and the internet, and the corresponding increase of memory bandwidth available in off-the-shelf consumer equipment. This led to a generation of experiments where custom electronics triggers, analysing coarser-granularity “fast” data, are confined to the first phase of selection, where predictable latency and real-time processing for a modest initial rate reduction are “a necessary evil”. Ever more sophisticated algorithms are projected for use in HL-LHC upgrades, using tracker data in the low-level selection in high-multiplicity environments, and requiring extremely complex data interconnects. These systems quickly become obsolete and inflexible but must nonetheless survive and be maintained across the extremely long life span of current detectors. New high-bandwidth bidirectional links could make high-speed, low-power full readout at the crossing rate a possibility already in the next decade. At the same time, massively parallel and distributed analysis of unstructured data produced by loosely connected, “intelligent” sources has become ubiquitous in commercial applications, while the mass of persistent data produced by e.g. the LHC experiments has made multiple-pass, systematic, end-to-end offline processing increasingly burdensome. A possible evolution of DAQ and trigger architectures could lead to detectors with extremely deep asynchronous or even virtual pipelines, where data streams from the various detector channels are analysed and indexed in situ in quasi-real time using intelligent, pattern-driven data organization, and the final selection is operated as a distributed “search for interesting event parts”. A holistic approach is required to study the potential impact of these different developments on the design of detector readout, trigger and data acquisition systems in the next decades.

  3. MANTA—an open-source, high density electrophysiology recording suite for MATLAB

    PubMed Central

    Englitz, B.; David, S. V.; Sorenson, M. D.; Shamma, S. A.

    2013-01-01

    The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point. PMID:23653593

  4. FPGA-based voltage and current dual drive system for high frame rate electrical impedance tomography.

    PubMed

    Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan

    2015-04-01

    Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process and requires rigorous testing to ensure proper functioning of new modules with the existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to interchangeably apply a current or voltage signal and measure the tissue response in a semi-parallel fashion. A novel signal-averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute a tissue's response to a pure or mixed tone signal. Signal bin data can be used for traditional applications as well as for synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality and can be used to ensure proper tissue-electrode contact. Allocation of these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high frame rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames/s, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error remained low for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline-filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data.
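
    The signal/noise bin bookkeeping described above is easy to illustrate offline. The following Python sketch FFTs one sampled frame, treats the bin at the known stimulus tone as "signal" and all remaining bins as "noise", and reports the noise power used as a contact-quality metric; the tone placement is an assumption chosen to match the 512-point FFT and 100 kHz figures quoted above, not the published FPGA configuration.

        # Offline illustration of signal/noise FFT-bin classification.
        import numpy as np

        fs = 100_000                 # sampling rate (Hz)
        n = 512                      # FFT length
        tone_bin = 50                # put the stimulus exactly on a bin
        tone_hz = tone_bin * fs / n  # ~9.77 kHz, illustrative

        t = np.arange(n) / fs
        frame = np.sin(2 * np.pi * tone_hz * t) + 0.01 * np.random.randn(n)

        power = np.abs(np.fft.rfft(frame)) ** 2 / n
        signal_bins = {tone_bin}
        noise_bins = [k for k in range(1, len(power)) if k not in signal_bins]

        signal_power = sum(power[k] for k in signal_bins)
        noise_power = power[noise_bins].sum()   # contact-quality metric
        print(f"SNR ~ {10 * np.log10(signal_power / noise_power):.1f} dB")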

  5. WITHDRAWN: Transcutaneous electrical stimulation (TES) for treatment of constipation in children.

    PubMed

    Ng, Ruey Terng; Lee, Way Seah; Ang, Hak Lee; Teo, Kai Ming; Yik, Yee Ian; Lai, Nai Ming

    2016-10-12

    Childhood constipation is a common problem with substantial health, economic and emotional burdens. Existing therapeutic options, mainly pharmacological, are not consistently effective, and some are associated with adverse effects after prolonged use. Transcutaneous electrical stimulation (TES), a non-pharmacological approach, is postulated to facilitate bowel movement by modulating the nerves of the large bowel via the application of electrical current transmitted through the abdominal wall. Our main objective was to evaluate the effectiveness and safety of TES when employed to improve bowel function and constipation-related symptoms in children with constipation. We searched MEDLINE (PubMed) (1950 to July 2015), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, Issue 7, 2015), EMBASE (1980 to July 2015), the Cochrane IBD Group Specialized Register, trial registries and conference proceedings to identify applicable studies. Randomized controlled trials that assessed any type of TES, administered at home or in a clinical setting, compared to no treatment, a sham TES, other forms of nerve stimulation or any other pharmaceutical or non-pharmaceutical measures used to treat constipation in children were considered for inclusion. Two authors independently assessed studies for inclusion, extracted data and assessed risk of bias of the included studies. We calculated the risk ratio (RR) and corresponding 95% confidence interval (CI) for categorical outcomes data and the mean difference (MD) and corresponding 95% CI for continuous outcomes. One study from Australia including 46 children aged 8 to 18 years was eligible for inclusion. There were multiple reports identified, including one unpublished report, that focused on different outcomes of the same study. The study had unclear risk of selection bias, high risks of performance, detection and attrition biases, and low risks of reporting biases. There were no significant differences between TES and the sham control group for the following outcomes: (i) number of children with > 3 complete spontaneous bowel movements (CSBM) per week (RR 1.07, 95% CI 0.74 to 1.53; one study, 42 participants) (quality of evidence: very low, due to high risk of bias and serious imprecision); (ii) number of children with improved colonic transit assessed radiologically (RR 5.00, 95% CI 0.79 to 31.63; one study, 21 participants) (quality of evidence: very low, due to high risk of bias, serious imprecision and indirectness of the outcome). However, mean colonic transit rate, measured as the position of the geometric centre of the radioactive substance ingested along the intestinal tract, was significantly higher in children who received TES compared to sham (MD 1.05, 95% CI 0.36 to 1.74; one study, 30 participants) (quality of evidence: very low, due to high risk of bias, serious imprecision and indirectness of the outcome). There was no significant difference between the two groups in the number of children with improved soiling-related symptoms (RR 2.08, 95% CI 0.86 to 5.00; one study, 25 participants) (quality of evidence: very low, due to high risk of bias and serious imprecision). There was no significant difference in the number of children with improved quality of life (QoL) (RR 4.00, 95% CI 0.56 to 28.40; one study, 16 participants) (quality of evidence: very low, due to high risk of bias and serious imprecision). There were also no significant differences in self-perceived (MD 5.00, 95% CI -1.21 to 11.21) or parent-perceived QoL (MD -0.20, 95% CI -7.57 to 7.17; one study, 33 participants for both outcomes) (quality of evidence for both outcomes: very low, due to high risk of bias and serious imprecision). No adverse effects were reported in the included study. The results for the outcomes assessed in this review are uncertain; thus no firm conclusions regarding the efficacy and safety of TES in children with chronic constipation can be drawn. Further randomized controlled trials assessing TES for the management of childhood constipation should be conducted. Future trials should include clear documentation of methodologies, especially measures to evaluate the effectiveness of blinding, and incorporate patient-important outcomes such as the number of patients with improved CSBM, improved clinical symptoms and quality of life.
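
    For readers unfamiliar with the statistics reported above, a risk ratio and its 95 % confidence interval come from a 2x2 table via the standard error of log(RR). The sketch below shows the arithmetic; the counts are invented for illustration and are not the trial's data.

        # Risk ratio with 95% CI (Katz log method); counts are hypothetical.
        import math

        def risk_ratio_ci(events1, n1, events2, n2, z=1.96):
            rr = (events1 / n1) / (events2 / n2)
            se = math.sqrt(1 / events1 - 1 / n1 + 1 / events2 - 1 / n2)
            lo = math.exp(math.log(rr) - z * se)
            hi = math.exp(math.log(rr) + z * se)
            return rr, lo, hi

        # e.g. 15/21 responders with TES vs 14/21 with sham (made-up counts)
        rr, lo, hi = risk_ratio_ci(15, 21, 14, 21)
        print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")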

  6. Transcutaneous electrical stimulation (TES) for treatment of constipation in children.

    PubMed

    Ng, Ruey Terng; Lee, Way Seah; Ang, Hak Lee; Teo, Kai Ming; Yik, Yee Ian; Lai, Nai Ming

    2016-07-05

    Childhood constipation is a common problem with substantial health, economic and emotional burdens. Existing therapeutic options, mainly pharmacological, are not consistently effective, and some are associated with adverse effects after prolonged use. Transcutaneous electrical stimulation (TES), a non-pharmacological approach, is postulated to facilitate bowel movement by modulating the nerves of the large bowel via the application of electrical current transmitted through the abdominal wall. Our main objective was to evaluate the effectiveness and safety of TES when employed to improve bowel function and constipation-related symptoms in children with constipation. We searched MEDLINE (PubMed) (1950 to July 2015), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, Issue 7, 2015), EMBASE (1980 to July 2015), the Cochrane IBD Group Specialized Register, trial registries and conference proceedings to identify applicable studies. Randomized controlled trials that assessed any type of TES, administered at home or in a clinical setting, compared to no treatment, a sham TES, other forms of nerve stimulation or any other pharmaceutical or non-pharmaceutical measures used to treat constipation in children were considered for inclusion. Two authors independently assessed studies for inclusion, extracted data and assessed risk of bias of the included studies. We calculated the risk ratio (RR) and corresponding 95% confidence interval (CI) for categorical outcomes data and the mean difference (MD) and corresponding 95% CI for continuous outcomes. One study from Australia including 46 children aged 8 to 18 years was eligible for inclusion. There were multiple reports identified, including one unpublished report, that focused on different outcomes of the same study. The study had unclear risk of selection bias, high risks of performance, detection and attrition biases, and low risks of reporting biases. There were no significant differences between TES and the sham control group for the following outcomes: (i) number of children with > 3 complete spontaneous bowel movements (CSBM) per week (RR 1.07, 95% CI 0.74 to 1.53; one study, 42 participants) (quality of evidence: very low, due to high risk of bias and serious imprecision); (ii) number of children with improved colonic transit assessed radiologically (RR 5.00, 95% CI 0.79 to 31.63; one study, 21 participants) (quality of evidence: very low, due to high risk of bias, serious imprecision and indirectness of the outcome). However, mean colonic transit rate, measured as the position of the geometric centre of the radioactive substance ingested along the intestinal tract, was significantly higher in children who received TES compared to sham (MD 1.05, 95% CI 0.36 to 1.74; one study, 30 participants) (quality of evidence: very low, due to high risk of bias, serious imprecision and indirectness of the outcome). There was no significant difference between the two groups in the number of children with improved soiling-related symptoms (RR 2.08, 95% CI 0.86 to 5.00; one study, 25 participants) (quality of evidence: very low, due to high risk of bias and serious imprecision). There was no significant difference in the number of children with improved quality of life (QoL) (RR 4.00, 95% CI 0.56 to 28.40; one study, 16 participants) (quality of evidence: very low, due to high risk of bias and serious imprecision). There were also no significant differences in self-perceived (MD 5.00, 95% CI -1.21 to 11.21) or parent-perceived QoL (MD -0.20, 95% CI -7.57 to 7.17; one study, 33 participants for both outcomes) (quality of evidence for both outcomes: very low, due to high risk of bias and serious imprecision). No adverse effects were reported in the included study. The very low-quality evidence gathered in this review does not suggest that TES provides a benefit for children with chronic constipation. Further randomized controlled trials assessing TES for the management of childhood constipation should be conducted. Future trials should include clear documentation of methodologies, especially measures to evaluate the effectiveness of blinding, and incorporate patient-important outcomes such as the number of patients with improved CSBM, improved clinical symptoms and quality of life.

  7. Design, modeling, simulation and evaluation of a distributed energy system

    NASA Astrophysics Data System (ADS)

    Cultura, Ambrosio B., II

    This dissertation presents the design, modeling, simulation and evaluation of distributed energy resources (DER) consisting of photovoltaics (PV), wind turbines, batteries, a PEM fuel cell and supercapacitors. The distributed energy resources installed at UMass Lowell before 2000 consist of the following: 2.5 kW of PV, 44 kWh of lead-acid batteries, and 1500 W, 500 W and 300 W wind turbines. Recently added to these are: a 10.56 kW PV array, a 2.4 kW wind turbine, 29 kWh of lead-acid batteries, a 1.2 kW PEM fuel cell and four 140 F supercapacitors. Each newly added energy resource was designed, modeled, simulated and evaluated before its integration into the existing PV/wind grid-connected system. The mathematical and Simulink models of each system were derived and validated by comparing the simulated and experimental results. The simulated results of energy generated from the 10.56 kW PV system are in good agreement with the experimental results. A detailed electrical model of a 2.4 kW wind turbine system equipped with a permanent magnet generator, diode rectifier, boost converter and inverter is presented. The analysis of the results demonstrates the effectiveness of the constructed Simulink model, which can be used to predict the performance of the wind turbine. It was observed that a PEM fuel cell has a very fast response to load changes. Moreover, the model has validated the actual operation of the PEM fuel cell, showing that the simulated results in Matlab Simulink are consistent with the experimental results. The equivalent mathematical equation, derived from an electrical model of the supercapacitor, is used to simulate its voltage response. The model is fully capable of simulating the supercapacitor's voltage behavior and can predict the charge and discharge times of its voltage. A bi-directional dc-dc converter was designed in order to connect the 48 V battery bank storage to the 24 V battery bank storage; this connection was needed in order to increase the reliability of the DER system. Furthermore, a new computer-based data acquisition (DAQ) system for the DER has been designed and installed. The DAQ system is an important component in PC-based measurement, used to acquire and store data. The design and installation of signal conditioning improve the accuracy, effectiveness and safety of measurements, thanks to capabilities such as amplification, isolation and filtering. A LabVIEW program was the software used to interface and communicate between the DAQ devices and the personal computer. The overall Simulink model of the DER system is presented in the last chapter; the discussion explains the operation of a grid-connected DER. This model can be used as a basis and future reference for designs and installations of DER projects. It can also be used in converting the DER grid-connected system into a smart grid system, which will be the next potential research work to explore.

  8. The Tunka-Grande experiment

    NASA Astrophysics Data System (ADS)

    Monkhoev, R. D.; Budnev, N. M.; Chiavassa, A.; Dyachok, A. N.; Gafarov, A. R.; Gress, O. A.; Gress, T. I.; Grishin, O. G.; Ivanova, A. L.; Kalmykov, N. N.; Kazarina, Yu. A.; Korosteleva, E. E.; Kozhin, V. A.; Kuzmichev, L. A.; Lenok, V. V.; Lubsandorzhiev, B. K.; Lubsandorzhiev, N. B.; Mirgazov, R. R.; Mirzoyan, R.; Osipova, E. A.; Pakhorukov, A. L.; Panasyuk, M. I.; Pankov, L. V.; Poleschuk, V. A.; Popova, E. G.; Postnikov, E. B.; Prosin, V. V.; Ptuskin, V. S.; Pushnin, A. A.; Samoliga, V. S.; Semeney, Y. A.; Sveshnikova, L. G.; Silaev, A. A.; Silaev, A. A., Jr.; Skurikhin, A. V.; Sulakov, V. P.; Tabolenko, V. A.; Voronin, D. M.; Fedorov, O. L.; Spiering, C.; Zagorodnikov, A. V.; Zhurov, D. P.; Zurbanov, V. L.

    2017-06-01

    The investigation of the energy spectrum and mass composition of primary cosmic rays in the energy range 10^16-10^18 eV and the search for diffuse cosmic gamma rays are of great interest for understanding the mechanisms and nature of high-energy particle sources, a problem of great importance in modern astrophysics. The Tunka-Grande scintillator array is part of the experimental complex TAIGA (Tunka Advanced Instrument for Cosmic Ray and Gamma Astronomy), which is located in the Tunka Valley, about 50 km from Lake Baikal. The purpose of this array is the study of diffuse gamma rays and cosmic rays of ultra-high energies by detecting extensive air showers. We describe the design and specifications of the read-out, data acquisition (DAQ) and control systems of the array.

  9. Automated Liquid-Level Control of a Nutrient Reservoir for a Hydroponic System

    NASA Technical Reports Server (NTRS)

    Smith, Boris; Asumadu, Johnson A.; Dogan, Numan S.

    1997-01-01

    A microprocessor-based system for controlling the liquid level of a nutrient reservoir for a plant hydroponic growing system has been developed. The system uses an ultrasonic transducer to sense the liquid level or height. A National Instruments Multifunction Analog and Digital Input/Output PC Kit, including the NI-DAQ DOS/Windows driver software, runs on an IBM 486 personal computer, and a LabVIEW Full Development System for Windows is used as the graphical programming environment. The system allows liquid-level control to within 0.1 cm for all levels tried between 8 and 36 cm in the hydroponic application. The detailed algorithms have been developed, and a fully automated, microprocessor-based nutrient replenishment system is described for this hydroponic system.
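
    The control task described above amounts to holding a measured level at a set point with a small deadband. The Python sketch below stubs out the sensor and pump I/O to show the loop structure; the set point and thresholds are illustrative assumptions, with only the ~0.1 cm resolution taken from the record above.

        # Deadband level-control loop with stubbed hardware I/O (illustrative).
        import random

        SET_POINT_CM = 20.0
        DEADBAND_CM = 0.1   # matches the ~0.1 cm control quoted above

        def read_level_cm():
            # Stand-in for the ultrasonic height measurement.
            return SET_POINT_CM + random.uniform(-0.3, 0.3)

        def set_pump(on):
            # Stand-in for the digital output driving the nutrient pump.
            print("pump", "ON" if on else "OFF")

        for _ in range(5):
            set_pump(read_level_cm() < SET_POINT_CM - DEADBAND_CM)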

  10. Ethernet based data logger for gaseous detectors

    NASA Astrophysics Data System (ADS)

    Swain, S.; Sahu, P. K.; Sahu, S. K.

    2018-05-01

    A data logger has been designed to monitor and record ambient parameters such as temperature, pressure and relative humidity, along with gas flow rate, as a function of time. These parameters are required for understanding the characteristics of gas-filled detectors such as the Gas Electron Multiplier (GEM) and the Multi-Wire Proportional Counter (MWPC). The data logger uses several microcontrollers and has been interfaced to an Ethernet port, with a local LCD unit for displaying all measured parameters. In this article, we explain the design of the data logger, describe the hardware and software of the master microcontroller and the DAQ system, and present the LabVIEW client program used for the interface. We have operated this device with a GEM detector and show a few preliminary results as a function of the above parameters.

  11. A Unix SVR-4-OS9 distributed data acquisition for high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouhin, F.; Schwaller, B.; Fontaine, J.C.

    1998-08-01

    The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX-compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high-speed network. The UNIX system manages a list of OS9 front-end systems with a synchronization protocol running over a TCP/IP layer.

  12. The NOνA DAQ system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zalesak, Jaroslav; et al.

    2014-01-01

    The NOνA experiment is a long-baseline neutrino experiment designed to make measurements to determine the neutrino mass hierarchy, neutrino mixing parameters and CP violation in the neutrino sector. In order to make these measurements the NOνA collaboration has designed a highly distributed, synchronized, continuous digitization and readout system that is able to acquire and correlate data from the Fermilab accelerator complex (NuMI), the NOνA near detector at the Fermilab site and the NOνA far detector, which is located 810 km away at Ash River, MN. This system has unique properties that let it fully exploit the physics capabilities of the NOνA detector. The design of the NOνA DAQ system and its capabilities are discussed in this paper.

  13. The new Wide-band Solar Neutrino Trigger for Super-Kamiokande

    NASA Astrophysics Data System (ADS)

    Carminati, Giada

    Super-Kamiokande observes low-energy electrons induced by the elastic scattering of 8B solar neutrinos. The transition region between vacuum and matter oscillations, with neutrino energy near 3 MeV, is still partially unexplored by any detector. Super-Kamiokande can study this intermediate regime by adding a new software trigger. The Wide-band Intelligent Trigger (WIT) has been developed to simultaneously trigger on and reconstruct very low energy electrons (above 2.49 MeV kinetic energy) with an efficiency close to 100%. The WIT system, comprising 256 hyperthreaded CPU cores and one 10-Gigabit Ethernet network switch, has recently been installed and integrated into the online DAQ system of Super-Kamiokande, and the complete system is currently at an advanced stage of online data testing.

  14. Modeling Choice Under Uncertainty in Military Systems Analysis

    DTIC Science & Technology

    1991-11-01

    operators rather than fuzzy operators. This is suggested for further research. In the Analytic Hierarchical Process (AHP), objectives, functions and... Contents excerpt: 4.1 Imprecisely Specified Multiple Attribute Utility Theory; 4.2 Fuzzy Decision Analysis; 4.3 Analytic Hierarchical Process (AHP); 4.4 Subjective Transfer Function Approach

  15. Toward a More Perfect Psychology: Improving Trust, Accuracy, and Transparency in Research

    ERIC Educational Resources Information Center

    Makel, Matthew C., Ed.; Plucker, Jonathan A., Ed.

    2017-01-01

    The promise of science is that its methods cause others to believe its results. This foundation is served by trust, accuracy, and transparency. Unfortunately, current research practices in psychology are known to often produce inaccurate, irreproducible, and imprecise results. "Toward a More Perfect Psychology" introduces a plethora of…

  16. The Short- and Long-Run Marginal Cost Curve: A Pedagogical Note.

    ERIC Educational Resources Information Center

    Sexton, Robert L.; And Others

    1993-01-01

    Contends that the standard description of the relationship between the long-run marginal cost curve and the short-run marginal cost curve is often misleading and imprecise. Asserts that a sampling of college-level textbooks confirms this confusion. Provides a definition and instructional strategy that can be used to promote student understanding…

  17. Improving School Accountability Measures. NBER Working Paper Series.

    ERIC Educational Resources Information Center

    Kane, Thomas J.; Staiger, Douglas O.

    A growing number of states are using annual school-level test scores as part of their school accountability systems. This paper highlights an under-appreciated weakness of that approach, the imprecision of school-level test score means, and proposes a method for discerning signal from noise in annual school report cards. Using methods developed in…

  18. Using Paper Helicopters to Teach Statistical Process Control

    ERIC Educational Resources Information Center

    Johnson, Danny J.

    2011-01-01

    This hands-on project uses a paper helicopter to teach students how to distinguish between common and special causes of variability when developing and using statistical process control charts. It allows the student to experience a process that is out-of-control due to imprecise or incomplete product design specifications and to discover how the…

  19. Behavior of R-Square for Pooled Data Sets.

    ERIC Educational Resources Information Center

    Adams, Arthur J.; Shiffler, Ronald E.

    1989-01-01

    New methods of analysis--equations and graphs for iso-r² contours--were introduced and used to illustrate location effects for pooled data sets. The r² is the coefficient of determination. Results are used to highlight imprecise statements in the literature about the behavior of the correlation coefficient for pooled data…

  20. Development and Utilization of Regional Oceanic Modeling System (ROMS) & Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with ROMS

    DTIC Science & Technology

    2010-09-30

    and Ecosystems: An important community use for ROMS is biogeochemistry: chemical cycles, water quality, blooms, micro-nutrients, larval dispersal... Chile current system. J. Climate, submitted. Colas, F., X. Capet, and J. McWilliams, 2010b: Mesoscale eddy buoyancy flux and eddy-induced

  1. Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with the Regional Oceanic Modeling System (ROMS)

    DTIC Science & Technology

    2013-09-30

    Geochemistry and Ecosystems: An important community use for ROMS is biogeochemistry: chemical cycles, water quality, blooms, micro-nutrients, larval...Sci., submitted. Colas, F., J.C. McWilliams, X. Capet, and J. Kurian, 2012: Heat balance and eddies in the Peru-Chile Current System. Climate

  2. Supporting Clear and Concise Mathematics Language: Say This, Not That

    ERIC Educational Resources Information Center

    Hughes, Elizabeth M.; Powell, Sarah R.; Stevens, Elizabeth A.

    2016-01-01

    One influence contributing to this trend may be the imprecise use of mathematics language. Educators may not interpret mathematics as a second (or third) language for children, when, in fact, all children are mathematical-language learners (Barrow, 2014). The numerals, symbols, and terms that explain mathematics concepts and procedures are…

  3. Research Review: Crossing Syndrome Boundaries in the Search for Brain Endophenotypes

    ERIC Educational Resources Information Center

    Levy, Yonata; Ebstein, Richard P.

    2009-01-01

    The inherent imprecision of behavioral phenotyping is the single most important factor contributing to the failure to discover the biological factors that are involved in psychiatric and neurodevelopmental disorders (e.g., Bearden & Freimer, 2006). In this review article we argue that in addition to an appreciation of the inherent complexity at…

  4. A ligand prediction tool based on modeling and reasoning with imprecise probabilistic knowledge.

    PubMed

    Liu, Weiru; Yue, Anbu; Timson, David J

    2010-04-01

    Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the currently available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base, and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.

  5. Sleep deprivation impairs precision of waggle dance signaling in honey bees

    PubMed Central

    Klein, Barrett A.; Klein, Arno; Wray, Margaret K.; Mueller, Ulrich G.; Seeley, Thomas D.

    2010-01-01

    Sleep is essential for basic survival, and insufficient sleep leads to a variety of dysfunctions. In humans, one of the most profound consequences of sleep deprivation is imprecise or irrational communication, demonstrated by degradation in signaling as well as in receiving information. Communication in nonhuman animals may suffer analogous degradation of precision, perhaps with especially damaging consequences for social animals. However, society-specific consequences of sleep loss have rarely been explored, and no function of sleep has been ascribed to a truly social (eusocial) organism in the context of its society. Here we show that sleep-deprived honey bees (Apis mellifera) exhibit reduced precision when signaling direction information to food sources in their waggle dances. The deterioration of the honey bee's ability to communicate is expected to reduce the foraging efficiency of nestmates. This study demonstrates the impact of sleep deprivation on signaling in a eusocial animal. If the deterioration of signals made by sleep-deprived honey bees and humans is generalizable, then imprecise communication may be one detrimental effect of sleep loss shared by social organisms. PMID:21156830

  6. [Domestic and international trends concerning allowable limits of error in external quality assessment scheme].

    PubMed

    Hosogaya, Shigemi; Ozaki, Yukio

    2005-06-01

    Many external quality assessment schemes (EQAS) are performed to support quality improvement of the services provided by participating laboratories for the benefit of patients. The EQAS organizer is responsible for ensuring that the method of evaluation is appropriate for maintaining the credibility of the schemes. Procedures to evaluate each participating laboratory are gradually being standardized. In most cases of EQAS, the peer group mean is used as a target of accuracy, and the peer group standard deviation is used as a criterion for inter-laboratory variation. On the other hand, Fraser CG, et al. proposed desirable quality specifications for imprecision and inaccuracy derived from inter- and intra-individual biologic variation. We also proposed allowable limits of analytical error: less than one-half of the average intra-individual variation for the evaluation of imprecision, and less than one-quarter of the inter- plus intra-individual variation for the evaluation of inaccuracy. When expressed in coefficient-of-variation terms, these allowable limits can be applied over a wide range of measured quantities.
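
    The allowable limits proposed above translate directly into two formulas: maximum imprecision of 0.5 x CV_intra, and maximum bias of 0.25 x the combined intra- plus inter-individual variation (read here, as is conventional in this literature, as a combination in quadrature). The sketch below applies them; the biologic CVs are rough illustrative figures, not the authors' data.

        # Allowable analytical error from biologic variation (illustrative CVs).
        import math

        def allowable_limits(cv_intra, cv_inter):
            max_imprecision = 0.5 * cv_intra
            max_bias = 0.25 * math.sqrt(cv_intra**2 + cv_inter**2)
            return max_imprecision, max_bias

        cv_i, cv_g = 6.0, 15.0   # intra-/inter-individual biologic CVs (%)
        imp, bias = allowable_limits(cv_i, cv_g)
        print(f"imprecision <= {imp:.1f} %CV, bias <= {bias:.1f} %")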

  7. Automation of a spectrophotometric method for measuring L -carnitine in human blood serum.

    PubMed

    Galan, A; Padros, A; Arambarri, M; Martin, S

    1998-01-01

    A spectrometric method for the determination of L-carnitine has been developed based on the reaction with 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) and adapted to a Technicon RA-2000 automatic analyser (Química Farmacéutica Bayer, S.A.). The detection limit of the method is 13.2 μmol/l, with a measurement interval ranging from 30 to 320 μmol/l. Imprecision and accuracy are good even at levels close to the detection limit (coefficient of variation of 5.4% for within-run imprecision at a concentration of 35 μmol/l). A good correlation was observed between the method studied and the radiometric method. The method evaluated has sufficient analytical sensitivity to diagnose carnitine deficiencies. The short time period required for sample processing (30 samples in 40 min), the simple methodology and apparatus, the ease of personnel training and the low cost of the reagents make this method a good alternative to the classical radiometric method for evaluating serum L-carnitine in clinical laboratories without radioactive installations.

  8. A fit-for-purpose approach to analytical sensitivity applied to a cardiac troponin assay: time to escape the 'highly-sensitive' trap.

    PubMed

    Ungerer, Jacobus P J; Pretorius, Carel J

    2014-04-01

    Highly-sensitive cardiac troponin (cTn) assays are being introduced into the market. In this study we argue that the classification of cTn assays into sensitive and highly-sensitive is flawed, and we recommend a more appropriate way to characterize the analytical sensitivity of cTn assays. The raw data of 2252 cardiac troponin I (cTnI) tests performed in duplicate with a 'sensitive' assay were extracted and used to calculate the cTnI levels in all samples, including those below the 'limit of detection' (LoD), which had been censored. Duplicate results were used to determine analytical imprecision. We show that cTnI can be quantified in all samples, including those with levels below the LoD, and that the actual margins of error decrease as concentrations approach zero. The dichotomous classification of cTn assays into sensitive and highly-sensitive is theoretically flawed; characterizing analytical sensitivity as a continuous variable, based on imprecision at 0 and at the 99th percentile cut-off, would be more appropriate.
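
    Estimating imprecision from duplicate measurements, as done above, is commonly based on the paired differences (Dahlberg's formula, SD = sqrt(sum d_i^2 / 2n)). The sketch below demonstrates it on synthetic data; the concentrations and error level are invented, not the study's raw results.

        # Within-run SD from duplicates (Dahlberg); synthetic data only.
        import numpy as np

        rng = np.random.default_rng(0)
        true_conc = rng.uniform(0.0, 50.0, 200)        # ng/L, synthetic
        rep1 = true_conc + rng.normal(0, 1.5, 200)
        rep2 = true_conc + rng.normal(0, 1.5, 200)

        d = rep1 - rep2
        sd_within = np.sqrt(np.sum(d**2) / (2 * len(d)))
        print(f"within-run SD ~ {sd_within:.2f} ng/L (simulated truth: 1.50)")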

  9. Automation of a spectrophotometric method for measuring L -carnitine in human blood serum

    PubMed Central

    Galan, Amparo; Padros, Anna; Arambarri, Marta; Martin, Silvia

    1998-01-01

    A spectrometric method for the determination of L-carnitine has been developed based on the reaction of the 5, 5 ′ dithiobis-(2-nitrobenzoic) acid (DTNB) and adapted to a Technicon RA-2000 automatic analyser Química Farmacéutica Bayer, S.A.). The detection limit of the method is 13.2 μmol/l, with a measurement interval ranging from 30 to 320 μmoll1. Imprecision and accuracy are good even at levels close to the detection limit (coeffcient of variation of 5.4% for within-run imprecision for a concentration of 35 μmol/l). A good correlation was observed between the method studied and the radiometric method. The method evaluated has suffcient analytical sensitivity to diagnose carnitine deficiencies. The short time period required for sample processing (30 samples in 40min), the simple methodology and apparatus, the ease of personnel training and the low cost of the reagents make this method a good alternative to the classical radiometric method for evaluating serum L-carnitine in clinical laboratories without radioactive installations. PMID:18924818

  10. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness, and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document is represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relation such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers as in most IR models. This effect can also be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.
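
    The core idea above, tuples weighted by membership probabilities with operators that propagate those weights, can be shown in a few lines. The toy Python below implements selection and an independence-assuming join over weighted tuples and ranks the result by probability; it is an illustration of the concept, not the authors' full NF2 algebra.

        # Toy probabilistic relations: (tuple, probability) pairs.
        def select(rel, pred):
            return [(t, p) for t, p in rel if pred(t)]

        def join(r, s, key):
            # Weight of a joined tuple = product of weights (independence).
            return [({**t, **u}, p * q)
                    for t, p in r for u, q in s if t[key] == u[key]]

        def rank(rel):
            # Order answers by decreasing probability, as in IR models.
            return sorted(rel, key=lambda tp: -tp[1])

        index = [({"doc": "d1", "term": "daq"}, 0.9),
                 ({"doc": "d2", "term": "daq"}, 0.4)]
        meta = [({"doc": "d1", "year": 1994}, 1.0),
                ({"doc": "d2", "year": 1998}, 1.0)]

        for t, p in rank(join(index, meta, "doc")):
            print(round(p, 2), t)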

  11. Improved protocol and data analysis for accelerated shelf-life estimation of solid dosage forms.

    PubMed

    Waterman, Kenneth C; Carella, Anthony J; Gumkowski, Michael J; Lukulay, Patrick; MacDonald, Bruce C; Roy, Michael C; Shamblin, Sheri L

    2007-04-01

    To propose and test a new accelerated aging protocol for solid-state, small molecule pharmaceuticals which provides faster predictions for drug substance and drug product shelf-life. The concept of an isoconversion paradigm, where times in different temperature and humidity-controlled stability chambers are set to provide a critical degradant level, is introduced for solid-state pharmaceuticals. Reliable estimates for temperature and relative humidity effects are handled using a humidity-corrected Arrhenius equation, where temperature and relative humidity are assumed to be orthogonal. Imprecision is incorporated into a Monte-Carlo simulation to propagate the variations inherent in the experiment. In early development phases, greater imprecision in predictions is tolerated to allow faster screening with reduced sampling. Early development data are then used to design appropriate test conditions for more reliable later stability estimations. Examples are reported showing that predicted shelf-life values for lower temperatures and different relative humidities are consistent with the measured shelf-life values at those conditions. The new protocols and analyses provide accurate and precise shelf-life estimations in a reduced time from current state of the art.
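
    The humidity-corrected Arrhenius model named above is usually written as ln k = ln A - Ea/(R*T) + B*RH, with temperature and relative humidity treated as orthogonal. The sketch below extrapolates a degradation rate from an accelerated chamber to a label storage condition; every parameter value is an invented placeholder, since the fitted constants would come from the isoconversion times themselves.

        # Humidity-corrected Arrhenius extrapolation (all parameters assumed).
        import math

        R = 8.314  # J/(mol*K)

        def rate(ln_A, Ea, B, T_kelvin, rh_percent):
            return math.exp(ln_A - Ea / (R * T_kelvin) + B * rh_percent)

        ln_A, Ea, B = 29.0, 100e3, 0.05    # hypothetical fitted constants

        k_accel = rate(ln_A, Ea, B, 273.15 + 70, 75)   # 70 C / 75 %RH chamber
        k_label = rate(ln_A, Ea, B, 273.15 + 25, 60)   # 25 C / 60 %RH storage
        print(f"acceleration factor ~ {k_accel / k_label:.0f}x")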

  12. Applications of fuzzy logic to control and decision making

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Jani, Yashvant

    1991-01-01

    Long-range space missions will require high operational efficiency as well as autonomy to enhance the effectiveness of performance. Fuzzy logic technology has been shown to be powerful and robust in interpreting imprecise measurements and generating appropriate control decisions for many space operations. Several applications are underway that study the fuzzy logic approach to solving control and decision-making problems. Fuzzy logic algorithms for relative motion and attitude control have been developed and demonstrated for proximity operations. Based on this experience, motion control algorithms that include obstacle avoidance were developed for a Mars rover prototype for maneuvering during the sample collection process. A concept for an intelligent sensor system that can identify objects, track them continuously and learn from its environment is under development to support traffic management and proximity operations around Space Station Freedom. For safe and reliable operation of Lunar/Mars-based crew quarters, high-speed controllers with the ability to combine imprecise measurements from several sensors are required. A fuzzy logic approach that uses high-speed fuzzy hardware chips is being studied.

  13. Can fuzzy logic bring complex problems into focus? Modeling imprecise factors in environmental policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, Thomas E.; Deshpande, Ashok W.

    2004-06-14

    In modeling complex environmental problems, we often fail to make precise statements about inputs and outcomes. In such cases the fuzzy logic method native to the human mind provides a useful way to get at these problems. Fuzzy logic represents a significant change in both the approach to and outcome of environmental evaluations. Risk assessment is currently based on the implicit premise that probability theory provides the necessary and sufficient tools for dealing with uncertainty and variability. The key advantage of fuzzy methods is the way they reflect the human mind in its remarkable ability to store and process information which is consistently imprecise, uncertain, and resistant to classification. Our case study illustrates the ability of fuzzy logic to integrate statistical measurements with imprecise health goals. But we submit that fuzzy logic and probability theory are complementary and not competitive. In the world of soft computing, fuzzy logic has been widely used and has often been the "smart" behind smart machines. But it will require more effort and case studies to establish its niche in risk assessment or other types of impact assessment. Although we often hear complaints about "bright lines," could we adapt to a system that relaxes these lines to fuzzy gradations? Would decision makers and the public accept expressions of water or air quality goals in linguistic terms with computed degrees of certainty? Resistance is likely. In many regions, such as the US and the European Union, it is likely that both decision makers and members of the public are more comfortable with our current system, in which government agencies avoid confronting uncertainties by setting guidelines that are crisp and often fail to communicate uncertainty. But some day, perhaps, a more comprehensive approach that includes exposure surveys, toxicological data and epidemiological studies coupled with fuzzy modeling will go a long way toward resolving some of the conflict, divisiveness, and controversy in the current regulatory paradigm.
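
    The contrast drawn above between crisp "bright lines" and graded judgments is the essence of a fuzzy membership function. The small Python sketch below compares a crisp pass/fail limit with a linear membership grade for an environmental measurement; the breakpoints are invented placeholders.

        # Crisp limit vs. fuzzy membership grade (breakpoints are assumed).
        def crisp_ok(x, limit=10.0):
            return x <= limit                  # all-or-nothing judgment

        def fuzzy_ok(x, full=8.0, none=12.0):
            # Degree of membership in "acceptable": 1 below `full`,
            # 0 above `none`, linear in between.
            if x <= full:
                return 1.0
            if x >= none:
                return 0.0
            return (none - x) / (none - full)

        for level in (7.0, 9.0, 10.0, 11.0, 13.0):
            print(level, crisp_ok(level), round(fuzzy_ok(level), 2))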

  14. Influence of analytical bias and imprecision on the number of false positive results using Guideline-Driven Medical Decision Limits.

    PubMed

    Hyltoft Petersen, Per; Klee, George G

    2014-03-20

    Diagnostic decisions based on decision limits according to medical guidelines differ from the majority of clinical decisions due to the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for other diagnostic decisions, where much other information is included. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "guideline-driven medical decision limits". The effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications should be estimated accordingly. The use of sharp decision limits does not consider biological variation, and the effects of this variation are closely connected with the effects of analytical performance. Such relationships are investigated for the guidelines for HbA1c in the diagnosis of diabetes and for the risk of coronary heart disease based on serum cholesterol. A second sampling at diagnosis gives a dramatic reduction in the effects of analytical quality, with minimal influence of imprecision up to 3 to 5% for two independent samplings, whereas the reduction for bias is more moderate: a 2% increase in concentration doubles the percentage of false-positive diagnoses, both for HbA1c and for cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests according to clinical procedure orders, e.g. the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the functional increase in requests with increasing serum calcium concentration. In consequence of the difficulties with biological variation and the practical utilization of the concentration dependence of the frequency of follow-up laboratory tests already in use, a kind of probability function for diagnosis as a function of the key analyte is proposed.
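
    The bias effect described above is easy to reproduce with a small Monte-Carlo simulation: draw a non-diseased population around a sharp decision limit, apply analytical bias and imprecision, and count false positives. The population and assay parameters below are loosely cholesterol-like but invented; they are not the paper's figures.

        # Effect of analytical bias on false positives at a sharp cut-off.
        import numpy as np

        rng = np.random.default_rng(1)
        limit = 5.0                      # mmol/L decision limit (illustrative)
        mu, cv_bio = 4.4, 0.11           # non-diseased mean, biologic CV (assumed)

        def fp_rate(bias_pct, cv_analytical, n=1_000_000):
            true_vals = rng.normal(mu, mu * cv_bio, n)
            measured = true_vals * (1 + bias_pct / 100)
            measured = measured + rng.normal(0, np.abs(measured) * cv_analytical)
            return (measured > limit).mean()

        for bias in (0.0, 1.0, 2.0):
            print(f"bias {bias:+.0f}% -> false-positive rate {fp_rate(bias, 0.02):.1%}")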

  15. Reprint of "Influence of analytical bias and imprecision on the number of false positive results using Guideline-Driven Medical Decision Limits".

    PubMed

    Hyltoft Petersen, Per; Klee, George G

    2014-05-15

    Diagnostic decisions based on decision limits according to medical guidelines differ from the majority of clinical decisions due to the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for other diagnostic decisions, where much other information is included. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "guideline-driven medical decision limits". The effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications should be estimated accordingly. The use of sharp decision limits does not consider biological variation, and the effects of this variation are closely connected with the effects of analytical performance. Such relationships are investigated for the guidelines for HbA1c in the diagnosis of diabetes and for the risk of coronary heart disease based on serum cholesterol. A second sampling at diagnosis gives a dramatic reduction in the effects of analytical quality, with minimal influence of imprecision up to 3 to 5% for two independent samplings, whereas the reduction for bias is more moderate: a 2% increase in concentration doubles the percentage of false-positive diagnoses, both for HbA1c and for cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests according to clinical procedure orders, e.g. the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the functional increase in requests with increasing serum calcium concentration. In consequence of the difficulties with biological variation and the practical utilization of the concentration dependence of the frequency of follow-up laboratory tests already in use, a kind of probability function for diagnosis as a function of the key analyte is proposed.

  16. Radiation as a cause of breast cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, N.; Silverstone, S.M.

    1976-09-01

    The possible role of radiation as a factor in the causation of breast cancer was investigated. Some variables said to be associated with a high risk of breast cancer include genetic factors, pre-existing breast disease, artificial menopause, family history of breast cancer, failure to breast feed, older than usual age at the time of first pregnancy, high socioeconomic status, specific blood groups, fatty diet, obesity, and hormonal imbalances. To this list we must add ionizing radiation as an additional and serious risk factor in the causation of breast cancer. Among the irradiated groups which show an increased incidence of cancer of the breast are: tuberculous women subjected to repeated fluoroscopy; women who received localized x-ray treatments for acute post-partum mastitis; atom-bomb survivors; other x-ray exposures involving the breast, including irradiation in children and in experimental animals; and women who were treated with x rays for acne or hirsutism. The dose of radiation received by the survivors of the atom bomb who subsequently developed cancer of the breast ranged from 80 to 800 rads, the tuberculous women who were fluoroscoped received an estimated 50 to 6,000 rads, the women who were treated for mastitis probably were exposed to 30 to 700 rads, and the patients with acne received 100 to 6,000 rads. These imprecise estimates are compared with mammographic doses in the range of tens of rads to the breast at each examination, an imprecise estimate depending on technique and equipment. However imprecise these estimates may be, it is apparent that younger women are more likely than older women to develop cancer from exposure to radiation. It is pointed out that the American Cancer Society advises that women under 35 years should have mammography only for medical indication, not for so-called screening.

  17. Performance characteristics of the ARCHITECT Active-B12 (Holotranscobalamin) assay.

    PubMed

    Merrigan, Stephen D; Owen, William E; Straseski, Joely A

    2015-01-01

    Vitamin B12 (cobalamin) is a necessary cofactor in methionine and succinyl-CoA metabolism. Studies estimate the deficiency prevalence as high as 30% in the elderly population. Ten to thirty percent of circulating cobalamin is bound to transcobalamin (holotranscobalamin, holoTC), which can readily enter cells and is therefore considered the bioactive form. The objective of our study was to evaluate the analytical performance of a high-throughput, automated holoTC assay (ARCHITECT i2000SR Active-B12 (Holotranscobalamin)) and compare it to other available methods. Manufacturer-specified limits of blank (LoB), detection (LoD), and quantitation (LoQ), imprecision, interference, and linearity were evaluated for the ARCHITECT HoloTC assay. Residual de-identified serum samples were used to compare the ARCHITECT HoloTC assay with the automated AxSYM Active-B12 (Holotranscobalamin) assay (Abbott Diagnostics) and the manual Active-B12 (Holotranscobalamin) Enzyme Immunoassay (EIA) (Axis-Shield Diagnostics, Dundee, Scotland, UK). Manufacturer's claims of LoB, LoD, LoQ, imprecision, interference, and linearity to the highest point tested (113.4 pmol/L) were verified for the ARCHITECT HoloTC assay. Method comparison of the ARCHITECT HoloTC to the AxSYM HoloTC produced the following Deming regression statistics: ARCHITECT HoloTC = 0.941 (AxSYM HoloTC) + 1.2 pmol/L, Sy/x = 6.4, r = 0.947 (n = 98). Comparison to the Active-B12 EIA produced: ARCHITECT HoloTC = 1.105 (EIA Active-B12) - 6.8 pmol/L, Sy/x = 11.0, r = 0.950 (n = 221). This assay performed acceptably for LoB, LoD, LoQ, imprecision, interference, linearity, and method comparison to the predicate device (AxSYM). An additional comparison to a manual Active-B12 EIA method performed similarly, with minor exceptions. This study determined that the ARCHITECT HoloTC assay is suitable for routine clinical use, providing a high-throughput alternative for automated testing of this emerging marker of cobalamin deficiency.
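
    For context on the regression used in the comparison: Deming regression, unlike ordinary least squares, allows for measurement error in both methods. The minimal sketch below assumes an error-variance ratio of 1 (orthogonal regression) and is not the authors' software, just the standard closed-form estimator.

    ```python
    import numpy as np

    def deming(x, y, delta=1.0):
        """Deming regression of y on x, where `delta` is the ratio of the
        error variances of y and x (delta=1 gives orthogonal regression).
        Returns (slope, intercept)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - delta * sxx +
                 np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
                 ) / (2.0 * sxy)
        return slope, y.mean() - slope * x.mean()
    ```

    Called as, say, deming(axsym_results, architect_results), this returns a fit of the same form as the one quoted above, y = 0.941 x + 1.2 pmol/L.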

  18. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence. In the years ahead, this may well become a widely held position.
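
    Zadeh's parking example can be made concrete with a one-line fuzzy membership function: instead of a hard pass/fail at an exact position, closeness is graded. This is a toy illustration; the function name and tolerance value are ours, not from the paper.

    ```python
    def close_to(x, target, tolerance):
        """Triangular fuzzy membership: degree to which x is 'approximately
        target', 1.0 at the target, falling linearly to 0 at +/- tolerance."""
        return max(0.0, 1.0 - abs(x - target) / tolerance)

    # A car parked 10 cm from the ideal spot is 'well parked' to degree 0.8;
    # a crisp specification would score it 0 unless it hit the spot exactly.
    print(close_to(10.0, target=0.0, tolerance=50.0))  # -> 0.8
    ```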

  19. Japanese Society for Laboratory Hematology flow cytometric reference method of determining the differential leukocyte count: external quality assurance using fresh blood samples.

    PubMed

    Kawai, Y; Nagai, Y; Ogawa, E; Kondo, H

    2017-04-01

    To provide target values for the manufacturers' survey of the Japanese Society for Laboratory Hematology (JSLH), accurate standard data from healthy volunteers were needed for the five-part differential leukocyte count. To obtain such data, JSLH required an antibody panel that achieved high specificity (particularly for mononuclear cells) using simple gating procedures. We developed a flow cytometric method for determining the differential leukocyte count (JSLH-Diff) and validated it by comparison with the flow cytometric differential leukocyte count of the International Council for Standardization in Haematology (ICSH-Diff) and the manual differential count obtained by microscopy (Manual-Diff). First, the reference laboratory performed an imprecision study of JSLH-Diff and ICSH-Diff, as well as a comparison among JSLH-Diff, Manual-Diff, and ICSH-Diff. Then two reference laboratories and seven participating laboratories performed imprecision and accuracy studies of JSLH-Diff, Manual-Diff, and ICSH-Diff. Simultaneously, six manufacturers' laboratories provided their own representative values by using automated hematology analyzers. The precision of both the JSLH-Diff and ICSH-Diff methods was adequate. Comparison by the reference laboratory showed that all correlation coefficients, slopes, and intercepts obtained by the JSLH-Diff, ICSH-Diff, and Manual-Diff methods conformed to the criteria. When the imprecision and accuracy of JSLH-Diff were assessed at seven laboratories, the CV% for lymphocytes, neutrophils, monocytes, eosinophils, and basophils was 0.5~0.9%, 0.3~0.7%, 1.7~2.6%, 3.0~7.9%, and 3.8~10.4%, respectively. More than 99% of CD45-positive leukocytes were identified as normal leukocytes by JSLH-Diff. When the JSLH-Diff method was validated by comparison with Manual-Diff and ICSH-Diff, it showed good performance as a reference method. © 2016 John Wiley & Sons Ltd.
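
    The imprecision figures quoted above are coefficients of variation from replicate measurements. A minimal sketch of that computation follows; the replicate values are invented for illustration.

    ```python
    import numpy as np

    def cv_percent(replicates):
        """Imprecision expressed as a coefficient of variation:
        100 * SD / mean of replicate measurements."""
        r = np.asarray(replicates, float)
        return 100.0 * r.std(ddof=1) / r.mean()

    # e.g. ten replicate monocyte fractions (%) from one control sample
    reps = [7.8, 8.1, 7.9, 8.0, 8.2, 7.7, 8.0, 7.9, 8.1, 7.8]
    print(f"CV = {cv_percent(reps):.1f}%")  # ~2.0%, within the 1.7~2.6% range
    ```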

  20. A Multi Criteria Group Decision-Making Model for Teacher Evaluation in Higher Education Based on Cloud Model and Decision Tree

    ERIC Educational Resources Information Center

    Chang, Ting-Cheng; Wang, Hui

    2016-01-01

    This paper proposes a cloud multi-criteria group decision-making model for teacher evaluation in higher education, a task involving subjectivity, imprecision, and fuzziness. First, selecting the appropriate evaluation index depending on the evaluation objectives, indicating a clear structural relationship between the evaluation index and…

  1. A rigorous assessment of tree height measurements obtained using airborne LIDAR and conventional field methods.

    Treesearch

    Hans-Erik Andersen; Stephen E. Reutebuch; Robert J. McGaughey

    2006-01-01

    Tree height is an important variable in forest inventory programs but is typically time-consuming and costly to measure in the field using conventional techniques. Airborne light detection and ranging (LIDAR) provides individual tree height measurements that are highly correlated with field-derived measurements, but the imprecision of conventional field techniques does...

  2. Must School Districts Provide Test Protocols to Parents?

    ERIC Educational Resources Information Center

    Rosenfeld, S. James

    2010-01-01

    Despite being well-settled as a matter of law, the issue of whether test protocols must be disclosed to parents continues to be a source of dispute between schools, school psychologists, and parents. To be sure, one of the reasons for this vampire-like existence is the imprecision of the questioners and questions. Moreover, professional guidance…

  3. E-learning, Dual-task, and Cognitive Load: The Anatomy of a Failed Experiment

    ERIC Educational Resources Information Center

    Van Nuland, Sonya E.; Rogers, Kem A.

    2016-01-01

    The rising popularity of commercial anatomy e-learning tools has been sustained, in part, due to increased annual enrollment and a reduction in laboratory hours across educational institutions. While e-learning tools continue to gain popularity, the research methodologies used to investigate their impact on learning remain imprecise. As new user…

  4. Problems Portraying Migrants in Applied Linguistics Research

    ERIC Educational Resources Information Center

    Block, David

    2010-01-01

    This paper is a very personal attempt to explore the problematics of portraying migrants in Applied Linguistics research. I begin with a discussion of identity, in particular what we might mean when we use the term, and from there I go on to explore its fundamental imprecision through an analysis of a census question about ethnicity. I then…

  5. Teaching for Workplace Success. Occasional Paper No. 113.

    ERIC Educational Resources Information Center

    Champagne, Audrey

    Following some years of eclipse by the basics, imparting thinking ability to students is once again emerging as the primary goal of public education. How to teach thinking skills, is, however, subject to question. For example, not only is the domain of the higher order skills broad and imprecisely specified, there is also considerable naivete in…

  6. Evaluating the Impact and Determinants of Student Team Performance: Using LMS and CATME Data

    ERIC Educational Resources Information Center

    Braender, Lynn M.; Naples, Michele I.

    2013-01-01

    Practitioners find it difficult to allocate grades to individual students based on their contributions to the team project. They often use classroom observation of teamwork and student peer evaluations to differentiate an individual's grade from the group's grade, which can be subjective and imprecise. We used objective data from student activity…

  7. Determining site index accurately in even-aged stands

    Treesearch

    Gayne G. Erdmann; Ralph M. Peterson, Jr.

    1992-01-01

    Good site index estimates are necessary for intensive forest management. To get tree age used in determining site index, increment cores are commonly used. The diffuse-porous rings of northern hardwoods, though, are difficult to count in cores, so many site index estimates are imprecise. Also, measuring the height of standing trees is more difficult and less accurate...

  8. Movement Planning Reflects Skill Level and Age Changes in Toddlers

    ERIC Educational Resources Information Center

    Chen, Yu-ping; Keen, Rachel; Rosander, Kerstin; Von Hofsten, Claes

    2010-01-01

    Kinematic measures of children's reaching were found to reflect stable differences in skill level for planning for future actions. Thirty-five toddlers (18-21 months) were engaged in building block towers (precise task) and in placing blocks into an open container (imprecise task). Sixteen children were retested on the same tasks a year later.…

  9. An alternative to traditional goodness-of-fit tests for discretely measured continuous data

    Treesearch

    KaDonna C. Randolph; Bill Seaver

    2007-01-01

    Traditional goodness-of-fit tests such as the Kolmogorov-Smirnov and χ² tests are easily applied to data of the continuous or discrete type, respectively. Occasionally, however, the case arises when continuous data are recorded into discrete categories due to an imprecise measurement system. In this instance, the traditional goodness-of-fit...

  10. (Non)Native Speakering: The (Dis)Invention of (Non)Native Speakered Subjectivities in a Graduate Teacher Education Program

    ERIC Educational Resources Information Center

    Aneja, Geeta A.

    2017-01-01

    Despite its imprecision, the native-nonnative dichotomy has become the dominant paradigm for categorizing language users, learners, and educators. The "NNEST Movement" has been instrumental in documenting the privilege of native speakers, the marginalization of their nonnative counterparts, and why an individual may be perceived as one…

  11. Evaluation of work-related diseases by the Italian Institute of Insurance for Professional Illness and Injuries (INAIL).

    PubMed

    Bracci, Carlo; Norcia, Gabriele

    2005-01-01

    Based on a predetermined list of job-related diseases, INAIL makes compensation decisions in cases of work-related diseases in Italy. Its functioning is somewhat hampered by imprecision in the disease classification and infrequency of updating the list, as well as rapid turnover of workers in some categories of jobs.

  12. Cues for Better Writing: Empirical Assessment of a Word Counter and Cueing Application's Effectiveness

    ERIC Educational Resources Information Center

    Vijayasarathy, Leo R.; Gould, Susan Martin; Gould, Michael

    2015-01-01

    Written clarity and conciseness are desired by employers and emphasized in business communication courses. We developed and tested the efficacy of a cueing tool--Scribe Bene--to help students reduce their use of imprecise and ambiguous words and wordy phrases. Effectiveness was measured by comparing cue word usage between a treatment group given…

  13. Data acquisition system for the focal plane detector of the mass separator MASHA

    NASA Astrophysics Data System (ADS)

    Novoselov, A. S.; Rodin, A. M.; Motycak, S.; Podshibyakin, A. V.; Krupa, L.; Belozerov, A. V.; Vedeneyev, V. Yu.; Gulyaev, A. V.; Gulyaeva, A. V.; Kliman, J.; Salamatin, V. S.; Stepantsov, S. V.; Chernysheva, E. V.; Yukhimchuk, S. A.; Komarov, A. B.; Kamas, D.

    2016-09-01

    The results of the development and general information about the data acquisition system recently created at the MASHA setup (Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research) are presented. The main difference from the previous system is the use of a new, modern platform, National Instruments PXI, with XIA multichannel high-speed digitizers (250 MHz, 12 bit, 16 channels). At present the system has 448 spectrometric channels. The software and its features for data acquisition and analysis are also described. The new DAQ system expands the precision measuring capabilities for alpha decays and spontaneous fission at the focal-plane position-sensitive silicon strip detector, which in turn increases the capabilities of the setup in areas such as the registration of elements with low yields.

  14. [Development of multi-channels cardiac electrophysiological polygraph with LabVIEW as software platform and its clinical application].

    PubMed

    Fan, Shounian; Jiang, Yi; Jiang, Chenxi; Yang, Tianhe; Zhang, Chengyun; Liu, Junshi; Wu, Qiang; Zheng, Yaxi; Liu, Xiaoqiao

    2004-10-01

    The polygraph has become an essential instrument in interventional cardiology and fundamental medical research. In this study, the LabVIEW development system (National Instruments, USA) was used as the software platform, with a DAQ data-acquisition module and a universal computer as the hardware platform, coupled with our self-made low-noise multi-channel preamplifier to develop a multi-channel electrocardiograph. The device provides real-time display of physiological signals, digital high-pass and low-pass filtering, 50 Hz filtering and gain adjustment, instant storage, random playback and printing, and process-controlled stimulation. It is also compact, economical, and easy to operate, and could promote the adoption of cardiac interventional treatment in hospitals.
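
    The conditioning chain described (digital high-pass and low-pass filtering plus 50 Hz mains rejection) can be sketched offline in a few lines. This is an assumed reconstruction using scipy, not the instrument's LabVIEW code; the sampling rate and corner frequencies are guesses.

    ```python
    from scipy.signal import butter, iirnotch, filtfilt

    FS = 1000.0  # assumed sampling rate, Hz

    def condition(x):
        """0.5-100 Hz band-pass followed by a 50 Hz notch, applied
        forward-backward (filtfilt) for zero phase distortion."""
        b, a = butter(2, [0.5, 100.0], btype="band", fs=FS)
        x = filtfilt(b, a, x)
        b, a = iirnotch(50.0, Q=30.0, fs=FS)
        return filtfilt(b, a, x)
    ```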

  15. Data acquisition system for segmented reactor antineutrino detector

    NASA Astrophysics Data System (ADS)

    Hons, Z.; Vlášek, J.

    2017-01-01

    This paper describes the data acquisition system used for data readout from the PMT channels of a segmented detector of reactor antineutrinos with active shielding. The theoretical approach to the data acquisition is described, and two possible solutions using QDCs and digitizers are discussed. Also described are the results of the DAQ performance during routine data-taking operation of DANSS. DANSS (Detector of the reactor AntiNeutrino based on Solid Scintillator) is a project aiming to measure the spectrum of reactor antineutrinos using inverse beta decay (IBD) in a plastic scintillator. The detector is located close to an industrial nuclear reactor core and is covered by passive and active shielding. It is expected to record about 15000 IBD interactions per day. Light from the detector is sensed by PMTs and SiPMs.

  16. First upper limits on the radar cross section of cosmic-ray induced extensive air showers

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abe, M.; Abou Bakr Othman, M.; Abu-Zayyad, T.; Allen, M.; Anderson, R.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Besson, D.; Blake, S. A.; Byrne, M.; Cady, R.; Chae, M. J.; Cheon, B. G.; Chiba, J.; Chikawa, M.; Cho, W. R.; Farhang-Boroujeny, B.; Fujii, T.; Fukushima, M.; Gillman, W. H.; Goto, T.; Hanlon, W.; Hanson, J. C.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Honda, K.; Ikeda, D.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Jayanthmurthy, C.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kawata, K.; Kido, E.; Kim, H. B.; Kim, J. H.; Kim, J. H.; Kitamura, S.; Kitamura, Y.; Kunwar, S.; Kuzmin, V.; Kwon, Y. J.; Lan, J.; Lim, S. I.; Lundquist, J. P.; Machida, K.; Martens, K.; Matsuda, T.; Matsuyama, T.; Matthews, J. N.; Minamino, M.; Mukai, K.; Myers, I.; Nagasawa, K.; Nagataki, S.; Nakamura, T.; Nonaka, T.; Nozato, A.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Oki, K.; Okuda, T.; Ono, M.; Oshima, A.; Ozawa, S.; Park, I. H.; Prohira, S.; Pshirkov, M. S.; Rezazadeh-Reyhani, A.; Rodriguez, D. C.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sakurai, N.; Sampson, A. L.; Scott, L. M.; Schurig, D.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Smith, J. D.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzawa, T.; Takai, H.; Takamura, M.; Takeda, M.; Takeishi, R.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Uchihori, Y.; Udo, S.; Urban, F.; Vasiloff, G.; Venkatesh, S.; Wong, T.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yashiro, K.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zollinger, R.; Zundel, Z.

    2017-01-01

    TARA (Telescope Array Radar) is a cosmic ray radar detection experiment colocated with Telescope Array, the conventional surface scintillation detector (SD) and fluorescence telescope detector (FD) near Delta, Utah, U.S.A. The TARA detector combines a 40 kW, 54.1 MHz VHF transmitter and high-gain transmitting antenna which broadcasts the radar carrier over the SD array and within the FD field of view, towards a 250 MS/s DAQ receiver. TARA has been collecting data since 2013 with the primary goal of observing the radar signatures of extensive air showers (EAS). Simulations indicate that echoes are expected to be short in duration (∼10 μs) and exhibit rapidly changing frequency, with rates on the order of 1 MHz/μs. The EAS radar cross-section (RCS) is currently unknown, although it has been the subject of over 70 years of speculation. A novel signal search technique is described in which the expected radar echo of a particular air shower is used as a matched filter template and compared to waveforms obtained by triggering the radar DAQ using the Telescope Array fluorescence detector. No evidence for the scattering of radio frequency radiation by EAS has been obtained to date. We report the first quantitative RCS upper limits using EAS that triggered the Telescope Array Fluorescence Detector. The transmitter is under the direct control of experimenters, in a radio-quiet area isolated from other radio frequency (RF) sources, and its power and radiation pattern are known at all times. Forward power up to 40 kW and gain exceeding 20 dB maximize energy density in the radar field. Continuous wave (CW) transmission gives 100% duty cycle, as opposed to pulsed radar. TARA utilizes a high sample rate DAQ (250 MS/s). TARA is colocated with a large state-of-the-art conventional CR observatory, allowing the radar data stream to be sampled at the arrival times of known cosmic ray events. Each of these attributes of the TARA detector has been discussed in detail in the literature [8]. A map showing the TA SD array and the location of the TARA transmitter and receiver is shown in Fig. 1. Section 2 of this paper includes a description of air shower plasmas and possible radio scattering mechanisms. Theoretical and experimental parameters that influence radio scattering are presented and discussed. We justify use of the thin wire model in a radar echo simulation that predicts echo waveforms, which we subsequently use (Section 6) in placing limits on the air shower RCS. Sections 3 and 4 describe TARA data and offline processing techniques. In Section 5, we describe the signal search using simulated waveforms as matched filter (MF) templates in order to maximize sensitivity. Section 6 describes the procedure for calculating a scale factor to the RCS model described in Section 2, the results of which are used in placing the first quantitative upper limit on the EAS RCS. In Section 7 we summarize these results and discuss the viability of radar detection of cosmic rays in light of the TARA findings.
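
    At its core, the matched-filter search described above slides a zero-mean, unit-energy template over the recorded waveform and takes the peak normalized correlation. The bare-bones sketch below illustrates that idea only; the real analysis additionally handles band-limiting, noise whitening, and trigger timing, and the signal amplitude used here is arbitrary.

    ```python
    import numpy as np

    def matched_filter_peak(waveform, template):
        """Peak correlation of a zero-mean, unit-energy template against a
        waveform, normalized by the waveform's RMS (a crude SNR proxy)."""
        t = template - template.mean()
        t /= np.sqrt(np.sum(t ** 2))
        corr = np.correlate(waveform - waveform.mean(), t, mode="valid")
        return corr.max() / waveform.std()

    # A ~10 us echo template with a 1 MHz/us downward chirp from the
    # 54.1 MHz carrier, sampled at 250 MS/s:
    fs, dur = 250e6, 10e-6
    t = np.arange(int(fs * dur)) / fs
    template = np.sin(2 * np.pi * (54.1e6 * t - 0.5 * 1e12 * t ** 2))

    # Bury a weak copy of the template in unit noise and recover it:
    noise = np.random.default_rng(1).normal(0.0, 1.0, 50_000)
    sig = noise.copy()
    sig[20_000:20_000 + template.size] += 0.5 * template
    print(matched_filter_peak(sig, template))    # ~18: clear detection
    print(matched_filter_peak(noise, template))  # ~4-5: noise-only peak
    ```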

  17. Completion of the LANSCE Proton Storage Ring Control System Upgrade -- A Successful Integration of EPICS Into a Running Control System

    NASA Astrophysics Data System (ADS)

    Schaller, S. C.; Bjorklund, E. A.; Carr, G. P.; Faucett, J. A.; Oothoudt, M. A.

    1997-05-01

    The Los Alamos Neutron Scattering Center (LANSCE) Proton Storage Ring (PSR) control system upgrade was completed in 1996. In previous work, much of a PDP-11-based control system was replaced with Experimental Physics and Industrial Control System (EPICS) controls. Several parts of the old control system which used a VAX for operator displays and direct access to a CAMAC serial highway still remained. The old system was preserved as a "fallback" if the new EPICS-based system had problems. The control system upgrade completion included conversion of several application programs to EPICS-based operator interfaces, moving some data acquisition hardware to EPICS Input-Output Controllers (IOCs), and the implementation of new gateway software to complete the overall control system interoperability. Many operator interface (OPI) screens, written by LANSCE operators, have been incorporated in the new system. The old PSR control system hardware was removed. The robustness and reliability of the new controls obviated the need for a fallback capability.

  18. Modern design of a fast front-end computer

    NASA Astrophysics Data System (ADS)

    Šoštarić, Z.; Anic̈ić, D.; Sekolec, L.; Su, J.

    1994-12-01

    Front-end computers (FEC) at Paul Scherrer Institut provide access to accelerator CAMAC-based sensors and actuators by way of a local area network. In the scope of the new generation FEC project, a front-end is regarded as a collection of services. The functionality of one such service is described in terms of Yourdon's environment, behaviour, processor and task models. The computational model (software representation of the environment) of the service is defined separately, using the information model of the Shlaer-Mellor method, and Sather OO language. In parallel with the analysis and later with the design, a suite of test programmes was developed to evaluate the feasibility of different computing platforms for the project and a set of rapid prototypes was produced to resolve different implementation issues. The past and future aspects of the project and its driving forces are presented. Justification of the choice of methodology, platform and requirement, is given. We conclude with a description of the present state, priorities and limitations of our project.

  19. The TOFp/pVPD time-of-flight system for STAR

    NASA Astrophysics Data System (ADS)

    Llope, W. J.; Geurts, F.; Mitchell, J. W.; Liu, Z.; Adams, N.; Eppley, G.; Keane, D.; Li, J.; Liu, F.; Liu, L.; Mutchler, G. S.; Nussbaum, T.; Bonner, B.; Sappenfield, P.; Zhang, B.; Zhang, W.-M.

    2004-04-01

    A time-of-flight system was constructed for the STAR Experiment for the direct identification of hadrons produced in 197Au+197Au collisions at RHIC. The system consists of two separate detector subsystems, one called the Pseudo Vertex Position Detector (pVPD, the "start" detector) and the other called the Time of Flight Patch (TOFp, the "stop" detector). Each detector is based on conventional scintillator/phototube technology and includes custom high-performance front-end electronics and a common CAMAC-based digitization and read-out. The design of the system and its performance during the 2001 RHIC run will be described. The start resolution attained by the pVPD was 24 ps, implying a pVPD single-detector resolution of 58 ps. The total time resolution of the system averaged over all detector channels was 87 ps, allowing direct π/K/p discrimination for momenta up to ~1.8 GeV/c, and direct (π+K)/p discrimination up to ~3 GeV/c.
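
    For context on the momentum reach quoted above: time-of-flight identification inverts the flight-time relation t = (L/βc), with β fixed by mass and momentum, so the achievable π/K/p separation depends directly on the timing resolution. The sketch below shows the inversion; the 2.2 m path length and the example values are illustrative assumptions, not STAR geometry.

    ```python
    import math

    C = 0.299792458  # speed of light, m/ns

    def mass_from_tof(p_gev, path_m, tof_ns):
        """Particle mass in GeV/c^2 from momentum (GeV/c), flight path (m),
        and measured flight time (ns), via m = p * sqrt(1/beta^2 - 1)."""
        beta = path_m / (C * tof_ns)
        if beta >= 1.0:
            return 0.0  # within timing resolution of a massless track
        return p_gev * math.sqrt(1.0 / beta ** 2 - 1.0)

    # A 1.8 GeV/c track over a 2.2 m path: pion and kaon flight times differ
    # by only ~0.25 ns, roughly 3x an 87 ps resolution -- the edge of
    # separability, consistent with the quoted ~1.8 GeV/c limit.
    print(mass_from_tof(1.8, 2.2, 7.61))  # ~0.49 GeV/c^2: a kaon
    ```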

  20. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system

    NASA Technical Reports Server (NTRS)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff

    1992-01-01

    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.

  1. Evaluation of a commercial system for CAMAC-based control of the Chalk River Laboratories tandem-accelerator-superconducting-cyclotron complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, B.F.; Caswell, D.J.; Slater, W.R.

    1992-04-01

    This paper discusses the control system of the Tandem Accelerator Superconducting Cyclotron (TASCC) of AECL Research at its Chalk River Laboratories, which is presently based on a PDP-11 computer and the IAS operating system. The estimated expense of a custom conversion of the system to a current, equivalent operating system is prohibitive. The authors have evaluated a commercial control package from VISTA Control Systems based on VAX microcomputers and the VMS operating system. Vsystem offers a modern, graphical operator interface, an extensive software toolkit for configuration of the system, and a multi-feature data-logging capability, all of which far surpass the functionality of the present control system. However, the implementation of some familiar, practical features that TASCC operators find to be essential has proven to be challenging. The assessment of Vsystem, which is described in terms of presently perceived strengths and weaknesses, is, on balance, very positive.

  2. Education & Public Policy in Bogotá: Guarding the Public Interest

    ERIC Educational Resources Information Center

    Parra, Juan David

    2009-01-01

    High school education appears to be a key variable for the economic prosperity of Bogotá. However, the lack of consideration of quality as a necessary standard for education in the city threatens its potential to positively affect social welfare. One of the main problems emerges from an imprecise conception of education as a public good, which is…

  3. In Pursuit of Excellence: The Past as Prologue to a Brighter Future for Special Education

    ERIC Educational Resources Information Center

    Rock, Marcia L.; Thead, Beth K.; Gable, Robert A.; Hardman, Michael L.; Van Acker, Richard

    2006-01-01

    Any attempt to speculate about the future is likely to be hindered by an imprecise and inadequate understanding of the past, and predicting the future of special education is no exception. Nonetheless, various authors have undertaken that challenge and have done so by examining critically the relatively turbulent, albeit brief history of the…

  4. Automatic Requirements Specification Extraction from Natural Language (ARSENAL)

    DTIC Science & Technology

    2014-10-01

    Natural language supports the communication of technical descriptions between the various stakeholders (e.g., customers, designers, implementers) involved in the design of software systems. However, natural language descriptions can be informal, incomplete, imprecise...the accuracy of the natural language processing stage, the degree of automation, and robustness to noise.

  5. A Definition of University Teaching: A Perhaps-Swiftean Modest Proposal

    ERIC Educational Resources Information Center

    Jenner, Donald

    2009-01-01

    "Teaching" is usually used in the Academy without a clear sense of what is meant; the result is imprecise and ineffective teaching. The standard lines-- that teaching is a matter of applying approved methods, that teaching is mostly a matter of teaching skills-as-means to some career or whatever--are reflective of failure in the Academy, measured…

  6. The Beck Depression Inventory, Second Edition (BDI-II): A Cross-Sample Structural Analysis

    ERIC Educational Resources Information Center

    Strunk, Kamden K.; Lane, Forrest C.

    2017-01-01

    A common concern about the Beck Depression Inventory, Second edition (BDI-II) among researchers in the area of depression has long been the single-factor scoring scheme. Methods exist for making cross-sample comparisons of latent structure but tend to rely on estimation methods that can be imprecise and unnecessarily complex. This study presents a…

  7. Fuzzy Logic as a Tool for Assessing Students' Knowledge and Skills

    ERIC Educational Resources Information Center

    Voskoglou, Michael Gr.

    2013-01-01

    Fuzzy logic, which is based on the fuzzy set theory introduced by Zadeh in 1965, provides a rich and meaningful addition to standard logic. The applications which may be generated from or adapted to fuzzy logic are wide-ranging and provide the opportunity for modeling under conditions which are imprecisely defined. In this article we develop a fuzzy…

  8. Capable Reader Program: Language Arts. Volume II. Objectives for Units B2 through B4. Bulletin No. 335.

    ERIC Educational Resources Information Center

    Long, Sandra; And Others

    Part of a curriculum series for academically gifted elementary students in the area of reading, the document presents objectives and activities for language arts instruction. There are three major objectives: (1) recognizing persuasive use of words, vague and imprecise words, multiple meanings conveyed by a single word, and propaganda techniques;…

  9. Wood flour

    Treesearch

    Craig M. Clemons; Daniel F. Caufield

    2005-01-01

    The term “wood flour” is somewhat ambiguous. Reineke states that the term wood flour “is applied somewhat loosely to wood reduced to finely divided particles approximating those of cereal flours in size, appearance, and texture”. Though its definition is imprecise, the term wood flour is in common use. Practically speaking, wood flour usually refers to wood particles...

  10. On the use and usefulness of fuzzy sets in medical AI.

    PubMed

    Steimann, F

    2001-01-01

    Since its inception fuzzy set theory has been regarded as a formalism suitable to deal with the imprecision intrinsic to many medical problems. Based on a literature survey on the first 30 years, we investigate the impact fuzzy set theory has had on the work in medical AI and point out what it is most appreciated for.

  11. An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.

    2015-01-01

    Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…

  12. Nondestructive methods for the structural evaluation of wood floor systems in historic buildings : preliminary results : [abstract

    Treesearch

    Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis

    1999-01-01

    To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...

  13. The Many Faces of Ephraim: In Search of A Functional Typology of Rural Areas.

    ERIC Educational Resources Information Center

    Whitaker, William H.

    The literature of social work and rural sociology lacks conceptualization of the term "rural" and treats the term imprecisely. According to a 1960 survey, authors dealing with rural/urban differences do not agree on the attributes of "rural." However, if the rural concept is to be a useful analytical tool and guide to social work practice, its…

  14. Wood flour

    Treesearch

    Craig M. Clemons

    2010-01-01

    The term “wood flour” is somewhat ambiguous. Reineke states that the term wood flour “is applied somewhat loosely to wood reduced to finely divided particles approximating those of cereal flours in size, appearance, and texture.” Though its definition is imprecise, the term wood flour is in common use. Practically speaking, wood flour usually refers to wood particles...

  15. Global Distribution of Businesses Marketing Stem Cell-Based Interventions.

    PubMed

    Berger, Israel; Ahmad, Amina; Bansal, Akhil; Kapoor, Tanvir; Sipp, Douglas; Rasko, John E J

    2016-08-04

    A structured search reveals that online marketing of stem-cell-based interventions is skewed toward developed economies including the United States, Ireland, Australia, and Germany. Websites made broad, imprecise therapeutic claims and frequently failed to detail procedures. Widespread marketing poses challenges to regulators, bioethicists, and those seeking realistic hope from therapies. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Childbearing and Women's Health.

    ERIC Educational Resources Information Center

    Bouvier-Colle, Marie-Helene; And Others

    1990-01-01

    A maternal death is defined as the death of a woman while she is pregnant or within 42 days of the end of her pregnancy from any cause related to or aggravated by the pregnancy or its management, but not from accidental or incidental causes. No one knows exactly how many women die from bearing children because data is imprecise in most countries.…

  17. The 1990 Transfer Assembly: Proceedings (Los Angeles, California, March 15-16, 1990).

    ERIC Educational Resources Information Center

    Williams, Dana Nicole, Ed.

    In March 1990, the Center for the Study of Community Colleges in Los Angeles hosted a Transfer Assembly as part of an on-going effort to stabilize the imprecise definitions and data relating to student transfers from community colleges to four-year institutions. This report provides excerpts from six of the presentations given to the assembly.…

  18. Over-reporting significant figures--a significant problem?

    PubMed

    Hawkins, Robert C; Badrick, Tony; Hickman, Peter E

    2007-01-01

    Excessive use of significant figures in numerical data gives a spurious impression of laboratory imprecision to clinicians. We describe reporting practices in 24 Asia-Pacific laboratories, assess whether these reporting formats and those used in the literature can be justified based on actual laboratory performance, and outline how to choose the appropriate number of significant places. Thirty-two laboratories in Asia-Pacific were surveyed as to their reporting practices for serum creatinine, ferritin, sodium and TSH. Imprecision data from the General Serum Chemistry program of the RCPA-AACB Quality Assurance Program (QAP) were used to assess whether the reporting unit magnitude implicitly suggested in Tietz, the RCPA Manual, and the General Serum Chemistry program itself was justified. There was a 75% response rate to the survey, with laboratories generally reporting data to smaller reporting units than their performance justified. Unit sizes from the RCPA Manual, Tietz, and the RCPA-AACB QAP were not justified by the majority of laboratories in the RCPA-AACB QAP. The reporting unit size used by many laboratories is not justified by present laboratory performance using a 95% probability level. A consensus on appropriate reporting unit size is needed to encourage laboratories to change their present reporting formats.
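
    One way to operationalize "appropriate number of significant places" is to forbid reporting increments smaller than the 95% analytical uncertainty band. The heuristic below is a sketch under that assumption, not the authors' exact procedure: it snaps z*SD down onto the conventional 1-2-5 series of reporting increments.

    ```python
    import math

    def reporting_increment(sd, z=1.96):
        """Smallest reporting unit justified by analytical imprecision:
        take the 95% uncertainty band (z*SD) and round it DOWN onto the
        conventional 1-2-5 series of reporting increments."""
        band = z * sd
        exp = math.floor(math.log10(band))
        mantissa = band / 10 ** exp
        for m in (5, 2, 1):
            if mantissa >= m:
                return m * 10 ** exp

    # Serum sodium with an analytical SD of 1.1 mmol/L justifies reporting
    # only to the nearest 2 mmol/L, so '139.4 mmol/L' overstates precision.
    print(reporting_increment(1.1))  # -> 2
    ```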

  19. Performance Evaluation of the Sysmex CS-5100 Automated Coagulation Analyzer.

    PubMed

    Chen, Liming; Chen, Yu

    2015-01-01

    Coagulation testing is widely applied clinically, and laboratories increasingly demand automated coagulation analyzers with short turn-around times and high throughput. The purpose of this study was to evaluate the performance of the Sysmex CS-5100 automated coagulation analyzer for routine use in a clinical laboratory. The prothrombin time (PT), international normalized ratio (INR), activated partial thromboplastin time (APTT), fibrinogen (Fbg), and D-dimer were compared between the Sysmex CS-5100 and Sysmex CA-7000 analyzers, and imprecision, method comparison, throughput, STAT function, and performance on abnormal samples were assessed for each. The within-run and between-run coefficients of variation (CV) for the PT, APTT, INR, and D-dimer analyses showed excellent results in both the normal and pathologic ranges. Results from the Sysmex CS-5100 and Sysmex CA-7000 were highly correlated. The throughput of the Sysmex CS-5100 was faster than that of the Sysmex CA-7000. There was no interference from total bilirubin or triglyceride concentrations on the Sysmex CS-5100 analyzer. We demonstrated that the Sysmex CS-5100 performs with satisfactory imprecision and is well suited for coagulation analysis in laboratories processing large sample numbers and icteric and lipemic samples.

  20. Determination of vitamin E in human plasma by high-performance liquid chromatography.

    PubMed

    Cooper, J D; Thadwal, R; Cooper, M J

    1997-03-07

    The use of selective protein precipitation to enhance the recovery of vitamin E from plasma, by minimising binding with very-low-density lipoproteins, is reported. The procedure employed treatment of plasma with magnesium chloride and tungstate, followed by methanol protein precipitation. Separation of vitamin E was performed using reversed-phase high-performance liquid chromatography of the methanol extracts with subsequent UV detection of the compound. Using this technique the procedure was observed to be specific for vitamin E and linear over the range 1.0 to 40.0 micrograms/ml. The within-run imprecision (C.V.) at three different supplemented plasma vitamin E concentrations of 5.0, 10.0 and 20.0 micrograms/ml was 4.51, 3.33 and 2.58%, respectively, and the between-run imprecision (C.V.) was estimated to be 5.19, 3.69 and 3.67%, respectively. With the same supplemented plasma vitamin E concentrations, the overall accuracy (bias) of the procedure, using an albumin matrix for calibration, was estimated to be 6.0, -5.0 and -3.5%, respectively, and the recovery of vitamin E from six different spiked plasma samples was estimated to be 98.2 +/- 2.6%.
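
    The accuracy figures quoted (bias and recovery) are straightforward to compute from spiked-sample results. A small sketch with invented numbers follows; the spiked values are hypothetical, not the study's data.

    ```python
    import numpy as np

    def bias_and_recovery(measured, nominal):
        """Relative bias (%) of the mean result against the nominal spiked
        concentration, plus mean and SD of per-sample recovery (%)."""
        m = np.asarray(measured, float)
        bias = 100.0 * (m.mean() - nominal) / nominal
        recovery = 100.0 * m / nominal
        return bias, recovery.mean(), recovery.std(ddof=1)

    # e.g. six hypothetical plasma samples spiked to 10.0 ug/mL vitamin E
    print(bias_and_recovery([9.7, 9.9, 10.1, 9.5, 9.8, 9.6], 10.0))
    ```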
