Sample records for host computing system

  1. A real-time data acquisition system using the MIL-STD-1553B bus for transmission of data to a host computer for control law processing

    NASA Technical Reports Server (NTRS)

    Peri, Frank, Jr.

    1992-01-01

    A flight digital data acquisition system that uses the MIL-STD-1553B bus for transmission of data to a host computer for control law processing is described. The instrument, the Remote Interface Unit (RIU), can accommodate up to 16 input channels and eight output channels. The RIU employs a digital signal processor to perform local digital filtering before sending data to the host. The system allows flexible sensor and actuator data organization to facilitate quick control law computations on the host computer. The instrument can also run simple control laws autonomously without host intervention. The RIU and host computer together have replaced a similar but larger ground-based minicomputer system with favorable results.
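
    As a rough illustration of the pre-filtering the abstract describes (local digital filtering on the RIU before data reach the host), here is a minimal Python sketch; the filter type, smoothing factor, and frame packing are assumptions for illustration, not details from the paper.

    ```python
    # A minimal sketch (not the RIU firmware): first-order IIR low-pass
    # filtering of each input channel before a frame is sent to the host.
    NUM_CHANNELS = 16   # the RIU accommodates up to 16 input channels
    ALPHA = 0.2         # assumed smoothing factor

    state = [0.0] * NUM_CHANNELS

    def filter_sample(channel: int, raw: float) -> float:
        """y[n] = a*x[n] + (1 - a)*y[n-1], applied per channel."""
        state[channel] = ALPHA * raw + (1.0 - ALPHA) * state[channel]
        return state[channel]

    def build_frame(samples: list[float]) -> bytes:
        """Pack filtered samples as 16-bit words for the 1553B transfer
        (the scaling and word layout are assumptions)."""
        out = bytearray()
        for s in samples:
            out += max(0, min(0xFFFF, int(s))).to_bytes(2, "big")
        return bytes(out)
    ```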

  2. The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Kusmanoff, Antone; Martin, Nancy L.

    1989-01-01

    In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are described which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with developing general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.

  3. Preliminary ISIS users manual

    NASA Technical Reports Server (NTRS)

    Grantham, C.

    1979-01-01

    The Interactive Software Invocation System (ISIS), an interactive data management system, was developed to act as a buffer between the user and the host computer system. ISIS provides the user with a powerful system for developing software or systems in an interactive environment. It protects the user from the idiosyncrasies of the host computer system by providing such a complete range of capabilities that the user should have no need for direct access to the host computer. These capabilities are divided into four areas: desk-top calculator, data editor, file manager, and tool invoker.

  4. Monitoring system including an electronic sensor platform and an interrogation transceiver

    DOEpatents

    Kinzel, Robert L.; Sheets, Larry R.

    2003-09-23

    A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT) and a general purpose host computer. The ESP functions as a remote data collector from a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals between the host computer and one or more ESPs. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust housing of the ESP suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.

  5. Active optical control system design of the SONG-China Telescope

    NASA Astrophysics Data System (ADS)

    Ye, Yu; Kou, Songfeng; Niu, Dongsheng; Li, Cheng; Wang, Guomin

    2012-09-01

    The standard SONG node control-system structure is presented. The active optical control system of the project is a distributed system comprising a host computer and a slave intelligent controller. The host control computer collects information from the wavefront sensor and sends commands to the slave computer to realize a closed-loop model. For the intelligent controller, a programmable logic controller (PLC) system is used. The system combines an industrial personal computer (IPC) with the PLC to form a powerful and reliable control system.
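
    A minimal sketch of the closed-loop model described above, in which the host reads the wavefront sensor and commands the slave PLC controller; the function names and the proportional gain are hypothetical, not taken from the SONG design.

    ```python
    # Sketch of one iteration of the host-side closed loop: read wavefront
    # errors, compute corrections, send them to the slave PLC controller.
    GAIN = 0.5  # assumed proportional gain

    def control_step(read_wavefront_sensor, send_to_plc):
        errors = read_wavefront_sensor()        # e.g. low-order aberration terms
        commands = [-GAIN * e for e in errors]  # proportional correction
        send_to_plc(commands)                   # the PLC drives the actuators
        return commands
    ```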

  6. Arranging computer architectures to create higher-performance controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1988-01-01

    Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.

  7. Polymorphous computing fabric

    DOEpatents

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per-application basis and a host processor in communication with the polymorphous computing fabric. The fabric comprises a cellular architecture that can be highly parameterized to enable customized synthesis of fabric instances for enhanced performance across a variety of applications. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  8. Agile Development of Various Computational Power Adaptive Web-Based Mobile-Learning Software Using Mobile Cloud Computing

    ERIC Educational Resources Information Center

    Zadahmad, Manouchehr; Yousefzadehfard, Parisa

    2016-01-01

    Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method that uses web technology and software engineering best practices to provide m-learning functionalities hosted in an MCC-based learning system as a service. Components hosted by MCC are used to empower developers to create…

  9. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) of simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  10. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1996-01-01

    A local host computing system and a remote host computing system are connected by a network and provide three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality. A Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.
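
    For illustration, the arithmetic behind the "nine logical components" follows from imposing the three CSS roles on each of the three service functionalities; a tiny sketch (names paraphrased from the abstract) makes this explicit.

    ```python
    # The CSS model imposed on each of the three service functionalities
    # yields a client, a server, and a service component per functionality,
    # i.e. 3 x 3 = 9 logical components.
    FUNCTIONALITIES = ["human interface", "starter", "desired utility"]
    CSS_ROLES = ["client", "server", "service"]

    components = [(f, r) for f in FUNCTIONALITIES for r in CSS_ROLES]
    assert len(components) == 9  # the nine logical components
    ```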

  11. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1997-12-09

    A local host computing system and a remote host computing system are connected by a network and provide three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality. A Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  12. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1999-01-01

    A local host computing system and a remote host computing system are connected by a network and provide three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality. A Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  13. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1996-08-06

    A local host computing system and a remote host computing system are connected by a network and provide three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality. A Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  14. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1997-01-01

    A local host computing system and a remote host computing system are connected by a network and provide three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality. A Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  15. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator (RRCS) with a MIMD-SIMD parallel architecture for interfacing with an external host computer, providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each being a SIMD parallel processor capable of exploiting fine-grain parallelism and further able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between them. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
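
    A schematic sketch of the host-processor control flow the patent describes (interpret, decompose, distribute over the system bus, initiate by procedure call); the task structure, round-robin assignment, and callable interfaces are assumptions for illustration.

    ```python
    # Sketch of the host-processor flow: decompose an instruction from the
    # external host into per-processor computations, distribute data over
    # the system bus, and start each SIMD processor by procedure call.
    from dataclasses import dataclass

    @dataclass
    class SimdTask:
        processor_id: int
        operation: str    # e.g. "matvec"
        data: list

    def decompose(instruction: str, data_blocks: list,
                  n_processors: int) -> list[SimdTask]:
        """Split one instruction into tasks assigned round-robin (an assumption)."""
        return [SimdTask(i % n_processors, instruction, block)
                for i, block in enumerate(data_blocks)]

    def run(tasks, bus_send, start_processor):
        for t in tasks:
            bus_send(t.processor_id, t.data)              # distribute over the bus
            start_processor(t.processor_id, t.operation)  # initiate the computation
    ```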

  16. DBSecSys: a database of Burkholderia mallei secretion systems.

    PubMed

    Memišević, Vesna; Kumar, Kamal; Cheng, Li; Zavaljevski, Nela; DeShazer, David; Wallqvist, Anders; Reifman, Jaques

    2014-07-16

    Bacterial pathogenicity represents a major public health concern worldwide. Secretion systems are a key component of bacterial pathogenicity, as they provide the means for bacterial proteins to penetrate host-cell membranes and insert themselves directly into the host cells' cytosol. Burkholderia mallei is a Gram-negative bacterium that uses multiple secretion systems during its host infection life cycle. To date, the identities of secretion system proteins for B. mallei are not well known, and their pathogenic mechanisms of action and host factors are largely uncharacterized. We present the Database of Burkholderia mallei Secretion Systems (DBSecSys), a compilation of manually curated and computationally predicted bacterial secretion system proteins and their host factors. Currently, DBSecSys contains comprehensive experimentally and computationally derived information about B. mallei strain ATCC 23344. The database includes 143 B. mallei proteins associated with five secretion systems, their 1,635 human and murine interacting targets, and the corresponding 2,400 host-B. mallei interactions. The database also includes information about 10 pathogenic mechanisms of action for B. mallei secretion system proteins inferred from the available literature. Additionally, DBSecSys provides details about 42 virulence attenuation experiments for 27 B. mallei secretion system proteins. Users interact with DBSecSys through a Web interface that allows for data browsing, querying, visualizing, and downloading. DBSecSys provides a comprehensive, systematically organized resource of experimental and computational data associated with B. mallei secretion systems. It provides the unique ability to study secretion systems not only through characterization of their corresponding pathogen proteins, but also through characterization of their host-interacting partners. The database is available at https://applications.bhsai.org/dbsecsys.

  17. Computer-Based Internet-Hosted Assessment of L2 Literacy: Computerizing and Administering of the Oxford Quick Placement Test in ExamView and Moodle

    NASA Astrophysics Data System (ADS)

    Meurant, Robert C.

    Sorting of Korean English-as-a-Foreign-Language (EFL) university students by Second Language (L2) aptitude allocates students to classes of compatible ability level, and was here used to screen candidates for interview. Paper-and-pen versions of the Oxford Quick Placement Test were adapted to computer-based testing via online hosting using FSCreations ExamView. Problems with their online hosting site led to conversion to the popular computer-based learning management system Moodle, hosted on www.ninehub.com. 317 sophomores were tested online to encourage L2 digital literacy. Strategies for effective hybrid implementation of Learning Management Systems in L2 tertiary education include computer-based Internet-hosted L2 aptitude tests. These potentially provide a convenient measure of student progress in developing L2 fluency, and offer a more objective and relevant means of teacher- and course-assessment than student evaluations, which tend to confuse entertainment value and teacher popularity with academic credibility and pedagogical effectiveness.

  18. Mobile code security

    NASA Astrophysics Data System (ADS)

    Ramalingam, Srikumar

    2001-11-01

    A highly secure mobile agent system is very important for a mobile computing environment. The security issues in mobile agent systems comprise protecting mobile hosts from malicious agents, protecting agents from other malicious agents, protecting hosts from other malicious hosts and protecting agents from malicious hosts. Using traditional security mechanisms, the first three security problems can be solved. Apart from using trusted hardware, very few approaches exist to protect mobile code from malicious hosts. Some of the approaches to solve this problem are the use of trusted computing, computing with encrypted functions, steganography, cryptographic traces, Seal Calculus, etc. This paper focuses on the simulation of some of these existing techniques in the designed mobile language. Some new approaches to solve the malicious network problem and the agent tampering problem are developed using a public key encryption system and steganographic concepts. The approaches are based on encrypting and hiding the partial solutions of the mobile agents. The partial results are stored, and the address of the storage is destroyed as the agent moves from one host to another. This allows only the originator to make use of the partial results. Through these approaches some of the existing problems are solved.
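
    A conceptual sketch of the partial-result idea described above: each host's partial result is sealed so that only the originator can recover it, and the agent carries only an opaque handle as it migrates. The paper uses public-key encryption and steganography; this sketch substitutes symmetric Fernet encryption (from the third-party cryptography package) purely for brevity, and the storage layout is assumed.

    ```python
    # Sketch only: the paper's scheme uses public-key encryption; symmetric
    # Fernet (third-party `cryptography` package) stands in for brevity.
    from cryptography.fernet import Fernet

    originator_key = Fernet.generate_key()  # held only by the agent's originator
    store = {}                              # stands in for per-host result storage

    def seal_partial_result(host_name: str, result: bytes) -> str:
        """Encrypt a partial result; the agent keeps only an opaque handle."""
        handle = f"{host_name}:{len(store)}"
        store[handle] = Fernet(originator_key).encrypt(result)
        return handle  # the plaintext and its address are not retained on the host

    def originator_recover(handle: str) -> bytes:
        """Only the originator, holding the key, can read the partial results."""
        return Fernet(originator_key).decrypt(store[handle])
    ```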

  19. Distributed solar radiation fast dynamic measurement for PV cells

    NASA Astrophysics Data System (ADS)

    Wan, Xuefen; Yang, Yi; Cui, Jian; Du, Xingjing; Zheng, Tao; Sardar, Muhammad Sohail

    2017-10-01

    To study the operating characteristics of PV cells, attention must be given to the dynamic behavior of the solar radiation. The dynamic behaviors of annual, monthly, daily and hourly averages of solar radiation have been studied in detail, but the faster dynamic behavior of solar radiation requires further research. Random fluctuations of solar radiation in the minute-long or second-long range, which produce alternating radiation and cool down/warm up the PV cell frequently, decrease conversion efficiency. Fast dynamic processes of solar radiation are mainly related to the stochastic movement of clouds; even in clear-sky conditions, solar irradiation shows a certain degree of fast variation. To evaluate the operating characteristics of PV cells under fast dynamic irradiation, a solar radiation measuring array (SRMA) based on large-active-area photodiodes, LoRa spread spectrum communication and nanoWatt MCUs is proposed. The crossed-photodiode structure tracks the fast stochastic movement of clouds. To compensate for the response time of the pyranometer and reduce system cost, terminal nodes with low-cost, fast-responding large-active-area photodiodes are placed beside the positions of the tested PV cells. A central node, consisting of a pyranometer, a large-active-area photodiode, a wind detector and a host computer, is placed at the center of the topology to record the temporal envelope of solar irradiation and obtain calibration information between the pyranometer and the large-active-area photodiodes. In our SRMA system, the terminal nodes are designed around Microchip's nanoWatt XLP PIC16F1947. The FDS-100 is adopted as the large-active-area photodiode in the terminal nodes and at the host computer. The output current and voltage of each PV cell are monitored by I/V measurement. AS62-T27/SX1278 LoRa communication modules are used for communication between the terminal nodes and the host computer. Because the LoRa LPWAN (Low Power Wide Area Network) specification provides seamless interoperability among smart things without the need for complex local installations, configuring our SRMA system is very easy. LoRa also gives SRMA a means to overcome the short communication distance and weather-related signal propagation decline seen in ZigBee and WiFi. The host computer in the SRMA system is the low-power single-board PC EMB-3870 produced by NORCO. A wind direction sensor SM5386B and a wind-force sensor SM5387B are connected to the host computer through an RS-485 bus for wind reference data collection, and a Davis 6450 solar radiation sensor, a precision instrument that detects radiation at wavelengths of 300 to 1100 nanometers, allows the host computer to follow real-time solar radiation. A LoRa polling scheme is adopted for the communication between the host computer and the terminal nodes in SRMA. An experimental SRMA was established and tested in Ganyu, Jiangsu Province, from May to August 2016. In the test, the distances between the nodes and the host computer were between 100 m and 1900 m. In operation, the SRMA system showed high reliability: terminal nodes followed the instructions from the host computer and collected solar radiation data of distributed PV cells effectively, the host computer managed the SRMA and acquired the reference parameters well, and communications between the host computer and terminal nodes were almost unaffected by the weather. In conclusion, the testing results show that SRMA can be a capable method for fast dynamic measurement of solar radiation and related PV cell operating characteristics.
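
    A minimal sketch of a LoRa polling round of the kind the abstract mentions, with the host querying each terminal node in turn; the transport callables, node addresses, message format, and timeout are assumptions, not the actual SRMA protocol.

    ```python
    # Sketch of one polling round; `send` and `receive` wrap the LoRa
    # module's serial interface and are left abstract here.
    import time

    NODE_IDS = [1, 2, 3, 4]   # assumed terminal-node addresses
    POLL_TIMEOUT_S = 2.0

    def poll_cycle(send, receive):
        readings = {}
        for node in NODE_IDS:
            send(b"POLL:%d" % node)                 # query one terminal node
            deadline = time.monotonic() + POLL_TIMEOUT_S
            while time.monotonic() < deadline:
                reply = receive()                   # returns bytes or None
                if reply and reply.startswith(b"%d:" % node):
                    readings[node] = float(reply.split(b":", 1)[1].decode())
                    break
        return readings
    ```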

  20. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.

  1. An Electronic Pressure Profile Display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
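
    A minimal sketch of the host-side processing described above (raw DPT counts converted to engineering units before display); the frame layout and scaling constants are assumptions for illustration.

    ```python
    # Sketch of converting raw DPT counts to engineering units before display.
    RAW_FULL_SCALE = 4095   # assumed 12-bit transmitter resolution
    PSI_FULL_SCALE = 15.0   # assumed transducer range in psi

    def counts_to_psi(raw: int) -> float:
        """Linear conversion from raw counts to engineering units."""
        return raw / RAW_FULL_SCALE * PSI_FULL_SCALE

    def parse_frame(frame: bytes) -> list[float]:
        """Unpack a frame of 2-byte big-endian counts, one pair per channel."""
        return [counts_to_psi(int.from_bytes(frame[i:i + 2], "big"))
                for i in range(0, len(frame), 2)]
    ```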

  2. An electronic pressure profile display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.

  3. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
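
    An illustrative sketch of the dispatch logic the brief describes, in which each byte code received from the host is compared against a prestored table (the ROM) to open or close a matched switch; the specific codes and switch names are invented for illustration.

    ```python
    # Sketch of the byte-code dispatch: a received code is looked up in a
    # prestored table (the ROM in the brief) to open or close a switch.
    COMMAND_TABLE = {            # invented codes, for illustration only
        0x10: ("pan_left", True),
        0x11: ("pan_left", False),
        0x20: ("zoom_in", True),
        0x21: ("zoom_in", False),
    }

    switches = {}

    def handle_byte(code: int) -> None:
        entry = COMMAND_TABLE.get(code)
        if entry is None:
            return                   # unknown code: ignore as a safe default
        name, closed = entry
        switches[name] = closed      # close (True) or open (False) the switch
    ```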

  4. Microcomputer software development facilities

    NASA Technical Reports Server (NTRS)

    Gorman, J. S.; Mathiasen, C.

    1980-01-01

    A more efficient and cost-effective method for developing microcomputer software is to utilize a host computer with high-speed peripheral support. Application programs such as cross assemblers, loaders, and simulators are implemented in the host computer for each of the microcomputers for which software development is a requirement. The host computer is configured to operate in a time-share mode for multiple users. The remote terminals, printers, and downloading capabilities provided are based on user requirements. With this configuration, a user, either local or remote, can use the host computer for microcomputer software development. Once the software is developed (through the code and modular debug stage), it can be downloaded to the development system or emulator in a test area where hardware/software integration functions can proceed. The microcomputer software program sources reside in the host computer and can be edited, assembled, loaded, and then downloaded as required until the software development project has been completed.

  5. Cooperative processing user interfaces for AdaNET

    NASA Technical Reports Server (NTRS)

    Gutzmann, Kurt M.

    1991-01-01

    A cooperative processing user interface (CUI) system shares the task of graphical display generation and presentation between the user's computer and a remote host. The communications link between the two computers is typically a modem or Ethernet. The two main purposes of a CUI are reduction of the amount of data transmitted between user and host machines, and provision of a graphical user interface system to make the system easier to use.

  6. A remote monitoring system for patients with implantable ventricular assist devices with a personal handy phone system.

    PubMed

    Okamoto, E; Shimanaka, M; Suzuki, S; Baba, K; Mitamura, Y

    1999-01-01

    The usefulness of a remote monitoring system that uses a personal handy phone for patients with implanted artificial hearts was investigated. The type of handy phone used in this study was a personal handy phone system (PHS), a system developed in Japan that uses the NTT (Nippon Telephone and Telegraph, Inc.) telephone network service. The PHS has several advantages: high-speed data transmission, low power output, little electromagnetic interference with medical devices, and easy locating of patients. In our system, patients have a mobile computer (Toshiba Libretto 50, Kawasaki, Japan) for data transmission control between an implanted controller and a host computer (NEC PC-9821V16) in the hospital. Information on the motor rotational angle (8 bits) and motor current (8 bits) of the implanted motor-driven heart is fed into the mobile computer from the implanted controller (Hitachi H8/532, Yokohama, Japan) according to 32-bit command codes from the host computer. Motor current and motor rotational angle data from inside the body are framed together with a control code (frame number and parity) for data error checking and correcting at the receiving site, and the data are sent through the PHS connection to the mobile computer. The host computer calculates pump outflow and arterial pressure from the motor rotational angle and motor current values and displays the data in real-time waveforms. The results of this study showed that accurate data on motor rotational angle and current could be transmitted to the host computer at a data transmission rate of 9600 bps while the subjects were walking or driving a car. This system is useful for remote monitoring of patients with an implanted artificial heart.
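
    A minimal sketch of the framing scheme the abstract outlines: the 8-bit motor rotational angle and 8-bit motor current are framed with a frame number and a parity check so errors can be detected at the receiving site. The exact frame layout and the XOR parity are assumptions.

    ```python
    # Sketch of a 4-byte frame: frame number, angle, current, XOR parity.
    def make_frame(frame_no: int, angle: int, current: int) -> bytes:
        body = bytes([frame_no & 0xFF, angle & 0xFF, current & 0xFF])
        return body + bytes([body[0] ^ body[1] ^ body[2]])   # append parity

    def check_frame(frame: bytes):
        """Return (frame_no, angle, current), or None if the parity check fails."""
        if len(frame) != 4 or (frame[0] ^ frame[1] ^ frame[2]) != frame[3]:
            return None
        return frame[0], frame[1], frame[2]
    ```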

  7. A system for distributed intrusion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snapp, S.R.; Brentano, J.; Dias, G.V.

    1991-01-01

    The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that abandoning the existing, huge infrastructure of possibly insecure computer and network systems is impossible, and that replacing them with totally secure systems may not be feasible or cost-effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers, like our previous work, a network security monitor (NSM), as well. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager which is placed at a single secure location and which receives reports from the various host and LAN managers, processes these reports, correlates them, and detects intrusions. 11 refs., 2 figs.
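
    A conceptual sketch of the report flow in the proposed architecture, with host and LAN managers reporting upward and the central manager correlating events; the report fields and the simple cross-host threshold rule are assumptions, not the NSM design.

    ```python
    # Sketch of the central manager correlating reports from host/LAN managers.
    from collections import Counter

    failed_logins = Counter()
    ALERT_THRESHOLD = 5   # assumed: N failed logins for one user, across hosts

    def receive_report(report: dict) -> None:
        """Called for each report sent up by a host or LAN manager."""
        if report.get("event") == "failed_login":
            user = report["user"]
            failed_logins[user] += 1
            if failed_logins[user] >= ALERT_THRESHOLD:
                print(f"ALERT: repeated failed logins for {user} across hosts")
    ```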

  8. Documentary of MFENET, a national computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuttleworth, B.O.

    1977-06-01

    The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.

  9. Design of cylindrical pipe automatic welding control system based on STM32

    NASA Astrophysics Data System (ADS)

    Chen, Shuaishuai; Shen, Weicong

    2018-04-01

    The development of the modern economy has rapidly increased the demand for pipeline construction, and pipeline welding has become an important link in pipeline construction. At present, manual welding methods are still widely used at home and abroad, and field pipe welding in particular lacks miniature, portable automatic welding equipment. An automated welding system consists of a control system, comprising a lower-computer control panel and a host-computer operating interface, together with automatic welding machine mechanisms and welding power systems that work in coordination with the control system. In this paper, a new control system for automatic pipe welding based on the lower-computer control panel and the host-computer interface is proposed, which has many advantages over traditional automatic welding machines.

  10. Calculating binding free energies of host-guest systems using the AMOEBA polarizable force field.

    PubMed

    Bell, David R; Qi, Rui; Jing, Zhifeng; Xiang, Jin Yu; Mejias, Christopher; Schnieders, Michael J; Ponder, Jay W; Ren, Pengyu

    2016-11-09

    Molecular recognition is of paramount interest in many applications. Here we investigate a series of host-guest systems previously used in the SAMPL4 blind challenge by using molecular simulations and the AMOEBA polarizable force field. The free energy results computed by Bennett's acceptance ratio (BAR) method using the AMOEBA polarizable force field ranked favorably among the entries submitted to the SAMPL4 host-guest competition [Muddana, et al., J. Comput.-Aided Mol. Des., 2014, 28, 305-317]. In this work we conduct an in-depth analysis of the AMOEBA force field host-guest binding thermodynamics by using both BAR and the orthogonal space random walk (OSRW) methods. The binding entropy-enthalpy contributions are analyzed for each host-guest system. For systems of inordinate binding entropy-enthalpy values, we further examine the hydrogen bonding patterns and configurational entropy contribution. The binding mechanism of this series of host-guest systems varies from ligand to ligand, driven by enthalpy and/or entropy changes. Convergence of BAR and OSRW binding free energy methods is discussed. Ultimately, this work illustrates the value of molecular modelling and advanced force fields for the exploration and interpretation of binding thermodynamics.
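
    For reference, the Bennett acceptance ratio (BAR) method used above obtains the free energy difference between two states from energy differences evaluated in both directions. In one standard statement (notation assumed here, following Bennett's 1976 formulation), a constant C is found by solving the self-consistency condition

    \[
    \sum_{i=1}^{n_0} \frac{1}{1 + \exp\!\left[\beta\,(\Delta U_i - C)\right]}
    \;=\;
    \sum_{j=1}^{n_1} \frac{1}{1 + \exp\!\left[-\beta\,(\Delta U_j - C)\right]},
    \qquad
    \Delta A = C - \beta^{-1} \ln\frac{n_1}{n_0},
    \]

    where \(\Delta U = U_1 - U_0\) is evaluated over the \(n_0\) configurations sampled from state 0 (left sum) and the \(n_1\) configurations sampled from state 1 (right sum), and \(\beta = 1/k_B T\).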

  11. Enhancing the role of veterinary vaccines reducing zoonotic diseases of humans: Linking systems biology with vaccine development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Leslie G.; Khare, Sangeeta; Lawhon, Sara D.

    The aim of research on infectious diseases is their prevention, and brucellosis and salmonellosis as such are classic examples of worldwide zoonoses for application of a systems biology approach for enhanced rational vaccine development. When used optimally, vaccines prevent disease manifestations, reduce transmission of disease, decrease the need for pharmaceutical intervention, and improve the health and welfare of animals, as well as indirectly protecting against zoonotic diseases of people. Advances in the last decade or so using comprehensive systems biology approaches linking genomics, proteomics, bioinformatics, and biotechnology with immunology, pathogenesis and vaccine formulation and delivery are expected to enable enhanced approaches to vaccine development. The goal of this paper is to evaluate the role of computational systems biology analysis of host:pathogen interactions (the interactome) as a tool for enhanced rational design of vaccines. Systems biology is bringing a new, more robust approach to veterinary vaccine design based upon a deeper understanding of the host-pathogen interactions and their impact on the host's molecular network of the immune system. A computational systems biology method was utilized to create interactome models of the host responses to Brucella melitensis (BMEL), Mycobacterium avium paratuberculosis (MAP), Salmonella enterica Typhimurium (STM), and a Salmonella mutant (isogenic ΔsipA, sopABDE2) and linked to the basis for rational development of vaccines for brucellosis and salmonellosis as reviewed by Adams et al. and Ficht et al. [1,2]. A bovine ligated ileal loop biological model was established to capture the host gene expression response at multiple time points post infection. New methods based on Dynamic Bayesian Network (DBN) machine learning were employed to conduct a comparative pathogenicity analysis of 219 signaling and metabolic pathways and 1620 gene ontology (GO) categories that defined the host's biosignatures to each infectious condition. Through this DBN computational approach, the method identified significantly perturbed pathways and GO category groups of genes that define the pathogenicity signatures of the infectious agent. Our preliminary results provide deeper understanding of the overall complexity of host innate immune response as well as the identification of host gene perturbations that define a unique host temporal biosignature response to each pathogen. The application of advanced computational methods for developing interactome models based on DBNs has proven to be instrumental in elucidating novel host responses and improved functional biological insight into the host defensive mechanisms. Evaluating the unique differences in pathway and GO perturbations across pathogen conditions allowed the identification of plausible host-pathogen interaction mechanisms. Accordingly, a systems biology approach to study molecular pathway gene expression profiles of host cellular responses to microbial pathogens holds great promise as a methodology to identify, model and predict the overall dynamics of the host-pathogen interactome. Thus, we propose that such an approach has immediate application to the rational design of brucellosis and salmonellosis vaccines.

  12. Enhancing the role of veterinary vaccines reducing zoonotic diseases of humans: linking systems biology with vaccine development.

    PubMed

    Adams, L Garry; Khare, Sangeeta; Lawhon, Sara D; Rossetti, Carlos A; Lewin, Harris A; Lipton, Mary S; Turse, Joshua E; Wylie, Dennis C; Bai, Yu; Drake, Kenneth L

    2011-09-22

    The aim of research on infectious diseases is their prevention, and brucellosis and salmonellosis as such are classic examples of worldwide zoonoses for application of a systems biology approach for enhanced rational vaccine development. When used optimally, vaccines prevent disease manifestations, reduce transmission of disease, decrease the need for pharmaceutical intervention, and improve the health and welfare of animals, as well as indirectly protecting against zoonotic diseases of people. Advances in the last decade or so using comprehensive systems biology approaches linking genomics, proteomics, bioinformatics, and biotechnology with immunology, pathogenesis and vaccine formulation and delivery are expected to enable enhanced approaches to vaccine development. The goal of this paper is to evaluate the role of computational systems biology analysis of host:pathogen interactions (the interactome) as a tool for enhanced rational design of vaccines. Systems biology is bringing a new, more robust approach to veterinary vaccine design based upon a deeper understanding of the host-pathogen interactions and their impact on the host's molecular network of the immune system. A computational systems biology method was utilized to create interactome models of the host responses to Brucella melitensis (BMEL), Mycobacterium avium paratuberculosis (MAP), Salmonella enterica Typhimurium (STM), and a Salmonella mutant (isogenic ΔsipA, sopABDE2) and linked to the basis for rational development of vaccines for brucellosis and salmonellosis as reviewed by Adams et al. and Ficht et al. [1,2]. A bovine ligated ileal loop biological model was established to capture the host gene expression response at multiple time points post infection. New methods based on Dynamic Bayesian Network (DBN) machine learning were employed to conduct a comparative pathogenicity analysis of 219 signaling and metabolic pathways and 1620 gene ontology (GO) categories that defined the host's biosignatures to each infectious condition. Through this DBN computational approach, the method identified significantly perturbed pathways and GO category groups of genes that define the pathogenicity signatures of the infectious agent. Our preliminary results provide deeper understanding of the overall complexity of host innate immune response as well as the identification of host gene perturbations that define a unique host temporal biosignature response to each pathogen. The application of advanced computational methods for developing interactome models based on DBNs has proven to be instrumental in elucidating novel host responses and improved functional biological insight into the host defensive mechanisms. Evaluating the unique differences in pathway and GO perturbations across pathogen conditions allowed the identification of plausible host-pathogen interaction mechanisms. Accordingly, a systems biology approach to study molecular pathway gene expression profiles of host cellular responses to microbial pathogens holds great promise as a methodology to identify, model and predict the overall dynamics of the host-pathogen interactome. Thus, we propose that such an approach has immediate application to the rational design of brucellosis and salmonellosis vaccines. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Design of Remote GPRS-based Gas Data Monitoring System

    NASA Astrophysics Data System (ADS)

    Yan, Xiyue; Yang, Jianhua; Lu, Wei

    2018-01-01

    In order to solve the problem of remote data transmission from gas flowmeters and realize unattended operation on site, an unattended remote monitoring system for gas data based on GPRS is designed in this paper. The slave computer of this system uses an embedded microprocessor to read data from the gas flowmeter over the RS-232 bus and transfers it to the host computer through a DTU. On the host computer, a VB program dynamically binds the Winsock control to receive and parse the data. Using dynamic data exchange, the Kingview configuration software provides history trend curves, real-time trend curves, alarms, printing, web browsing and other functions.

  14. Protecting software agents from malicious hosts using quantum computing

    NASA Astrophysics Data System (ADS)

    Reisner, John; Donkor, Eric

    2000-07-01

    We evaluate how quantum computing can be applied to security problems for software agents. Agent-based computing, which merges technological advances in artificial intelligence and mobile computing, is a rapidly growing domain, especially in applications such as electronic commerce, network management, information retrieval, and mission planning. System security is one of the more prominent research areas in agent-based computing, and the specific problem of protecting a mobile agent from a potentially hostile host is one of the most difficult of these challenges. In this work, we describe our agent model and discuss the capabilities and limitations of classical solutions to the malicious host problem. Quantum computing may be extremely helpful in addressing the limitations of classical solutions to this problem. This paper highlights some of the areas where quantum computing could be applied to agent security.

  15. Definition and maintenance of a telemetry database dictionary

    NASA Technical Reports Server (NTRS)

    Knopf, William P. (Inventor)

    2007-01-01

    A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to the computer system that hosts the telemetry dictionary database. This is followed by a remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
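
    A minimal sketch of the loading process's first step (each worksheet of a received workbook converted to a CSV file); it relies on the third-party openpyxl package, and the paths and per-sheet naming are assumptions rather than details from the patent.

    ```python
    # Sketch: write one CSV per worksheet of a received workbook.
    import csv
    from openpyxl import load_workbook  # third-party package, assumed here

    def workbook_to_csv(xlsx_path: str, out_dir: str) -> list[str]:
        wb = load_workbook(xlsx_path, read_only=True)
        written = []
        for ws in wb.worksheets:
            out_path = f"{out_dir}/{ws.title}.csv"
            with open(out_path, "w", newline="") as f:
                writer = csv.writer(f)
                for row in ws.iter_rows(values_only=True):
                    writer.writerow(row)   # openpyxl yields one tuple per row
            written.append(out_path)
        return written
    ```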

  16. An Intelligent Terminal for Access to a Medical Database

    PubMed Central

    Womble, M. E.; Wilson, S. D.; Keiser, H. N.; Tworek, M. L.

    1978-01-01

    Very powerful data base management systems (DBMS) now exist which allow medical personnel access to patient record data bases. DBMS's make it easy to retrieve either complete or abbreviated records of patients with similar characteristics. In addition, statistics on data base records are immediately accessible. However, the price of this power is a large computer with the inherent problems of access, response time, and reliability. If a general purpose, time-shared computer is used to get this power, the response time to a request can be either rapid or slow, depending upon loading by other users. Furthermore, if the computer is accessed via dial-up telephone lines, there is competition with other users for telephone ports. If either the DBMS or the host machine is replaced, the medical users, who are typically not sophisticated in computer usage, are forced to learn the new system. Microcomputers, because of their low cost and adaptability, lend themselves to a solution of these problems. A microprocessor-based intelligent terminal has been designed and implemented at the USAF School of Aerospace Medicine to provide a transparent interface between the user and his data base. The intelligent terminal system includes multiple microprocessors, floppy disks, a CRT terminal, and a printer. Users interact with the system at the CRT terminal using menu selection (framing). The system translates the menu selection into the query language of the DBMS and handles all actual communication with the DBMS and its host computer, including telephone dialing and sign on procedures, as well as the actual data base query and response. Retrieved information is stored locally for CRT display, hard copy production, and/or permanent retention. Microprocessor-based communication units provide security for sensitive medical data through encryption/decryption algorithms and high reliability error detection transmission schemes. Highly modular software design permits adaptation to a different DBMS and/or host computer with only minor localized software changes. Importantly, this portability is completely transparent to system users. Although the terminal system is independent of the host computer and its DBMS, it has been linked to a UNIVAC 1108 computer supporting MRI's SYSTEM 2000 DBMS.
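
    An illustrative sketch of the menu-to-query translation described above, where a frame (menu) selection is mapped to the host DBMS's query language so the user never writes queries directly; the menu entries and SYSTEM 2000-flavored syntax are invented for illustration.

    ```python
    # Sketch of framing (menu selection) translated into a DBMS query.
    MENU = {   # invented menu entries and query templates
        "1": ("patients by diagnosis", "PRINT NAME, AGE WHERE DIAGNOSIS EQ {value}"),
        "2": ("patients older than age", "PRINT NAME WHERE AGE GT {value}"),
    }

    def menu_to_query(choice: str, value: str) -> str:
        """Map a menu choice plus a user-entered value to a query string."""
        _, template = MENU[choice]
        return template.format(value=value)

    # menu_to_query("1", "HYPERTENSION")
    # -> 'PRINT NAME, AGE WHERE DIAGNOSIS EQ HYPERTENSION'
    ```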

  17. Space lab system analysis

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Rives, T. B.

    1987-01-01

    An analytical study of the HOSC Generic Peripheral processing system was conducted. The results are summarized, and they indicate that the maximum delay in performing screen change requests should be less than 2.5 sec, occurring for a slow VAX host-to-video-screen I/O rate of 50 KBps. This delay is due to the average I/O rate from the video terminals to their host computer. The software structure of the main computers and the host computers will have a greater impact on screen change or refresh response times. The HOSC data system model was updated by a newly coded PASCAL-based simulation program which was installed on the HOSC VAX system. This model is described and documented. Suggestions are offered to fine-tune the performance of the Ethernet interconnection network. Suggestions for using the Nutcracker by Excelan to trace itinerant packets which appear on the network from time to time were offered in discussions with the HOSC personnel. Several visits to the HOSC facility were made to install and demonstrate the simulation model.

  18. Network Penetration Testing and Research

    NASA Technical Reports Server (NTRS)

    Murphy, Brandon F.

    2013-01-01

    This paper will focus on the research and testing done on penetrating a network for security purposes. This research will provide the IT security office new methods of attacks across and against a company's network as well as introduce them to new platforms and software that can be used to better assist with protecting against such attacks. Throughout this paper, testing and research has been done on two different Linux-based operating systems for attacking and compromising a Windows-based host computer. Backtrack 5 and BlackBuntu (Linux-based penetration testing operating systems) are two different "attacker" computers that will attempt to plant viruses and exploits on a host Windows 7 operating system, as well as try to retrieve information from the host. On each Linux OS (Backtrack 5 and BlackBuntu) there is penetration testing software which provides the necessary tools to create exploits that can compromise a Windows system as well as other operating systems. This paper will focus on two main methods of deploying exploits onto a host computer in order to retrieve information from a compromised system. One method of deployment for an exploit that was tested is known as a "social engineering" exploit. This type of method requires interaction from an unsuspecting user. With this user interaction, a deployed exploit may allow a malicious user to gain access to the unsuspecting user's computer as well as the network that the computer is connected to. Due to more advanced security settings and antivirus protection and detection, this method is easily identified and defended against. The second method of exploit deployment is the method mainly focused upon within this paper. This method required extensive research on the best way to compromise a security-enabled protected network. Once a network has been compromised, any and all devices connected to that network have the potential to be compromised as well. With a compromised network, computers and devices can be penetrated through deployed exploits. This paper will illustrate the research done to test the ability to penetrate a network without user interaction, in order to retrieve personal information from a targeted host.

  19. First 3D reconstruction of the rhizocephalan root system using MicroCT

    NASA Astrophysics Data System (ADS)

    Noever, Christoph; Keiler, Jonas; Glenner, Henrik

    2016-07-01

    Parasitic barnacles (Cirripedia: Rhizocephala) are highly specialized parasites of crustaceans. Instead of an alimentary tract for feeding, they utilize a system of roots which infiltrates the body of their hosts to absorb nutrients. Using X-ray micro computer tomography (MicroCT) and computer-aided 3D reconstruction, we document the spatial organization of this root system, the interna, inside the intact host and also demonstrate its use for morphological examination of the parasite's reproductive part, the externa. This is the first 3D visualization of the unique root system of the Rhizocephala in situ, showing how it is related to the inner organs of the host. We investigated the interna from different parasitic barnacles of the family Peltogastridae, which are parasitic on anomuran crustaceans. Rhizocephalan parasites of pagurid hermit crabs and lithodid crabs were analysed in this study.

  20. Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool

    NASA Astrophysics Data System (ADS)

    Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong

    2016-06-01

    The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new-generation imaging logging tool that is able to investigate strata in a relatively large volume around the borehole. The BAAR is designed around the idea of modularization but has a very complex structure, so a dedicated test-bench system is needed to debug each of its modules. With the test-bench system introduced in this paper, testing and calibration of the BAAR can be carried out easily. The test-bench system is designed on the client/server model. The hardware mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board and a telemetry communication board. The host computer serves as the human-machine interface and processes the uploaded data. The software running on the host computer is developed in VC++. The embedded controlling board uses an Advanced RISC Machines 7 (ARM7) micro controller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed on the uClinux operating system. The bus interface board, data acquisition board and telemetry communication board are designed around a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on the BAAR. By analyzing the test results, an unqualified channel of the electronic receiving cabin was discovered. The test-bench system can thus be used to quickly determine the working condition of the BAAR's sub-modules, and it is of great significance in improving production efficiency and accelerating industrial production of the logging tool.
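
    The division of labor described above, a host front-end commanding an embedded board over Ethernet, can be illustrated with a minimal sketch. The actual host software is written in VC++; Python is used here only for brevity, and the board address, port, and command strings are illustrative assumptions, not the BAAR protocol.

        # Minimal sketch of a host-to-board test command over Ethernet.
        # Address, port, and command format are illustrative assumptions.
        import socket

        BOARD_ADDR = ("192.168.0.10", 5000)       # assumed ARM7 board endpoint

        def send_command(cmd: bytes) -> bytes:
            with socket.create_connection(BOARD_ADDR, timeout=2.0) as sock:
                sock.sendall(cmd)
                return sock.recv(4096)            # status/data frame from the board

        reply = send_command(b"TEST CH=3\n")      # hypothetical channel-test command
        print(reply)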

  1. Treecode with a Special-Purpose Processor

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we created this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.
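
    The key idea, building one interaction list shared by a whole group of neighboring particles and letting GRAPE sum the forces from that list, can be sketched as follows. The tree representation and opening criterion below are simplified illustrations under assumed names, not the paper's exact implementation; Python is used for readability.

        # Sketch of the grouped tree walk: one node list per particle group.
        import numpy as np

        class Node:
            def __init__(self, com, mass, size, children=()):
                self.com, self.mass = np.asarray(com, float), mass
                self.size, self.children = size, children

        def interaction_list(node, center, radius, theta, out):
            # Accept a node if it is well separated from the whole group;
            # otherwise open it and recurse into its children.
            d = np.linalg.norm(node.com - center) - radius
            if not node.children or (d > 0 and node.size / d < theta):
                out.append(node)
            else:
                for child in node.children:
                    interaction_list(child, center, radius, theta, out)
            return out

        def group_forces(positions, nodes, eps=1e-2):
            # Direct summation over the shared list: the part GRAPE performs.
            acc = np.zeros_like(positions)
            for n in nodes:
                dr = n.com - positions
                r2 = (dr * dr).sum(axis=1) + eps**2
                acc += n.mass * dr / r2[:, None]**1.5
            return acc

        # Build a shared list for one group, then evaluate all its members.
        leaf = Node([2.0, 0.0, 0.0], mass=1.0, size=0.0)
        root = Node([2.0, 0.0, 0.0], mass=1.0, size=1.0, children=(leaf,))
        group = np.zeros((4, 3))
        nodes = interaction_list(root, group.mean(axis=0), 0.2, theta=0.5, out=[])
        print(group_forces(group, nodes))

    Because the list is built once per group rather than once per particle, the host's share of the floating-point work shrinks roughly in proportion to the group size, which is the effect the abstract describes.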

  2. Path scanning for the detection of anomalous subgraphs and use of DNS requests and host agents for anomaly/change detection and network situational awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neil, Joshua Charles; Fisk, Michael Edward; Brugh, Alexander William

    A system, apparatus, computer-readable medium, and computer-implemented method are provided for detecting anomalous behavior in a network. Historical parameters of the network are determined in order to determine normal activity levels. A plurality of paths in the network are enumerated as part of a graph representing the network, where each computing system in the network may be a node in the graph and the sequence of connections between two computing systems may be a directed edge in the graph. A statistical model is applied to the plurality of paths in the graph on a sliding window basis to detect anomalous behavior. Data collected by a Unified Host Collection Agent ("UHCA") may also be used to detect anomalous behavior.

  3. Development of Labview based data acquisition and multichannel analyzer software for radioactive particle tracking system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd

    2015-04-29

    A DAQ (data acquisition) software package called RPTv2.0 has been developed for the Radioactive Particle Tracking System at the Malaysian Nuclear Agency. RPTv2.0 features a scanning control GUI, data acquisition from a 12-channel counter via an RS-232 interface, and a multichannel analyzer (MCA). The software is fully developed on the National Instruments LabVIEW 8.6 platform. A Ludlum Model 4612 counter is used to count the signals from the scintillation detectors, while a host computer is used to send control parameters, acquire and display data, and compute results. Each detector channel has independent high-voltage control, threshold or sensitivity value, and window settings. The counter is configured with a host board and twelve slave boards. The host board collects the counts from each slave board and communicates with the computer via the RS-232 data interface.
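
    The control path described above, with the host computer sending parameters to the counter over RS-232 and reading counts back, looks roughly like the following sketch. The actual software is LabVIEW; Python with the pyserial package is used here for compactness, and the command strings are hypothetical, not the Ludlum 4612 protocol.

        import serial  # pyserial

        with serial.Serial("COM1", 9600, timeout=1.0) as port:
            port.write(b"HV 3 750\r")        # hypothetical: set channel 3 high voltage
            port.write(b"WIN 3 100 900\r")   # hypothetical: set channel 3 window
            port.write(b"COUNT 10\r")        # hypothetical: count for 10 seconds
            counts = port.readline()         # one line of counts from the host board
            print(counts.decode(errors="replace"))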

  4. High-speed, automatic controller design considerations for integrating array processor, multi-microprocessor, and host computer system architectures

    NASA Technical Reports Server (NTRS)

    Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.

    1985-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful to integrate array processor, multimicroprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the nonreal-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can generally be applied to a wide range of automatic control applications.

  5. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  6. Serial network simplifies the design of multiple microcomputer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkes, D.

    1981-01-01

    Recently there has been a lot of interest in developing network communication schemes for carrying digital data between locally distributed computing stations. Many of these schemes have focused on distributed networking techniques for data processing applications. These applications suggest the use of a serial, multipoint bus, where a number of remote intelligent units act as slaves to a central or host computer. Each slave would be serially addressable from the host and would perform required operations upon being addressed by the host. Based on an MK3873 single-chip microcomputer, the SCU 20 is designed to be such a remote slave device. The capabilities of the SCU 20 and its use in systems applications are examined.

  7. DNET: A communications facility for distributed heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.

    1989-01-01

    This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides variable-length datagram service with optional return receipts.

  8. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provides flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing from that machine.

  9. Ada Compiler Validation Summary Report: Certificate Number 890627W1.10103 Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0 Harris H1000

    DTIC Science & Technology

    1989-06-27

    Department of Defense, Washington DC 20301-3081. Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0. Certificate Number: 890627W1.10103. Host: Harris H1000 under VOS, 8.1. Target: Harris H1000 under VOS, 8.1. Testing completed June 27, 1989 using ACVC 1.10. This report has been... Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0, Harris H1000 under VOS, 8.1 (Host & Target), Wright-Patterson AFB, ACVC 1.10

  10. The Remote Analysis Station (RAS) as an instructional system

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Wilson, C. L.; Dye, R. H.; Jaworski, E.

    1981-01-01

    "Hands-on" training in LANDSAT data analysis techniques can be obtained using a desk-top, interactive remote analysis station (RAS) which consists of a color CRT imagery display, with alphanumeric overwrite and keyboard, as well as a cursor controller and modem. This portable station can communicate via modem and dial-up telephone with a host computer at 1200 baud or it can be hardwired to a host computer at 9600 baud. A Z80 microcomputer controls the display refresh memory and remote station processing. LANDSAT data is displayed as three-band false-color imagery, one-band color-sliced imagery, or color-coded processed imagery. Although the display memory routinely operates at 256 x 256 picture elements, a display resolution of 128 x 128 can be selected to fill the display faster. In the false color mode the computer packs the data into one 8-bit character. When the host is not sending pictorial information the characters sent are in ordinary ASCII code. System capabilities are described.

  11. Combining high performance simulation, data acquisition, and graphics display computers

    NASA Technical Reports Server (NTRS)

    Hickman, Robert J.

    1989-01-01

    Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems, non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex by generating numerous large files from programs coded in FORTRAN that are required for the real time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 also is performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.

  12. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system, written in the Java programming language, that uses the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  13. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data are taken in by the camera and then sent out to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera then contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  14. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allows higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each others' studies, and to systematically explore parameter options.

  15. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms

    PubMed Central

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2016-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950

  16. Computational prediction of host-pathogen protein-protein interactions.

    PubMed

    Dyer, Matthew D; Murali, T M; Sobral, Bruno W

    2007-07-01

    Infectious diseases such as malaria result in millions of deaths each year. An important aspect of any host-pathogen system is the mechanism by which a pathogen can infect its host. One method of infection is via protein-protein interactions (PPIs) where pathogen proteins target host proteins. Developing computational methods that identify which PPIs enable a pathogen to infect a host has great implications in identifying potential targets for therapeutics. We present a method that integrates known intra-species PPIs with protein-domain profiles to predict PPIs between host and pathogen proteins. Given a set of intra-species PPIs, we identify the functional domains in each of the interacting proteins. For every pair of functional domains, we use Bayesian statistics to assess the probability that two proteins with that pair of domains will interact. We apply our method to the Homo sapiens-Plasmodium falciparum host-pathogen system. Our system predicts 516 PPIs between proteins from these two organisms. We show that pairs of human proteins we predict to interact with the same Plasmodium protein are close to each other in the human PPI network and that Plasmodium pairs predicted to interact with same human protein are co-expressed in DNA microarray datasets measured during various stages of the Plasmodium life cycle. Finally, we identify functionally enriched sub-networks spanned by the predicted interactions and discuss the plausibility of our predictions. Supplementary data are available at http://staff.vbi.vt.edu/dyermd/publications/dyer2007a.html and at Bioinformatics online.
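
    The core of the method, scoring a candidate host-pathogen protein pair by the domain pairs it contains, can be sketched as follows. This is a minimal illustration under assumed data structures; the authors' actual Bayesian statistics are more involved, and the noisy-OR combination used here is an assumption.

        from itertools import product

        def domain_pair_probs(ppis, non_ppis, domains):
            # domains: protein -> set of domain ids
            # ppis / non_ppis: interacting and non-interacting protein pairs
            hit, tot = {}, {}
            for (a, b), label in [(p, 1) for p in ppis] + [(p, 0) for p in non_ppis]:
                for pair in product(domains[a], domains[b]):
                    key = tuple(sorted(pair))
                    tot[key] = tot.get(key, 0) + 1
                    hit[key] = hit.get(key, 0) + label
            return {k: hit[k] / tot[k] for k in tot}   # P(interact | domain pair)

        def interaction_score(host_prot, path_prot, probs, domains):
            # Noisy-OR over all domain pairs spanned by the two proteins.
            miss = 1.0
            for pair in product(domains[host_prot], domains[path_prot]):
                miss *= 1.0 - probs.get(tuple(sorted(pair)), 0.0)
            return 1.0 - miss

        doms = {"hA": {"d1"}, "hB": {"d2"}, "hC": {"d3"}, "p1": {"d2"}}
        probs = domain_pair_probs({("hA", "hB")}, {("hA", "hC")}, doms)
        print(interaction_score("hA", "p1", probs, doms))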

  17. Investigating a holobiont: Microbiota perturbations and transkingdom networks.

    PubMed

    Greer, Renee; Dong, Xiaoxi; Morgun, Andrey; Shulzhenko, Natalia

    2016-01-01

    The scientific community has recently come to appreciate that, rather than existing as independent organisms, multicellular hosts and their microbiota comprise a complex evolving superorganism or metaorganism, termed a holobiont. This point of view leads to a re-evaluation of our understanding of different physiological processes and diseases. In this paper we focus on experimental and computational approaches which, when combined in one study, allowed us to dissect mechanisms (traditionally named host-microbiota interactions) regulating holobiont physiology. Specifically, we discuss several approaches for microbiota perturbation, such as use of antibiotics and germ-free animals, including advantages and potential caveats of their usage. We briefly review computational approaches to characterize the microbiota and, more importantly, methods to infer specific components of microbiota (such as microbes or their genes) affecting host functions. One such approach called transkingdom network analysis has been recently developed and applied in our study. (1) Finally, we also discuss common methods used to validate the computational predictions of host-microbiota interactions using in vitro and in vivo experimental systems.

  18. Design of Remote Monitoring System of Irrigation based on GSM and ZigBee Technology

    NASA Astrophysics Data System (ADS)

    Xiao xi, Zheng; Fang, Zhao; Shuaifei, Shao

    2018-03-01

    To solve the problems of low irrigation levels and waste of water resources, a remote monitoring system for farmland irrigation based on GSM communication technology and ZigBee technology was designed. The system is composed of sensors, a GSM communication module, ZigBee modules, a host computer, valves and so on. The system opens and closes the pump and the electromagnetic valve as needed, and transmits the monitoring information to the host computer or the user's mobile phone through the GSM communication network. Experiments show that the system has low power consumption and a friendly man-machine interface, and is convenient and simple to use. It can monitor the agricultural environment remotely, control the related irrigation equipment at any time and place, and thus better meet the needs of remote monitoring of farmland irrigation.
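
    The decision logic such a system runs on the host side can be sketched as a simple hysteresis loop. The thresholds and the three callbacks below (sensor read over ZigBee, valve relay, GSM notification) are illustrative assumptions, not the paper's design.

        import time

        MOISTURE_ON, MOISTURE_OFF = 30.0, 60.0   # assumed soil-moisture thresholds (%)

        def control_loop(read_moisture, set_valve, notify):
            # read_moisture/set_valve/notify stand in for the ZigBee sensor
            # network, the valve relay, and the GSM module respectively.
            irrigating = False
            while True:
                m = read_moisture()
                if m < MOISTURE_ON and not irrigating:
                    set_valve(True);  irrigating = True
                    notify(f"Irrigation started, moisture {m:.1f}%")
                elif m > MOISTURE_OFF and irrigating:
                    set_valve(False); irrigating = False
                    notify(f"Irrigation stopped, moisture {m:.1f}%")
                time.sleep(60)   # sampling interval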

  19. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used as the lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation.

  20. Designing a wearable navigation system for image-guided cancer resection surgery

    PubMed Central

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2015-01-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure. PMID:24980159
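
    The abstract states that the host-side program is written in Python and calls OpenCV for calibration, co-registration, fusion, and display. A minimal sketch of the fusion/overlay step alone might look like the following; the file names, colormap, and blending weight are illustrative assumptions, not the authors' parameters.

        import cv2

        # Overlay a fluorescence frame on the surgical-scene frame: a minimal
        # sketch of the fusion step only.
        scene = cv2.imread("scene.png")                    # CMOS camera frame
        fluor = cv2.imread("fluorescence.png", cv2.IMREAD_GRAYSCALE)
        fluor = cv2.resize(fluor, (scene.shape[1], scene.shape[0]))
        heat  = cv2.applyColorMap(fluor, cv2.COLORMAP_JET) # pseudo-color the signal
        fused = cv2.addWeighted(scene, 0.6, heat, 0.4, 0)  # alpha blend
        cv2.imshow("augmented view", fused)
        cv2.waitKey(0)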

  1. Designing a wearable navigation system for image-guided cancer resection surgery.

    PubMed

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2014-11-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure.

  2. Modeling and Analyzing Intrusion Attempts to a Computer Network Operating in a Defense in Depth Posture

    DTIC Science & Technology

    2004-09-01

    protection. Firewalls, Intrusion Detection Systems (IDS's), Anti-Virus (AV) software, and routers are such tools used. In recent years, computer security...associated with operating systems, application software, and computing hardware. When IDS's are utilized on a host computer or network, there are two...primary approaches to detecting and/or preventing attacks. Traditional IDS's, like most AV software, rely on known "signatures" to detect attacks

  3. Mobile Computer-Assisted-Instruction in Rural New Mexico.

    ERIC Educational Resources Information Center

    Gittinger, Jack D., Jr.

    The University of New Mexico's three-year Computer Assisted Instruction Project established one mobile and five permanent laboratories offering remedial and vocational instruction in winter, 1984-85. Each laboratory has a Degem learning system with minicomputer, teacher terminal, and 32 student terminals. A Digital PDP-11 host computer runs the…

  4. ARIES NDA Robot operators' manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheer, N.L.; Nelson, D.C.

    1998-05-01

    The ARIES NDA Robot is an automation device for servicing the material movements for a suite of non-destructive assay (NDA) instruments. This suite of instruments includes a calorimeter, a gamma isotopic system, a segmented gamma scanner (SGS), and a neutron coincidence counter (NCC). Objects moved by the robot include sample cans, standard cans, and instrument plugs. The robot computer has an RS-232 connection with the NDA Host computer, which coordinates robot movements and instrument measurements. The instruments are expected to perform measurements under the direction of the Host without operator intervention. This user's manual describes system startup, using the main menu, manual operation, and error recovery.

  5. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  6. Development of an Autonomous Navigation Technology Test Vehicle

    DTIC Science & Technology

    2004-08-01

    as an independent thread on processors using the Linux operating system. The computer hardware selected for the nodes that host the MRS threads...communications system design. Linux was chosen as the operating system for all of the single board computers used on the Mule. Linux was specifically...used for system analysis and development. The simple realization of multi-thread processing and inter-process communications in Linux made it a

  7. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    DTIC Science & Technology

    2015-06-01

    system accuracy. The AnRAD system was also generalized for the additional application of network intrusion detection. A self-structuring technique...to Host-based Intrusion Detection Systems using Contiguous and Discontiguous System Call Patterns," IEEE Transactions on Computers, 63(4), pp. 807...square kilometer areas. The anomaly recognition and detection (AnRAD) system was built as a cogent confabulation network. It represented road

  8. Overview of the SAMPL5 host-guest challenge: Are we doing better?

    PubMed

    Yin, Jian; Henriksen, Niel M; Slochower, David R; Shirts, Michael R; Chiu, Michael W; Mobley, David L; Gilson, Michael K

    2017-01-01

    The ability to computationally predict protein-small molecule binding affinities with high accuracy would accelerate drug discovery and reduce its cost by eliminating rounds of trial-and-error synthesis and experimental evaluation of candidate ligands. As academic and industrial groups work toward this capability, there is an ongoing need for datasets that can be used to rigorously test new computational methods. Although protein-ligand data are clearly important for this purpose, their size and complexity make it difficult to obtain well-converged results and to troubleshoot computational methods. Host-guest systems offer a valuable alternative class of test cases, as they exemplify noncovalent molecular recognition but are far smaller and simpler. As a consequence, host-guest systems have been part of the prior two rounds of SAMPL prediction exercises, and they also figure in the present SAMPL5 round. In addition to being blinded, and thus avoiding biases that may arise in retrospective studies, the SAMPL challenges have the merit of focusing multiple researchers on a common set of molecular systems, so that methods may be compared and ideas exchanged. The present paper provides an overview of the host-guest component of SAMPL5, which centers on three different hosts, two octa-acids and a glycoluril-based molecular clip, and two different sets of guest molecules, in aqueous solution. A range of methods were applied, including electronic structure calculations with implicit solvent models; methods that combine empirical force fields with implicit solvent models; and explicit solvent free energy simulations. The most reliable methods tend to fall in the latter class, consistent with results in prior SAMPL rounds, but the level of accuracy is still below that sought for reliable computer-aided drug design. Advances in force field accuracy, modeling of protonation equilibria, electronic structure methods, and solvent models, hold promise for future improvements.

  9. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  10. XpressWare Installation User guide

    NASA Astrophysics Data System (ADS)

    Duffey, K. P.

    XpressWare is a set of X terminal software, released by Tektronix Inc., that accommodates the X Window System on a range of host computers. The software comprises boot files (the X server image), configuration files, fonts, and font tools to support the X terminal. The files can be installed on one host or distributed across multiple hosts. The purpose of this guide is to present the system or network administrator with a step-by-step account of how to install XpressWare, and how subsequently to configure the X terminals appropriately for the environment in which they operate.

  11. A review on computational systems biology of pathogen–host interactions

    PubMed Central

    Durmuş, Saliha; Çakır, Tunahan; Özgür, Arzucan; Guthke, Reinhard

    2015-01-01

    Pathogens manipulate the cellular mechanisms of host organisms via pathogen–host interactions (PHIs) in order to take advantage of the capabilities of host cells, leading to infections. The crucial role of these interspecies molecular interactions in initiating and sustaining infections necessitates a thorough understanding of the corresponding mechanisms. Unlike the traditional approach of considering the host or pathogen separately, a systems-level approach, considering the PHI system as a whole is indispensable to elucidate the mechanisms of infection. Following the technological advances in the post-genomic era, PHI data have been produced in large-scale within the last decade. Systems biology-based methods for the inference and analysis of PHI regulatory, metabolic, and protein–protein networks to shed light on infection mechanisms are gaining increasing demand thanks to the availability of omics data. The knowledge derived from the PHIs may largely contribute to the identification of new and more efficient therapeutics to prevent or cure infections. There are recent efforts for the detailed documentation of these experimentally verified PHI data through Web-based databases. Despite these advances in data archiving, there are still large amounts of PHI data in the biomedical literature yet to be discovered, and novel text mining methods are in development to unearth such hidden data. Here, we review a collection of recent studies on computational systems biology of PHIs with a special focus on the methods for the inference and analysis of PHI networks, covering also the Web-based databases and text-mining efforts to unravel the data hidden in the literature. PMID:25914674

  12. A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems

    DTIC Science & Technology

    1999-06-01

    administrator whenever a system binary file (such as the ps, login, or ls program) is modified. Normal users have no legitimate reason to alter these files...development of EMERALD [46], which combines statistical anomaly detection from NIDES with signature verification. Specification-based intrusion detection...the creation of a single host that can act as many hosts. Daemons that provide network services—including telnetd, ftpd, and login—display banners

  13. COED Transactions, Vol. XI, No. 2, February 1979. A Student Designed Microcomputer Based Data Acquisition System.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    In the context of an instrumentation course, four ocean engineering students set out to design and construct a microcomputer-based data acquisition system that would be compatible with the University's CYBER host computer. The project included hardware design in the areas of sampling, analog-to-digital conversion, and timing coordination. It also…

  14. SCSI Communication Test Bus

    NASA Technical Reports Server (NTRS)

    Hua, Chanh V.; D'Ambrose, John J.; Jaworski, Richard C.; Halula, Elaine M.; Thornton, David N.; Heligman, Robert L.; Turner, Michael R.

    1990-01-01

    Small Computer System Interface (SCSI) communication test bus provides high-data-rate, standard interconnection enabling communication among International Business Machines (IBM) Personal System/2 Micro Channel, other devices connected to Micro Channel, test equipment, and host computer. Serves primarily as nonintrusive input/output attachment to PS/2 Micro Channel bus, providing rapid communication for debugger. Opens up possibility of using debugger in real-time applications.

  15. TMS communications hardware. Volume 1: Computer interfaces

    NASA Technical Reports Server (NTRS)

    Brown, J. S.; Weinrich, S. S.

    1979-01-01

    A prototype coaxial cable bus communications system was designed to be used in the Trend Monitoring System (TMS) to connect intelligent graphics terminals (based around a Data General NOVA/3 computer) to a MODCOMP IV host minicomputer. The direct memory access (DMA) interfaces which were utilized for each of these computers are identified. It is shown that for the MODCOMP, an off-the-shelf board was suitable, while for the NOVAs, custom interface circuitry was designed and implemented.

  16. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  17. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
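
    The pattern described, a pool of worker threads farming out single-column model runs to web services on remote hosts, is easy to sketch. The original system is written in Perl with SOAP; the Python sketch below keeps only the structure, and the host URLs and the run_remote placeholder are assumptions standing in for the SOAP client calls.

        from concurrent.futures import ThreadPoolExecutor

        HOSTS = ["http://host1/model", "http://host2/model"]  # assumed endpoints

        def run_remote(host, cell):
            # Placeholder for the SOAP request that runs the 1-D model for one
            # grid cell on `host` and returns its emissions.
            return {"host": host, "cell": cell, "emission": 0.0}

        def run_all(cells):
            # Round-robin the grid cells over the available service hosts.
            with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
                futures = [pool.submit(run_remote, HOSTS[i % len(HOSTS)], c)
                           for i, c in enumerate(cells)]
                return [f.result() for f in futures]

        print(run_all(range(8)))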

  18. Ground Software Maintenance Facility (GSMF) system manual

    NASA Technical Reports Server (NTRS)

    Derrig, D.; Griffith, G.

    1986-01-01

    The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF overview; (2) software structure; and (3) fault isolation capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The software structure section describes the modes of software structure including source files, link information, and database files. The fault isolation section describes the capabilities of the Ground Computer Interface Device, the Perkin Elmer host, and the MITRA ATE.

  19. Developing Critical L2 Digital Literacy through the Use of Computer-Based Internet-Hosted Learning Management Systems such as Moodle

    NASA Astrophysics Data System (ADS)

    Meurant, Robert C.

    Second Language (L2) Digital Literacy is of emerging importance within English as a Foreign Language (EFL) in Korea, and will evolve to become regarded as the most critical component of overall L2 English Literacy. Computer-based Internet-hosted Learning Management Systems (LMS), such as the popular open-source Moodle, are rapidly being adopted worldwide for distance education, and are also being applied to blended (hybrid) education. In EFL Education, they have a special potential: by setting the LMS to force English to be used exclusively throughout a course website, the meta-language can be made the target L2 language. Of necessity, students develop the ability to use English to navigate the Internet, access and contribute to online resources, and engage in computer-mediated communication. Through such pragmatic engagement with English, students significantly develop their L2 Digital Literacy.

  20. Ada Compiler Validation Summary Report: Certificate Number: 900121S1.10251 Computer Sciences Corporation MC Ada V1.2.Beta/Concurrent Computer Corporation Concurrent/Masscomp 5600 Host To Concurrent/Masscomp 5600 (Dual 68020 Processor Configuration) Target

    DTIC Science & Technology

    1990-04-23

    developed Ada Real-Time Operating System (ARTOS) for bare machine environments (Target), ACVC 1.10. Subject terms: Ada programming language, Ada...configuration) Operating System: CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments; Memory Size: 4MB 2.2...Test Method: Testing of the MC Ada V1.2.beta/Concurrent Computer Corporation compiler and the CSC-developed Ada Real-Time Operating System (ARTOS) for

  1. Vector-Borne Pathogen and Host Evolution in a Structured Immuno-Epidemiological System.

    PubMed

    Gulbudak, Hayriye; Cannataro, Vincent L; Tuncer, Necibe; Martcheva, Maia

    2017-02-01

    Vector-borne disease transmission is a common dissemination mode used by many pathogens to spread in a host population. Similar to directly transmitted diseases, the within-host interaction of a vector-borne pathogen and a host's immune system influences the pathogen's transmission potential between hosts via vectors. Yet there are few theoretical studies on virulence-transmission trade-offs and evolution in vector-borne pathogen-host systems. Here, we consider an immuno-epidemiological model that links the within-host dynamics to between-host circulation of a vector-borne disease. On the immunological scale, the model mimics antibody-pathogen dynamics for arbovirus diseases, such as Rift Valley fever and West Nile virus. The within-host dynamics govern transmission and host mortality and recovery in an age-since-infection structured host-vector-borne pathogen epidemic model. By considering multiple pathogen strains and multiple competing host populations differing in their within-host replication rate and immune response parameters, respectively, we derive evolutionary optimization principles for both pathogen and host. Invasion analysis shows that the R0 maximization principle holds for the vector-borne pathogen. For the host, we prove that evolution favors minimizing the case fatality ratio (CFR). These results are utilized to compute host and pathogen evolutionary trajectories and to determine how model parameters affect evolution outcomes. We find that increasing the vector inoculum size increases the pathogen R0, but can either increase or decrease the pathogen virulence (the host CFR), suggesting that vector inoculum size can contribute to virulence of vector-borne diseases in distinct ways.
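
    For orientation, in age-since-infection models of this kind the quantity being maximized typically has the generic form below, where transmissibility at infection age tau is shaped by the within-host dynamics. This is a standard textbook form, not the authors' exact expression, and the symbols are illustrative.

        % Generic age-since-infection reproduction number (illustrative only):
        % beta(tau): transmission rate at infection age tau, driven by the
        %            within-host pathogen/antibody dynamics;
        % pi(tau):   probability of remaining infectious to age tau.
        \mathcal{R}_0 = \int_0^{\infty} \beta(\tau)\,\pi(\tau)\,d\tau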

  2. Video PATSEARCH: A Mixed-Media System.

    ERIC Educational Resources Information Center

    Schulman, Jacque-Lynne

    1982-01-01

    Describes a videodisc-based information display system in which a computer terminal is used to search the online PATSEARCH database from a remote host with local microcomputer control to select and display drawings from the retrieved records. System features and system components are discussed and criteria for system evaluation are presented.…

  3. MODELING HOST-PATHOGEN INTERACTIONS: COMPUTATIONAL BIOLOGY AND BIOINFORMATICS FOR INFECTIOUS DISEASE RESEARCH (Session introduction)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Braun, Pascal; Bonneau, Richard A.

    Pathogenic infections are a major cause of both human disease and loss of crop yields and animal stocks, and thus cause immense damage to the worldwide economy. The significance of infectious diseases is expected to increase in an ever more connected, warming world, in which new viral, bacterial and fungal pathogens can find novel hosts and ecologic niches. At the same time, the complex and sophisticated mechanisms by which diverse pathogenic agents evade defense mechanisms and subvert their hosts' networks to suit their lifestyle needs are still very incompletely understood, especially from a systems perspective [1]. Thus, understanding host-pathogen interactions is both an important and a scientifically fascinating topic. Recently, technology has offered the opportunity to investigate host-pathogen interactions at a level of detail and scope that offers immense computational and analytical possibilities. Genome sequencing was pioneered on some of these pathogens, and the number of strains and variants of pathogens sequenced to date vastly outnumbers the number of host genomes available. At the same time, for both plant and human hosts, more and more data on population-level genomic variation become available and offer a rich field for analysis of the genetic interactions between host and pathogen.

  4. Cloud GIS Based Watershed Management

    NASA Astrophysics Data System (ADS)

    Bediroğlu, G.; Colak, H. E.

    2017-11-01

    In this study, we generated a Cloud GIS based watershed management system using a cloud computing architecture. Cloud GIS is used as SaaS (Software as a Service) and DaaS (Data as a Service). We applied GIS analysis in the cloud to test the SaaS side, and deployed GIS datasets in the cloud to test the DaaS side. We used a hybrid cloud computing model, making use of ready web-based mapping services hosted in the cloud (world topology, satellite imagery). We uploaded our data to the system after creating geodatabases covering hydrology (rivers, lakes), soil maps, climate maps, rain maps, geology and land use. The watershed of the study area was determined in the cloud using the ready-hosted topology maps. After uploading all the datasets to the system, we applied various GIS analyses and queries. Results show that Cloud GIS technology brings speed and efficiency to watershed management studies. Besides this, the system can be easily implemented for similar land analysis and management studies.

  5. Integrating Computer Architectures into the Design of High-Performance Controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.; Leyland, Jane A.; Warmbrodt, William

    1986-01-01

    Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, on-line graphics, and file management. This paper discusses five global design considerations that are useful to integrate array processor, multimicroprocessor, and host computer system architecture into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration will be briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind-tunnel environment, the control architecture can generally be applied to a wide range of automatic control applications.

  6. Computer simulation for integrated pest management of spruce budworms

    Treesearch

    Carroll B. Williams; Patrick J. Shea

    1982-01-01

    Some field studies of the effects of various insecticides on the spruce budworm (Choristoneura sp.) and their parasites have shown severe suppression of host (budworm) populations and increased parasitism after treatment. Computer simulation using hypothetical models of spruce budworm-parasite systems based on these field data revealed that (1)...

  7. Microcontroller interface for diode array spectrometry

    NASA Astrophysics Data System (ADS)

    Aguo, L.; Williams, R. R.

    An alternative to bus-based computer interfacing is presented, using diode array spectrometry as a typical application. The new interface consists of an embedded single-chip microcomputer, known as a microcontroller, which provides all necessary digital I/O and analog-to-digital conversion (ADC) along with an unprecedented amount of intelligence. Communication with a host computer system is accomplished by a standard serial interface, so this type of interfacing is applicable to a wide range of personal computers and minicomputers and can be easily networked. Data are acquired asynchronously and sent to the host on command. New operating modes which have no traditional counterparts are presented.

  8. A high speed buffer for LV data acquisition

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.

    1987-01-01

    The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.

  9. Beyond Hosting Capacity: Using Shortest Path Methods to Minimize Upgrade Cost Pathways: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensollen, Nicolas; Horowitz, Kelsey A; Palmintier, Bryan S

    We present in this paper a graph-based, forward-looking algorithm applied to distribution planning in the context of distributed PV penetration. We study the target hosting capacity (THC) problem, where the objective is to find the cheapest sequence of system upgrades to reach a predefined hosting capacity target value. We show in this paper that commonly used short-term cost minimization approaches lead most of the time to suboptimal solutions. By comparing our method against such myopic techniques on real distribution systems, we show that our algorithm is able to reduce the overall integration costs by looking ahead to future decisions. Because hosting capacity is hard to compute, this problem requires efficient methods to search the space. We demonstrate here that heuristics using domain-specific knowledge can be used to improve the algorithm's performance such that real distribution systems can be studied.
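
    A minimal sketch of the flavor of this search, assuming toy upgrade names, costs, and a toy hosting-capacity oracle (all invented for illustration, not data from the preprint): Dijkstra's algorithm over sets of applied upgrades returns the cheapest set whose capacity meets the target, naturally handling upgrades whose benefit depends on earlier decisions.

        import heapq
        from itertools import count

        def cheapest_upgrades(upgrade_costs, hosting_capacity, target):
            # Dijkstra over frozensets of applied upgrades; cost = summed upgrade cost.
            tie = count()  # tie-breaker so heapq never compares frozensets
            start = frozenset()
            best = {start: 0.0}
            pq = [(0.0, next(tie), start)]
            while pq:
                cost, _, state = heapq.heappop(pq)
                if cost > best.get(state, float("inf")):
                    continue
                if hosting_capacity(state) >= target:  # expensive oracle in practice
                    return cost, sorted(state)
                for up, c in upgrade_costs.items():
                    if up in state:
                        continue
                    nxt = state | {up}
                    if cost + c < best.get(nxt, float("inf")):
                        best[nxt] = cost + c
                        heapq.heappush(pq, (cost + c, next(tie), nxt))
            return None

        # Toy example: capacity gains interact (reconductoring helps more after a transformer).
        costs = {"transformer": 120.0, "reconductor": 80.0, "volt-var": 20.0}
        def capacity(state):
            mw = 2.0 + (1.5 if "transformer" in state else 0.0) + (0.5 if "volt-var" in state else 0.0)
            if "reconductor" in state:
                mw += 2.0 if "transformer" in state else 0.8
            return mw

        print(cheapest_upgrades(costs, capacity, target=4.0))  # -> (140.0, ['transformer', 'volt-var'])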

  10. Non-systemic transmission of tick-borne diseases: A network approach

    NASA Astrophysics Data System (ADS)

    Ferreri, Luca; Bajardi, Paolo; Giacobini, Mario

    2016-10-01

    Tick-borne diseases can be transmitted via non-systemic (NS) transmission. This occurs when a tick acquires the infection by co-feeding with infected ticks on the same host, resulting in direct pathogen transmission between the vectors without infecting the host. This transmission route is peculiar, as it does not require any systemic infection of the host. NS transmission is the main route sustaining the persistence of the tick-borne encephalitis virus in nature. By describing the heterogeneous aggregation of ticks on hosts through a bipartite graph representation, we are able to mathematically define NS transmission and to characterize the epidemiological conditions for pathogen persistence. Despite the fact that the underlying network is largely fragmented, analytical and computational results show that the larger the variability of the aggregation, the easier it is for the pathogen to persist in the population.
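
    The co-feeding mechanism is simple to prototype. Below is a toy simulation, a sketch assuming uniform tick reattachment and illustrative parameter values (none of which come from the paper); hosts only group the ticks and never become systemically infected, exactly as in NS transmission:

        import random

        def ns_cofeeding_prevalence(n_hosts=200, mean_ticks=5, p_cofeed=0.05,
                                    seasons=20, seed=1):
            rng = random.Random(seed)
            n_ticks = n_hosts * mean_ticks
            ticks = [rng.random() < 0.01 for _ in range(n_ticks)]  # ~1% seeded infected
            for _ in range(seasons):
                # Reattach each tick to a uniformly random host; heavier-tailed
                # attachment (the paper's key variable) would concentrate ticks further.
                on_host = {}
                for i in range(n_ticks):
                    on_host.setdefault(rng.randrange(n_hosts), []).append(i)
                for group in on_host.values():
                    k = sum(ticks[i] for i in group)  # infected co-feeders on this host
                    if k:
                        for i in group:
                            if not ticks[i] and rng.random() < 1 - (1 - p_cofeed) ** k:
                                ticks[i] = True
            return sum(ticks) / n_ticks

        print(f"tick prevalence after 20 seasons: {ns_cofeeding_prevalence():.1%}")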

  11. Agent-based dynamic knowledge representation of Pseudomonas aeruginosa virulence activation in the stressed gut: Towards characterizing host-pathogen interactions in gut-derived sepsis.

    PubMed

    Seal, John B; Alverdy, John C; Zaborina, Olga; An, Gary

    2011-09-19

    There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed--i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data--i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design--i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research.

  12. Agent-based dynamic knowledge representation of Pseudomonas aeruginosa virulence activation in the stressed gut: Towards characterizing host-pathogen interactions in gut-derived sepsis

    PubMed Central

    2011-01-01

    Background There is a growing realization that alterations in host-pathogen interactions (HPI) can generate disease phenotypes without pathogen invasion. The gut represents a prime region where such HPI can arise and manifest. Under normal conditions intestinal microbial communities maintain a stable, mutually beneficial ecosystem. However, host stress can lead to changes in environmental conditions that shift the nature of the host-microbe dialogue, resulting in escalation of virulence expression, immune activation and ultimately systemic disease. Effective modulation of these dynamics requires the ability to characterize the complexity of the HPI, and dynamic computational modeling can aid in this task. Agent-based modeling is a computational method that is suited to representing spatially diverse, dynamical systems. We propose that dynamic knowledge representation of gut HPI with agent-based modeling will aid in the investigation of the pathogenesis of gut-derived sepsis. Methodology/Principal Findings An agent-based model (ABM) of virulence regulation in Pseudomonas aeruginosa was developed by translating bacterial and host cell sense-and-response mechanisms into behavioral rules for computational agents and integrated into a virtual environment representing the host-microbe interface in the gut. The resulting gut milieu ABM (GMABM) was used to: 1) investigate a potential clinically relevant laboratory experimental condition not yet developed - i.e. non-lethal transient segmental intestinal ischemia, 2) examine the sufficiency of existing hypotheses to explain experimental data - i.e. lethality in a model of major surgical insult and stress, and 3) produce behavior to potentially guide future experimental design - i.e. suggested sample points for a potential laboratory model of non-lethal transient intestinal ischemia. Furthermore, hypotheses were generated to explain certain discrepancies between the behaviors of the GMABM and biological experiments, and new investigatory avenues proposed to test those hypotheses. Conclusions/Significance Agent-based modeling can account for the spatio-temporal dynamics of an HPI, and, even when carried out with a relatively high degree of abstraction, can be useful in the investigation of system-level consequences of putative mechanisms operating at the individual agent level. We suggest that an integrated and iterative heuristic relationship between computational modeling and more traditional laboratory and clinical investigations, with a focus on identifying useful and sufficient degrees of abstraction, will enhance the efficiency and translational productivity of biomedical research. PMID:21929759

  13. DBSecSys 2.0: a database of Burkholderia mallei and Burkholderia pseudomallei secretion systems.

    PubMed

    Memišević, Vesna; Kumar, Kamal; Zavaljevski, Nela; DeShazer, David; Wallqvist, Anders; Reifman, Jaques

    2016-09-20

    Burkholderia mallei and B. pseudomallei are the causative agents of glanders and melioidosis, respectively, diseases with high morbidity and mortality rates. B. mallei and B. pseudomallei are closely related genetically; B. mallei evolved from an ancestral strain of B. pseudomallei by genome reduction and adaptation to an obligate intracellular lifestyle. Although these two bacteria cause different diseases, they share multiple virulence factors, including bacterial secretion systems, which represent key components of bacterial pathogenicity. Despite recent progress, the secretion system proteins for B. mallei and B. pseudomallei, their pathogenic mechanisms of action, and host factors are not well characterized. We previously developed a manually curated database, DBSecSys, of bacterial secretion system proteins for B. mallei. Here, we report an expansion of the database with corresponding information about B. pseudomallei. DBSecSys 2.0 contains comprehensive literature-based and computationally derived information about B. mallei ATCC 23344 and literature-based and computationally derived information about B. pseudomallei K96243. The database contains updated information for 163 B. mallei proteins from the previous database and 61 additional B. mallei proteins, and new information for 281 B. pseudomallei proteins associated with 5 secretion systems, their 1,633 human- and murine-interacting targets, and 2,400 host-B. mallei interactions and 2,286 host-B. pseudomallei interactions. The database also includes information about 13 pathogenic mechanisms of action for B. mallei and B. pseudomallei secretion system proteins inferred from the available literature or computationally. Additionally, DBSecSys 2.0 provides details about 82 virulence attenuation experiments for 52 B. mallei secretion system proteins and 98 virulence attenuation experiments for 61 B. pseudomallei secretion system proteins. We updated the Web interface and data access layer to speed up users' searches for detailed information about orthologous proteins related to the secretion systems of the two pathogens. The updates of DBSecSys 2.0 provide unique capabilities to access comprehensive information about secretion systems of B. mallei and B. pseudomallei. They enable studies and comparisons of corresponding proteins of these two closely related pathogens and their host-interacting partners. The database is available at http://dbsecsys.bhsai.org.

  14. Lewis Structures Technology, 1988. Volume 3: Structural Integrity Fatigue and Fracture Wind Turbines HOST

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The charter of the Structures Division is to perform and disseminate results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The specific purpose of the symposium was to familiarize the engineering structures community with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, structural mechanics codes, computational methods for dynamics, structural optimization, applications of structural dynamics, and structural mechanics computer codes.

  15. An imaging system for PLIF/Mie measurements for a combusting flow

    NASA Technical Reports Server (NTRS)

    Wey, C. C.; Ghorashi, B.; Marek, C. J.; Wey, C.

    1990-01-01

    The equipment required to establish an imaging system can be divided into four parts: (1) the light source and beam shaping optics; (2) camera and recording; (3) image acquisition and processing; and (4) computer and output systems. A pulsed, Nd:YAG-pumped, frequency-doubled dye laser which can freeze motion in the flowfield is used as the illumination source. A set of lenses is used to form the laser beam into a sheet. The induced fluorescence is collected by a UV-enhanced lens and passes through a UV-enhanced microchannel plate intensifier which is optically coupled to a gated solid state CCD camera. The output of the camera is simultaneously displayed on a monitor and recorded on either a laser videodisc set or a Super VHS VCR. The videodisc set is controlled by a minicomputer via a connection to the RS-232C interface terminals. The imaging system is connected to the host computer by a bus repeater and can be multiplexed between four video input sources. Sample images from a planar shear layer experiment are presented to show the processing capability of the imaging system with the host computer.

  16. The engineering of a scalable multi-site communications system utilizing quantum key distribution (QKD)

    NASA Astrophysics Data System (ADS)

    Tysowski, Piotr K.; Ling, Xinhua; Lütkenhaus, Norbert; Mosca, Michele

    2018-04-01

    Quantum key distribution (QKD) is a means of generating keys between a pair of computing hosts that is theoretically secure against cryptanalysis, even by a quantum computer. Although there is much active research into improving the QKD technology itself, there is still significant work to be done to apply engineering methodology and determine how it can be practically built to scale within an enterprise IT environment. Significant challenges exist in building a practical key management service (KMS) for use in a metropolitan network. QKD is generally a point-to-point technique only and is subject to steep performance constraints. The integration of QKD into enterprise-level computing has been researched, to enable quantum-safe communication. A novel method for constructing a KMS is presented that allows arbitrary computing hosts on one site to establish multiple secure communication sessions with the hosts of another site. A key exchange protocol is proposed where symmetric private keys are granted to hosts while satisfying the scalability needs of an enterprise population of users. The KMS operates within a layered architectural style that is able to interoperate with various underlying QKD implementations. Variable levels of security for the host population are enforced through a policy engine. A network layer provides key generation across a network of nodes connected by quantum links. Scheduling and routing functionality allows quantum key material to be relayed across trusted nodes. Optimizations are performed to match the real-time host demand for key material with the capacity afforded by the infrastructure. The result is a flexible and scalable architecture that is suitable for enterprise use and independent of any specific QKD technology.
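
    As a rough illustration of the pool-based key-granting idea, here is a toy Python sketch; the class and method names, the 256-bit key size, and the use of os.urandom as a stand-in for QKD-derived material are all assumptions for illustration, not the paper's actual KMS interface:

        import os

        class KeyManagementService:
            # Toy sketch: a per-site-pair pool of symmetric keys, nominally fed
            # by a QKD link, from which session keys are granted to host pairs
            # on demand. os.urandom is NOT quantum key material; it merely
            # stands in for bits relayed across the trusted-node network.

            def __init__(self, site_pair):
                self.site_pair = site_pair
                self.pool = []        # shared by the KMS nodes at both sites
                self.sessions = {}

            def replenish_from_qkd(self, n_keys):
                self.pool.extend(os.urandom(32) for _ in range(n_keys))

            def grant_session_key(self, host_a, host_b):
                if not self.pool:
                    raise RuntimeError("pool exhausted: demand exceeds QKD link capacity")
                key = self.pool.pop()             # each key is consumed exactly once
                self.sessions[(host_a, host_b)] = key
                return key

        kms = KeyManagementService(("siteA", "siteB"))
        kms.replenish_from_qkd(10)
        print(len(kms.grant_session_key("a1", "b7")) * 8, "bit session key granted")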

  17. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing

    PubMed Central

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving, considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique. PMID:28085932
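
    The core DVFS energy trade-off such a simulator exploits can be illustrated in a few lines. This sketch assumes the standard dynamic CMOS power model P_dyn = C_eff * V^2 * f with illustrative constants and hypothetical governor operating points; it is not WorkflowSim's actual power model:

        def dvfs_energy(cycles, freq_hz, volt, c_eff=1e-9, p_static=0.5):
            # Energy (J) to run a fixed number of CPU cycles at one DVFS
            # operating point: dynamic CMOS power C_eff * V^2 * f plus a static
            # floor, times the runtime cycles / f. Constants are illustrative.
            runtime = cycles / freq_hz
            return (c_eff * volt ** 2 * freq_hz + p_static) * runtime

        # Hypothetical governor operating points: (frequency in Hz, core voltage in V).
        operating_points = {
            "powersave":   (1.2e9, 0.80),
            "ondemand":    (2.0e9, 1.00),
            "performance": (3.0e9, 1.25),
        }
        for governor, (f, v) in operating_points.items():
            print(f"{governor:12s} {dvfs_energy(5e9, f, v):5.2f} J for a 5-Gcycle task")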

  18. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    PubMed

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving, considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique.

  19. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  20. The Workstation Approach to Laboratory Computing

    PubMed Central

    Crosby, P.A.; Malachowski, G.C.; Hall, B.R.; Stevens, V.; Gunn, B.J.; Hudson, S.; Schlosser, D.

    1985-01-01

    There is a need for a Laboratory Workstation which specifically addresses the problems associated with computing in the scientific laboratory. A workstation based on the IBM PC architecture and including a front end data acquisition system which communicates with a host computer via a high speed communications link; a new graphics display controller with hardware window management and window scrolling; and an integrated software package is described.

  1. NCC Simulation Model: Simulating the operations of the network control center, phase 2

    NASA Technical Reports Server (NTRS)

    Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.

    1992-01-01

    The simulation of the network control center (NCC) is in the second phase of development. This phase seeks to further develop the work performed in phase one. Phase one concentrated on the computer systems and interconnecting network. The focus of phase two will be the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages are presented in the form of packets that are routed across the network. These packets are generated, encoded, decoded and processed by the network host processors that generate and service the message traffic on the network that connects these hosts. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model development represented the NCC as a network of bi-directional single-server queues and message-generating sources. The generators represented the external segment processors. The server-based queues represented the host processors. The NCC model consists of the internal and external processors which generate message traffic on the network that links these hosts. To fully realize the objective of phase two it is necessary to identify and model the processes in each internal processor. These processes live in the operating system of the internal host computers and handle tasks such as high-speed message exchanging, ISN and NFE interfacing, event monitoring, network monitoring, and message logging. Interprocess communication is achieved through the operating system facilities. The overall performance of the host is determined by its ability to service messages generated by both internal and external processors.
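
    The phase-one representation (message generators feeding single-server queues) can be reproduced with a few lines of simulation. A minimal sketch, assuming Poisson message arrivals and exponential service with illustrative rates, checked against the M/M/1 mean-response formula 1/(mu - lambda):

        import random

        def mm1_mean_response(lam=8.0, mu=10.0, n_msgs=200_000, seed=7):
            # Simulate one single-server host queue: Poisson message arrivals
            # at rate lam, exponential service at rate mu, FIFO discipline.
            rng = random.Random(seed)
            clock = server_free = 0.0
            total_resp = 0.0
            for _ in range(n_msgs):
                clock += rng.expovariate(lam)        # next message arrival
                start = max(clock, server_free)      # wait if the host is busy
                server_free = start + rng.expovariate(mu)
                total_resp += server_free - clock
            return total_resp / n_msgs

        print(f"simulated: {mm1_mean_response():.3f} s, theory 1/(mu-lam) = {1/(10-8):.3f} s")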

  2. Potential of minicomputer/array-processor system for nonlinear finite-element analysis

    NASA Technical Reports Server (NTRS)

    Strohkorb, G. A.; Noor, A. K.

    1983-01-01

    The potential of using a minicomputer/array-processor system for the efficient solution of large-scale, nonlinear, finite-element problems is studied. A Prime 750 is used as the host computer, and a software simulator residing on the Prime is employed to assess the performance of the Floating Point Systems AP-120B array processor. Major hardware characteristics of the system such as virtual memory and parallel and pipeline processing are reviewed, and the interplay between various hardware components is examined. Effective use of the minicomputer/array-processor system for nonlinear analysis requires the following: (1) proper selection of the computational procedure and the capability to vectorize the numerical algorithms; (2) reduction of input-output operations; and (3) overlapping host and array-processor operations. A detailed discussion is given of techniques to accomplish each of these tasks. Two benchmark problems with 1715 and 3230 degrees of freedom, respectively, are selected to measure the anticipated gain in speed obtained by using the proposed algorithms on the array processor.

  3. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. The RTbox connects to the host computer through a USB connection, and its driver is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
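
    The resolution gain from mixing two 8-bit channels with unequal weights is easy to verify numerically. A small sketch, assuming an illustrative attenuation ratio of 128 for the blue channel (the actual weights are fixed by the VideoSwitcher's resistor network):

        import math

        def combined_levels(ratio=128.0, bits=8):
            # Distinct output levels when two 8-bit channels are mixed as
            # L = R + B/ratio: the attenuated blue channel interpolates between
            # successive red steps, giving roughly 2**bits * ratio levels.
            n = 2 ** bits
            return len({round(r + b / ratio, 9) for r in range(n) for b in range(n)})

        n = combined_levels()
        print(f"{n} distinct levels (~{math.log2(n):.1f} bits of grayscale)")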

  4. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    To address faults and erroneous codes that arise in practical applications of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed from the perspective of Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, together with a high-speed serial A/D converter acquisition card, and a temperature measuring circuit based on the AD7420 is designed. Discrete samples of the Moire fringe error signal are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and fault type are displayed on the host computer using the error signal identification method. The error signal quality can be used to diagnose erroneous codes through the human-machine interface.

  5. Architectures for Device Aware Network

    DTIC Science & Technology

    2005-03-01

    [Front-matter residue removed; recoverable figure titles: "PDA in DAN Mode - Reduced Resolution Image", "Cell Phone in DAN Mode - No Image".] … computer, notebook computer, cell phone and a host of networked embedded systems) may have extremely differing capabilities and resources to retrieve and …

  6. Systems Biology Approaches for Host–Fungal Interactions: An Expanding Multi-Omics Frontier

    PubMed Central

    Culibrk, Luka; Croft, Carys A.

    2016-01-01

    Abstract Opportunistic fungal infections are an increasing threat for global health, and for immunocompromised patients in particular. These infections are characterized by interaction between fungal pathogen and host cells. The exact mechanisms and the attendant variability in host and fungal pathogen interaction remain to be fully elucidated. The field of systems biology aims to characterize a biological system, and utilize this knowledge to predict the system's response to stimuli such as fungal exposures. A multi-omics approach, for example, combining data from genomics, proteomics, metabolomics, would allow a more comprehensive and pan-optic “two systems” biology of both the host and the fungal pathogen. In this review and literature analysis, we present highly specialized and nascent methods for analysis of multiple -omes of biological systems, in addition to emerging single-molecule visualization techniques that may assist in determining biological relevance of multi-omics data. We provide an overview of computational methods for modeling of gene regulatory networks, including some that have been applied towards the study of an interacting host and pathogen. In sum, comprehensive characterizations of host–fungal pathogen systems are now possible, and utilization of these cutting-edge multi-omics strategies may yield advances in better understanding of both host biology and fungal pathogens at a systems scale. PMID:26885725

  7. Computer Security Products Technology Overview

    DTIC Science & Technology

    1988-10-01

    [Table-of-contents residue removed; recoverable fragments:] … this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and … provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a …

  8. Interfacing the VAX 11/780 Using Berkeley Unix 4.2.BSD and Ethernet Based Xerox Network Systems. Volume 1.

    DTIC Science & Technology

    1984-12-01

    [Table-of-contents residue removed; recoverable fragment:] … Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They …

  9. CMG-biotools, a free workbench for basic comparative microbial genomics.

    PubMed

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Today, there are more than a hundred times as many sequenced prokaryotic genomes as were present in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics that can reveal strain specific characteristics, diversity within species and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source codes for all programs are provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes including 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST and graphical analyses of DNA structures. This paper shows the strengths and diverse uses of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training.

  10. The DOE/NASA wind turbine data acquisition system. Part 3: Unattended power performance monitor

    NASA Technical Reports Server (NTRS)

    Halleyy, A.; Heidkamp, D.; Neustadter, H.; Olson, R.

    1983-01-01

    Software documentation, operational procedures, and diagnostic instructions for a development version of an unattended wind turbine performance monitoring system are provided. The system is designed for off-line intelligent data acquisition in conjunction with the central host computer.

  11. Computer Program Development Specification for Ada Integrated Environment: KAPSE (Kernel Ada Programming Support Environment)/Database, Type B5, B5-AIE(1).KAPSE(1).

    DTIC Science & Technology

    1982-11-12

    [Interface-diagram and index residue removed; recoverable fragments:] … 3.2.4.3.8.5 Transitory Windows: The TRANSITORY flag is used to prevent permanent dependence on temporary windows created simply for focusing on a part of the … KAPSE/Tool interfaces in terms of these low-level host-independent interfaces. In addition, the KAPSE/Host interface packages prevent the application …

  12. Method for redesign of microbial production systems

    DOEpatents

    Maranas, Costas D.; Burgard, Anthony P.; Pharkya, Priti

    2010-11-02

    A computer-assisted method for identifying functionalities to add to an organism-specific metabolic network to enable a desired biotransformation in a host includes accessing reactions from a universal database to provide stoichiometric balance, identifying at least one stoichiometrically balanced pathway at least partially based on the reactions and a substrate to minimize a number of non-native functionalities in the production host, and incorporating the at least one stoichiometrically balanced pathway into the host to provide the desired biotransformation. A representation of the metabolic network as modified can be stored.
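
    The flavor of this search can be sketched as a shortest-path problem in which each non-native reaction costs one unit; note this set-reachability toy ignores the stoichiometric balancing that the patented method performs, and all reaction and metabolite names below are illustrative:

        import heapq
        from itertools import count

        def min_nonnative_pathway(universal, native, substrates, target):
            # universal: reaction name -> (frozenset of inputs, frozenset of outputs)
            # Path cost = number of reactions used that are not native to the host.
            tie = count()
            start = frozenset(substrates)
            best = {start: 0}
            pq = [(0, next(tie), start, frozenset())]
            while pq:
                added, _, mets, used = heapq.heappop(pq)
                if target in mets:
                    return added, sorted(used)
                for rxn, (ins, outs) in universal.items():
                    if rxn in used or not ins <= mets:
                        continue
                    cost = added + (0 if rxn in native else 1)
                    nmets = mets | outs
                    if cost < best.get(nmets, float("inf")):
                        best[nmets] = cost
                        heapq.heappush(pq, (cost, next(tie), nmets, used | {rxn}))
            return None

        native = {"glc_uptake", "glycolysis"}
        universal = {
            "glc_uptake": (frozenset({"glc_ext"}), frozenset({"glc"})),
            "glycolysis": (frozenset({"glc"}), frozenset({"pyr"})),
            "pdc": (frozenset({"pyr"}), frozenset({"acetaldehyde"})),   # non-native
            "adh": (frozenset({"acetaldehyde"}), frozenset({"etoh"})),  # non-native
        }
        print(min_nonnative_pathway(universal, native, {"glc_ext"}, "etoh"))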

  13. Method for redesign of microbial production systems

    DOEpatents

    Maranas, Costas D [State College, PA; Burgard, Anthony P [San Diego, CA; Pharkya, Priti [San Diego, CA

    2012-01-31

    A computer-assisted method for identifying functionalities to add to an organism-specific metabolic network to enable a desired biotransformation in a host includes accessing reactions from a universal database to provide stoichiometric balance, identifying at least one stoichiometrically balanced pathway at least partially based on the reactions and a substrate to minimize a number of non-native functionalities in the production host, and incorporating the at least one stoichiometrically balanced pathway into the host to provide the desired biotransformation. A representation of the metabolic network as modified can be stored.

  14. In-Storage Embedded Accelerator for Sparse Pattern Processing

    DTIC Science & Technology

    2016-09-13

    [Reference-list and affiliation residue removed; recoverable fragments:] … computation. As a result, a very small processor could be used and still make full use of storage device bandwidth. When the host software sends … We present a novel system architecture for sparse pattern …

  15. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still ensures its goals. However, Grid technology has not spread much over other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their current computing model with cloud deployments and take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already used to enable multicore computing, allowed us to avoid a static split of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as Worker Nodes in the batch farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.
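
    A partition manager of this kind reduces to a small role-switching policy. The sketch below is a toy with illustrative thresholds rather than CNAF's production logic: busy batch hosts drain before joining the cloud partition, and idle cloud hosts flow back when batch demand returns.

        def rebalance(hosts, cloud_demand, batch_queue_depth):
            # Each host is {'role': 'batch'|'cloud', 'busy': bool}.
            n_cloud = sum(h["role"] == "cloud" for h in hosts)
            for h in hosts:
                if n_cloud < cloud_demand and h["role"] == "batch" and not h["busy"]:
                    h["role"] = "cloud"   # drained batch node joins the OpenStack pool
                    n_cloud += 1
                elif n_cloud > cloud_demand and batch_queue_depth > 0 \
                        and h["role"] == "cloud" and not h["busy"]:
                    h["role"] = "batch"   # idle cloud node rejoins the LSF farm
                    n_cloud -= 1
            return hosts

        farm = [{"role": "batch", "busy": i < 1} for i in range(4)]
        rebalance(farm, cloud_demand=2, batch_queue_depth=0)
        print([h["role"] for h in farm])  # -> ['batch', 'cloud', 'cloud', 'batch']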

  16. Propulsion/flight control integration technology (PROFIT) software system definition

    NASA Technical Reports Server (NTRS)

    Carlin, C. M.; Hastings, W. J.

    1978-01-01

    The Propulsion Flight Control Integration Technology (PROFIT) program is designed to develop a flying testbed dedicated to controls research. The control software for PROFIT is defined. Maximum flexibility, needed for long-term use of the flight facility, is achieved through a modular design. The Host program processes inputs from the telemetry uplink, aircraft central computer, cockpit computer control, and plant sensors to form an input data base for use by the control algorithms. The control algorithms, programmed as application modules, process the input data to generate an output data base. The Host program formats the data for output to the telemetry downlink, the cockpit computer control, and the control effectors. Two application modules are defined: the bill-of-materials F-100 engine control and the bill-of-materials F-15 inlet control.
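
    The Host-program pattern described here (gather an input data base, run application modules, format an output data base) maps naturally onto a modular loop. A minimal sketch with stand-in callables; the names and the toy control law are assumptions for illustration, not PROFIT's actual interfaces:

        def host_cycle(sources, modules, sinks):
            # One frame of the modular pattern: gather the input data base from
            # all sources, run each application module, then route the output
            # data base to every sink.
            input_db = {}
            for read in sources:        # telemetry uplink, central computer, sensors...
                input_db.update(read())
            output_db = {}
            for module in modules:      # e.g. engine control, inlet control
                output_db.update(module(input_db))
            for write in sinks:         # telemetry downlink, cockpit display, effectors
                write(output_db)

        sensors = lambda: {"n1_rpm": 9800.0, "throttle": 0.72}
        engine_ctl = lambda db: {"fuel_cmd": 0.5 * db["throttle"] + 1e-5 * db["n1_rpm"]}
        host_cycle([sensors], [engine_ctl], [print])   # -> {'fuel_cmd': 0.458}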

  17. Design of the intelligent smoke alarm system based on photoelectric smoke

    NASA Astrophysics Data System (ADS)

    Ma, Jiangfei; Yang, Xiufang; Wang, Peipei

    2017-02-01

    This paper presents an intelligent smoke alarm system based on a photoelectric smoke detector and a temperature sensor. The system uses an AT89C51 MCU as the core of the hardware control and LabVIEW as the host computer monitoring center. The sensor system acquires temperature and smoke signals; the MCU controls the A/D converter to sample and convert the output analog signals, and the two signals are then uploaded to the host computer through serial communication. To achieve real-time monitoring of smoke and temperature in the environment, the LabVIEW monitoring platform acquires, processes, analyzes, and displays these sampled signals. The intelligent smoke alarm system is suitable for large-scale shopping malls and other public places and can greatly reduce the false alarm rate for fires. The experimental results show that the system runs well and raises an alarm when the set threshold is reached; the threshold parameters can be adjusted according to the actual conditions in the field. The system is easy to operate, simple in structure, intelligent, low cost, and of strong practical value.

  18. Cooperativity and complexity in the binding of anions and cations to a tetratopic ion-pair host.

    PubMed

    Howe, Ethan N W; Bhadbhade, Mohan; Thordarson, Pall

    2014-05-21

    Cooperative interactions play a very important role in both natural and synthetic supramolecular systems. We report here on the cooperative binding properties of a tetratopic ion-pair host 1. This host combines two isophthalamide anion recognition sites with two unusual "half-crown/two carbonyl" cation recognition sites as revealed by the combination of single-crystal X-ray analysis of the free host and the 1:2 host:calcium cation complex, together with two-dimensional NMR and computational studies. By systematically comparing all of the binding data to several possible binding models and focusing on four different variants of the 1:2 binding model, it was in most cases possible to quantify these complex cooperative interactions. The data showed strong negative cooperativity (α = 0.01-0.05) of 1 toward chloride and acetate anions, while for cations the results were more variable. Interestingly, in the competitive (CDCl3/CD3OD (9:1, v/v)) solvent, the addition of calcium cations to the tetratopic ion-pair host 1 allosterically switched "on" chloride binding that is otherwise not present in this solvent system. The insight into the complexity of cooperative interactions revealed in this study of the tetratopic ion-pair host 1 can be used to design better cooperative supramolecular systems for information transfer and catalysis.
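
    Quantifying such 1:2 binding data reduces to solving two mass-balance equations. A generic sketch, assuming illustrative constants rather than the paper's fitted values, with the interaction parameter alpha = 4*K2/K1 expressing negative cooperativity when alpha < 1:

        def free_guest(h_tot, g_tot, k1, k2, tol=1e-12):
            # Solve the 1:2 host:guest mass balances for free guest [G] by
            # bisection. Equilibria: K1 = [HG]/([H][G]), K2 = [HG2]/([HG][G]).
            lo, hi = 0.0, g_tot
            while hi - lo > tol:
                g = 0.5 * (lo + hi)
                h = h_tot / (1 + k1 * g + k1 * k2 * g * g)   # free host from H balance
                if g + h * (k1 * g + 2 * k1 * k2 * g * g) > g_tot:
                    hi = g                                   # too much guest consumed
                else:
                    lo = g
            return 0.5 * (lo + hi)

        k1 = 1.0e4                 # illustrative association constant, M^-1
        k2 = 0.02 * k1 / 4         # alpha = 4*K2/K1 = 0.02: strong negative cooperativity
        g = free_guest(h_tot=1e-3, g_tot=2e-3, k1=k1, k2=k2)
        h = 1e-3 / (1 + k1 * g + k1 * k2 * g * g)
        print(f"[G]free={g:.3e} M  [HG]={k1*h*g:.3e} M  [HG2]={k1*k2*h*g*g:.3e} M")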

  19. Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 1B: Concise review

    NASA Technical Reports Server (NTRS)

    Miller, R. E., Jr.; Southall, J. W.; Kawaguchi, A. S.; Redhed, D. D.

    1973-01-01

    Reports on the design process, support of the design process, IPAD System design catalog of IPAD technical program elements, IPAD System development and operation, and IPAD benefits and impact are concisely reviewed. The approach used to define the design is described. Major activities performed during the product development cycle are identified. The computer system requirements necessary to support the design process are given as computational requirements of the host system, technical program elements and system features. The IPAD computer system design is presented as concepts, a functional description and an organizational diagram of its major components. The cost and schedules and a three phase plan for IPAD implementation are presented. The benefits and impact of IPAD technology are discussed.

  20. RF optics study for DSS-43 ultracone implementation

    NASA Technical Reports Server (NTRS)

    Lee, P.; Veruttipong, W.

    1994-01-01

    The Ultracone feed system will be implemented on DSS 43 to support the S-band (2.3 GHz) Galileo contingency mission. The feed system will be installed in the host country's cone, which is normally used for radio astronomy, VLBI, and holography. The design must retain existing radio-astronomy capabilities, which could be impaired by shadowing from the large S-band feed horn. Computer calculations were completed to estimate system performance and shadowing effects for various configurations of the host country's cone feed systems. Also, the DSS-43 system performance using higher gain S-band horns was analyzed. A new S-band horn design with improved return loss and cross-polarization characteristics is presented.

  1. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.

    PubMed

    Fong, Stephen S

    2014-08-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.

  2. Dynamic Transfers Of Tasks Among Computers

    NASA Technical Reports Server (NTRS)

    Liu, Howard T.; Silvester, John A.

    1989-01-01

    Allocation scheme gives jobs to idle computers. Ideal resource-sharing algorithm should have following characteristics: dynamic, decentralized, and heterogeneous. Proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all above criteria. Provides method for balancing workload among hosts, resulting in improvement in response time and throughput performance of total system. Adjusts dynamically to traffic load of each station.
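
    A receiver-initiated transfer policy in this spirit can be sketched in a few lines; the probe limit, the "overloaded" threshold, and the data layout below are illustrative assumptions, not ERIDA's published parameters:

        import random

        def receiver_initiated_step(hosts, probe_limit=3, rng=random.Random(0)):
            # One scheduling round: each idle host probes up to probe_limit
            # random peers and pulls one queued job from the first overloaded
            # peer (here, any peer with more than one queued job).
            for receiver in hosts:
                if receiver["queue"]:
                    continue                          # only idle hosts initiate
                for peer in rng.sample(hosts, min(probe_limit, len(hosts))):
                    if peer is not receiver and len(peer["queue"]) > 1:
                        receiver["queue"].append(peer["queue"].pop())  # transfer a job
                        break

        hosts = [{"queue": ["j1", "j2", "j3", "j4"]}, {"queue": []}, {"queue": []}]
        receiver_initiated_step(hosts)
        print([len(h["queue"]) for h in hosts])   # -> [2, 1, 1]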

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    KURTZER, GREGORY; MURIKI, KRISHNA

    Singularity is a container solution designed to facilitate mobility of compute across systems and HPC infrastructures. It does this by creating minimal containers that are defined by a specfile; files from the host system are used to build the container. The resulting container can then be launched by any Linux computer with Singularity installed, regardless of whether the programs inside the container are present on the target system, are a different version, or are even incompatible versions. Singularity achieves extreme portability without sacrificing usability, thus solving the need for mobility of compute. Singularity containers can be executed within a normal/standard command line process flow.

  4. Facing the challenges of multiscale modelling of bacterial and fungal pathogen–host interactions

    PubMed Central

    Schleicher, Jana; Conrad, Theresia; Gustafsson, Mika; Cedersund, Gunnar; Guthke, Reinhard

    2017-01-01

    Abstract Recent and rapidly evolving progress on high-throughput measurement techniques and computational performance has led to the emergence of new disciplines, such as systems medicine and translational systems biology. At the core of these disciplines lies the desire to produce multiscale models: mathematical models that integrate multiple scales of biological organization, ranging from molecular, cellular and tissue models to organ, whole-organism and population scale models. Using such models, hypotheses can systematically be tested. In this review, we present state-of-the-art multiscale modelling of bacterial and fungal infections, considering both the pathogen and host as well as their interaction. Multiscale modelling of the interactions of bacteria, especially Mycobacterium tuberculosis, with the human host is quite advanced. In contrast, models for fungal infections are still in their infancy, in particular regarding infections with the most important human pathogenic fungi, Candida albicans and Aspergillus fumigatus. We reflect on the current availability of computational approaches for multiscale modelling of host–pathogen interactions and point out current challenges. Finally, we provide an outlook for future requirements of multiscale modelling. PMID:26857943

  5. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.

  6. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.

    1994-01-01

    An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.

  7. A membrane computing simulator of trans-hierarchical antibiotic resistance evolution dynamics in nested ecological compartments (ARES).

    PubMed

    Campos, Marcelino; Llorens, Carlos; Sempere, José M; Futami, Ricardo; Rodriguez, Irene; Carrasco, Purificación; Capilla, Rafael; Latorre, Amparo; Coque, Teresa M; Moya, Andres; Baquero, Fernando

    2015-08-05

    Antibiotic resistance is a major biomedical problem upon which public health systems demand solutions to construe the dynamics and epidemiological risk of resistant bacteria in anthropogenically-altered environments. The implementation of computable models with reciprocity within and between levels of biological organization (i.e. essential nesting) is central for studying antibiotic resistances. Antibiotic resistance is not just the result of antibiotic-driven selection but more properly the consequence of a complex hierarchy of processes shaping the ecology and evolution of the distinct subcellular, cellular and supra-cellular vehicles involved in the dissemination of resistance genes. Such a complex background motivated us to explore the P-system standards of membrane computing, an innovative natural computing formalism that abstracts the notion of movement across membranes to simulate antibiotic resistance evolution processes across nested levels of micro- and macro-environmental organization in a given ecosystem. In this article, we introduce ARES (Antibiotic Resistance Evolution Simulator), a software device that simulates P-system model scenarios with five types of nested computing membranes oriented to emulate a hierarchy of eco-biological compartments, i.e. a) peripheral ecosystem; b) local environment; c) reservoir of supplies; d) animal host; and e) host's associated bacterial organisms (microbiome). Computational objects emulating molecular entities such as plasmids, antibiotic resistance genes, antimicrobials, and/or other substances can be introduced into this framework and may interact and evolve together with the membranes, according to a set of pre-established rules and specifications. ARES has been implemented as an online server and offers additional tools for storage and model editing and downstream analysis. The stochastic nature of the P-system model implemented in ARES explicitly links within- and between-host dynamics into a simulation, with feedback reciprocity among the different units of selection influenced by antibiotic exposure at various ecological levels. ARES offers the possibility of modeling predictive multilevel scenarios of antibiotic resistance evolution that can be interrogated, edited and re-simulated if necessary, with different parameters, until a correct model description of the process in the real world is convincingly approached. ARES can be accessed at http://gydb.org/ares.
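
    A drastically reduced sketch of multiset rewriting across nested compartments conveys the membrane-computing idea; this toy has two compartments instead of ARES's five membrane types, and its objects, rules, and rates are invented for illustration:

        import random

        def apply_rules(membranes, rules, steps=50, seed=3):
            # Minimal multiset-rewriting loop in the spirit of a P system: a
            # rule consumes a multiset of objects inside one membrane and
            # produces a multiset in a (possibly different) membrane, emulating
            # movement across compartment boundaries.
            rng = random.Random(seed)
            for _ in range(steps):
                for where, lhs, produce_in, rhs, rate in rules:
                    objs = membranes[where]
                    if rng.random() < rate and all(objs.get(o, 0) >= n for o, n in lhs.items()):
                        for o, n in lhs.items():
                            objs[o] -= n
                        dest = membranes[produce_in]
                        for o, n in rhs.items():
                            dest[o] = dest.get(o, 0) + n
            return membranes

        # Two nested compartments: an animal host containing its microbiome.
        membranes = {"host": {"antibiotic": 30}, "microbiome": {"S": 50, "R": 2}}
        rules = [
            ("host", {"antibiotic": 1}, "microbiome", {"antibiotic": 1}, 0.8),  # drug crosses inward
            ("microbiome", {"antibiotic": 1, "S": 1}, "microbiome", {}, 0.9),   # susceptibles killed
            ("microbiome", {"R": 1}, "microbiome", {"R": 2}, 0.3),              # resistant growth
        ]
        print(apply_rules(membranes, rules)["microbiome"])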

  8. Host-pathogen interactions between the human innate immune system and Candida albicans—understanding and modeling defense and evasion strategies

    PubMed Central

    Dühring, Sybille; Germerodt, Sebastian; Skerka, Christine; Zipfel, Peter F.; Dandekar, Thomas; Schuster, Stefan

    2015-01-01

    The diploid, polymorphic yeast Candida albicans is one of the most important human pathogenic fungi. C. albicans can grow, proliferate and coexist as a commensal on or within the human host for a long time. However, alterations in the host environment can render C. albicans virulent. In this review, we describe the immunological cross-talk between C. albicans and the human innate immune system. We give an overview in form of pairs of human defense strategies including immunological mechanisms as well as general stressors such as nutrient limitation, pH, fever etc. and the corresponding fungal response and evasion mechanisms. Furthermore, Computational Systems Biology approaches to model and investigate these complex interactions are highlighted with a special focus on game-theoretical methods and agent-based models. An outlook on interesting questions to be tackled by Systems Biology regarding entangled defense and evasion mechanisms is given. PMID:26175718

  9. Extending the granularity of representation and control for the MIL-STD CAIS 1.0 node model

    NASA Technical Reports Server (NTRS)

    Rogers, Kathy L.

    1986-01-01

    The Common APSE (Ada Program Support Environment) Interface Set (CAIS) (DoD85) node model provides an excellent baseline for interfaces in a single-host development environment. To encompass the entire spectrum of computing, however, the CAIS model should be extended in four areas. It should provide the interface between the engineering workstation and the host system throughout the entire lifecycle of the system. It should provide a basis for communication and integration functions needed by distributed host environments. It should provide common interfaces for communications mechanisms to and among target processors. It should provide facilities for integration, validation, and verification of test beds extending to distributed systems on geographically separate processors with heterogeneous instruction set architectures (ISAs). Additions to the PROCESS NODE model to extend the CAIS into these four areas are proposed.

  10. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high-speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high-performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program, where Kinetic Systems equipment allows tokamak data to be acquired 4 to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  11. Survey shows continued strong interest in UNIX applications for healthcare.

    PubMed

    Dunbar, C

    1993-03-01

    As part of the general computer industry movement toward open systems, many are predicting UNIX will become the dominant host operating system of the late 1990s. To better understand this prediction within the healthcare setting, Computers in Healthcare surveyed our readership about their opinions of UNIX, its current use and its relative importance as an information services strategy. The upshot? CIH readers definitely want more systems on UNIX, more healthcare applications written for UNIX and more trained resource people to help them with faster installation and more useful applications.

  12. Expert System Enhancement to the Resource Allocation Modules of the NCS Emergency Preparedness Management Information System (EPMIS)

    DTIC Science & Technology

    1987-01-01

    ...after the MYCIN expert system. Host Computer: PC+ is available on both symbolic and numeric computers. It operates on the IBM PC AT, TI Bus-Pro (IBM PC... suppose that the database contains 100 motors, and in only one case does a lightweight motor produce more power than heavier units... every decision point takes time... ART 2.0... in the bargain it consumes 10 times less storage. ART 3.0 reduces the comparison...

  13. Microgrids | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Marine Corps Air Station (MCAS) Miramar network simulator-in-the-loop testing: OMNeT++ simulates a network and links it with real computers and virtual hosts. Power hardware-in-the-loop simulation.

  14. What's New in the Library Automation Arena?

    ERIC Educational Resources Information Center

    Breeding, Marshall

    1998-01-01

    Reviews trends in library automation based on vendors at the 1998 American Library Association Annual Conference. Discusses the major industry trend, a move from host-based computer systems to the new generation of client/server, object-oriented, open systems-based automation. Includes a summary of developments for 26 vendors. (LRW)

  15. Host-Based Multivariate Statistical Computer Operating Process Anomaly Intrusion Detection System (PAIDS)

    DTIC Science & Technology

    2009-03-01

    Sub7: Also known as SubSeven, this is one of the best known, most widely distributed backdoor programs on the... engineering the spread of viruses, worms, backdoors and other malware. The Sub7 Trojan establishes a server on the victim computer that...

  16. Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 1A: Summary

    NASA Technical Reports Server (NTRS)

    Miller, R. E., Jr.; Redhed, D. D.; Kawaguchi, A. S.; Hansen, S. D.; Southall, J. W.

    1973-01-01

    IPAD was defined as a total system oriented to the product design process. This total system was designed to recognize the product design process, individuals and their design process tasks, and the computer-based IPAD System to aid product design. Principal elements of the IPAD System include the host computer and its interactive system software, new executive and data management software, and an open-ended IPAD library of technical programs to match the intended product design process. The basic goal of the IPAD total system is to increase the productivity of the product design organization. Increases in individual productivity were feasible through automation and computer support of routine information handling. Such proven automation can directly decrease cost and flowtime in the product design process.

  17. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
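    The offload pattern described here, where a host runtime routes flagged kernels to a quantum accelerator and everything else to conventional hardware, can be sketched in a few lines. The sketch below is a hypothetical illustration only; no real QPU API is assumed, and QpuBackend, HostRuntime, and the quantum decorator are stand-in names invented for the example.

```python
# Hypothetical sketch of a host-side quantum-accelerator dispatch pattern.
# `QpuBackend` stands in for a real quantum device driver.

class QpuBackend:
    def run(self, kernel, *args):
        print(f"[QPU] executing {kernel.__name__}")
        return kernel(*args)

class HostRuntime:
    """Host OS role: own the accelerator resource and route work to it."""
    def __init__(self):
        self.qpu = QpuBackend()

    def execute(self, kernel, *args):
        # Offload only kernels explicitly flagged as quantum workloads.
        if getattr(kernel, "quantum", False):
            return self.qpu.run(kernel, *args)
        return kernel(*args)

def quantum(kernel):
    """Mark a kernel as a quantum workload eligible for offload."""
    kernel.quantum = True
    return kernel

@quantum
def sample_ground_state(n_qubits):
    return f"result for {n_qubits} qubits"   # placeholder for an actual QPU job

def classical_postprocess(result):
    return result.upper()

rt = HostRuntime()
print(classical_postprocess(rt.execute(sample_ground_state, 4)))
```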

  18. Front End Software for Online Database Searching Part 1: Definitions, System Features, and Evaluation.

    ERIC Educational Resources Information Center

    Hawkins, Donald T.; Levy, Louise R.

    1985-01-01

    This initial article in a series of three discusses barriers inhibiting the use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…

  19. Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 4: IPAD system design

    NASA Technical Reports Server (NTRS)

    Goldfarb, W.; Carpenter, L. C.; Redhed, D. D.; Hansen, S. D.; Anderson, L. O.; Kawaguchi, A. S.

    1973-01-01

    The computing system design of IPAD is described and the requirements which form the basis for the system design are discussed. The system is presented in terms of a functional design description and technical design specifications. The functional design specifications give the detailed description of the system design using top-down structured programming methodology. Human behavioral characteristics, which specify the system design at the user interface, security considerations, and standards for system design, implementation, and maintenance are also part of the technical design specifications. Detailed specifications of the two most common computing system types in use by the major aerospace companies which could support the IPAD system design are presented. The report of a study to investigate migration of IPAD software between the two candidate 3rd generation host computing systems and from these systems to a 4th generation system is included.

  20. Multi-man flight simulator

    NASA Technical Reports Server (NTRS)

    Macdonald, G.

    1983-01-01

    A prototype Air Traffic Control facility and multi-man flight simulator facility were designed, and one of the component simulators was fabricated as a proof of concept. The facility was designed to provide a number of independent simple simulator cabs that would have the capability of some local, stand-alone processing and would in turn interface with a larger host computer. The system can accommodate up to eight flight simulators (commercially available instrument trainers) which could be operated stand-alone if no graphics were required or could operate in a common simulated airspace if connected to the host computer. A proposed addition to the original design is the capability of inputting pilot inputs and the quantities displayed on the flight and navigation instruments to the microcomputer when the simulator operates in the stand-alone mode, to allow independent use of these commercially available instrument trainers for research. The conceptual design of the system and progress made to date on its implementation are described.

  1. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, whose cost grows with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating analysis to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model and its capability to have a potentially transformative impact on scientific data analysis.

  2. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  3. Systems, methods and computer readable media for estimating capacity loss in rechargeable electrochemical cells

    DOEpatents

    Gering, Kevin L.

    2013-06-18

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.
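    The combination of sigmoid expressions described in this abstract can be conveyed in a short sketch: total capacity loss is modeled as the sum of one sigmoid per degradation mechanism, here loss of active host sites and loss of free lithium. The logistic form and every parameter value below are illustrative assumptions for the example, not figures taken from the patent.

```python
import math

def sigmoid_loss(t, scale, rate, t_mid):
    """One degradation channel as a sigmoid in time (fraction of capacity lost)."""
    return scale / (1.0 + math.exp(-rate * (t - t_mid)))

def capacity_loss_msm(t, site_params, lithium_params):
    """Multiple sigmoid model (MSM): total loss = loss of active host sites
    + loss of free lithium. All parameters are illustrative only."""
    return sigmoid_loss(t, *site_params) + sigmoid_loss(t, *lithium_params)

# Illustrative parameters: (max fractional loss, rate, midpoint in cycles).
site = (0.10, 0.010, 400)     # loss of active host sites
lithium = (0.15, 0.008, 700)  # loss of free lithium
for cycles in (0, 250, 500, 1000):
    print(cycles, round(capacity_loss_msm(cycles, site, lithium), 4))
```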

  4. An automated procedure for developing hybrid computer simulations of turbofan engines

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.

    1980-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information into a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. A test case is described, and comparisons between the hybrid simulation and specified engine performance data are presented.

  5. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between the camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard Peripheral Component Interconnect (PCI) bus.

  6. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

    Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including the environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing, and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems: a portable and self-sufficient weather platform for the study of small-scale effects in Chicago's urban core, and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology, and computer science. Built around open-source software, hardware, and the Linux OS, the Waggle system comprises two components: the Waggle field node and the Waggle cloud-computing infrastructure. The Waggle field node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes, and serving data to end users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.

  7. Teleoperated control system for underground room and pillar mining

    DOEpatents

    Mayercheck, William D.; Kwitowski, August J.; Brautigam, Albert L.; Mueller, Brian K.

    1992-01-01

    A teleoperated mining system is provided for remotely controlling the various machines involved in thin-seam mining. A thin-seam continuous miner located at a mining face includes a camera mounted thereon and a slave computer for controlling the miner and the camera. A plurality of sensors relay information about the miner and the face to the slave computer. A slave-computer-controlled ventilation sub-system removes combustible material from the mining face. A haulage sub-system removes material mined by the continuous miner from the mining face to a collection site and is also controlled by the slave computer. A base station, which controls the supply of power and water to the continuous miner, haulage sub-system, and ventilation sub-system, includes a cable/hose handling module for winding or unwinding cables/hoses connected to the miner, an operator control module, and a hydraulic power and air compressor module for supplying air to the miner. An operator-controlled host computer housed in the operator control module is connected to the slave computer via a two-wire communications line.

  8. Technologies and Approaches to Elucidate and Model the Virulence Program of Salmonella.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Yoon, Hyunjin; Nakayasu, Ernesto S.

    Salmonella is a primary cause of enteric diseases in a variety of animals. During its evolution into a pathogenic bacterium, Salmonella acquired an elaborate regulatory network that responds to multiple environmental stimuli within host animals and integrates them, resulting in fine regulation of the virulence program. The coordinated action of this regulatory network involves numerous virulence regulators, necessitating genome-wide profiling analysis to assess and combine efforts from multiple regulons. In this review we discuss recent high-throughput analytic approaches to understand the regulatory network of Salmonella that controls virulence processes. Application of high-throughput analyses has generated a large amount of data and driven the development of computational approaches required for data integration. Therefore, we also cover computer-aided network analyses to infer regulatory networks, and demonstrate how genome-scale data can be used to construct regulatory and metabolic systems models of Salmonella pathogenesis. Genes that are coordinately controlled by multiple virulence regulators under infectious conditions are more likely to be important for pathogenesis. Thus, reconstructing the global regulatory network during infection or, at the very least, under conditions that mimic the host cellular environment not only provides a bird's-eye view of Salmonella survival strategy in response to hostile host environments but also serves as an efficient means to identify novel virulence factors that are essential for Salmonella to accomplish systemic infection in the host.

  9. CMG-Biotools, a Free Workbench for Basic Comparative Microbial Genomics

    PubMed Central

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Background Today, there are more than a hundred times as many sequenced prokaryotic genomes as were present in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics, which can reveal strain-specific characteristics, diversity within species, and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. Results The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open-source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source code for all programs is provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes covering 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST, and graphical analyses of DNA structures. Conclusion This paper shows the strength and diverse use of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources, using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training. PMID:23577086
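    The kind of "basic DNA and codon statistics" the package computes can be illustrated with a minimal sketch, assuming plain nucleotide strings as input; the function names are invented for the example and are not CMG-biotools commands.

```python
from collections import Counter

def gc_content(seq):
    """Fraction of G and C bases, a basic DNA statistic."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def codon_usage(cds):
    """Counts of each codon in a coding sequence, read in frame from position 0."""
    cds = cds.upper()
    usable = len(cds) - len(cds) % 3          # ignore any trailing partial codon
    return Counter(cds[i:i + 3] for i in range(0, usable, 3))

demo = "ATGGCGTGCAAGTAA"                      # toy coding sequence
print(round(gc_content(demo), 3))
print(codon_usage(demo).most_common(3))
```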

  10. Magnet measurement interfacing to the G-64 Euro standard bus and testing G-64 modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogrefe, R.L.

    1995-07-01

    The Magnet Measurement system utilizes various modules with a G-64 Euro (Gespac) Standard Interface. All modules are designed to be software controlled, normally under the constraints of the OS-9 operating system with all data transfers to a host computer accomplished by a serial link.

  11. System Architectural Concepts: Army Battlefield Command and Control Information Utility (CCIU).

    DTIC Science & Technology

    1982-07-25

    ...produce (device-type), the computers they may interface with (required-host), and the identification number of the devices (device-number). Line-printers... interface in a network PE... GLOSSARY: Kernel: a layer of the PEOS; implements the basic system primitives. LUS: Local Name Space. Locking...

  12. NASA Astrophysics Data System (ADS)

    Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.

    2007-12-01

    The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then go on to discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.

  13. Optical RISC computer

    NASA Astrophysics Data System (ADS)

    Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Zeise, Frederick F.

    1993-07-01

    A second generation digital optical computer (DOC II) has been developed which utilizes a RISC-based operating system as its host. This 32-bit, high-performance (12.8 GByte/sec) computing platform demonstrates a number of basic principles that are inherent to parallel free-space optical interconnects, such as speed (up to 10^12 bit operations per second) and low power (1.2 fJ per bit). Although DOC II is a general purpose machine, special purpose applications have been developed and are currently being evaluated on the optical platform.

  14. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of the self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 × 10^6 gates/CPU-sec for nonfaulted circuits and 4.4 × 10^6 gates/CPU-sec for faulted circuits on a VAX 11/780 host computer.
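    The stochastic estimation idea, injecting a fault into a gate-level model and counting how many random test vectors pass before the fault becomes visible at an output, can be sketched briefly. The toy circuit, the stuck-at fault naming, and the sampling scheme below are invented for illustration and are unrelated to IGGLOSS's actual implementation.

```python
import random

def circuit(a, b, c, fault=None):
    """Toy two-gate circuit; `fault` forces an internal net to a stuck-at value."""
    n1 = a and b
    if fault == "n1/0": n1 = 0
    if fault == "n1/1": n1 = 1
    n2 = n1 or c
    if fault == "n2/0": n2 = 0
    if fault == "n2/1": n2 = 1
    return n2

def detection_latency(fault, rng, max_vectors=10_000):
    """Number of random test vectors applied until the fault is observed."""
    for n in range(1, max_vectors + 1):
        a, b, c = rng.randint(0, 1), rng.randint(0, 1), rng.randint(0, 1)
        if circuit(a, b, c) != circuit(a, b, c, fault):
            return n
    return None   # undetected within the vector budget

rng = random.Random(0)
for fault in ("n1/0", "n1/1", "n2/0", "n2/1"):
    samples = [detection_latency(fault, rng) for _ in range(200)]
    detected = [s for s in samples if s is not None]
    # Mean latency over detected runs estimates the fault-detection latency time.
    print(fault, "mean latency:", sum(detected) / max(len(detected), 1))
```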

  15. BehavePlus fire modeling system, version 5.0: Design and Features

    Treesearch

    Faith Ann Heinsch; Patricia L. Andrews

    2010-01-01

    The BehavePlus fire modeling system is a computer program that is based on mathematical models that describe wildland fire behavior and effects and the fire environment. It is a flexible system that produces tables, graphs, and simple diagrams. It can be used for a host of fire management applications, including projecting the behavior of an ongoing fire, planning...

  16. Systems Biology-Based Investigation of Cellular Antiviral Drug Targets Identified by Gene-Trap Insertional Mutagenesis.

    PubMed

    Cheng, Feixiong; Murray, James L; Zhao, Junfei; Sheng, Jinsong; Zhao, Zhongming; Rubin, Donald H

    2016-09-01

    Viruses require host cellular factors for successful replication. A comprehensive systems-level investigation of the virus-host interactome is critical for understanding the roles of host factors, with the end goal of discovering new druggable antiviral targets. Gene-trap insertional mutagenesis is a high-throughput forward genetics approach to randomly disrupt (trap) host genes and discover host genes that are essential for viral replication, but not for host cell survival. In this study, we used libraries of randomly mutagenized cells to discover cellular genes that are essential for the replication of 10 distinct cytotoxic mammalian viruses, 1 gram-negative bacterium, and 5 toxins. We herein report 712 candidate cellular genes that display distinct topological network and evolutionary signatures and occupy central hubs in the human interactome. Cell cycle phase-specific network analysis showed that host cell cycle programs play critical roles during viral replication (e.g. MYC and TAF4 regulating the G0/1 phase). Moreover, the viral perturbation of host cellular networks reflected disease etiology, in that the host genes identified (e.g. CTCF, RHOA, and CDKN1B) were frequently essential and significantly associated with Mendelian and orphan diseases, or with somatic mutations in cancer. A computational drug repositioning framework that incorporates drug-gene signatures from the Connectivity Map into the virus-host interactome identified 110 putative druggable antiviral targets and prioritized several existing drugs (e.g. ajmaline) that may have potential for antiviral indications (e.g. anti-Ebola). In summary, this work provides a powerful methodology, tightly integrating gene-trap insertional mutagenesis testing with systems biology, to identify new antiviral targets and drugs for the development of broadly acting and targeted clinical antiviral therapeutics.

  17. A Model of an Integrated Immune System Pathway in Homo sapiens and Its Interaction with Superantigen Producing Expression Regulatory Pathway in Staphylococcus aureus: Comparing Behavior of Pathogen Perturbed and Unperturbed Pathway

    PubMed Central

    Tomar, Namrata; De, Rajat K.

    2013-01-01

    Response of an immune system to a pathogen attack depends on the balance between the host immune defense and the virulence of the pathogen. Investigation of molecular interactions between the proteins of a host and a pathogen helps in identifying the pathogenic proteins. It is necessary to understand the dynamics of a normally behaved host system to evaluate the capacity of its immune system upon pathogen attack. In this study, we have compared the behavior of an unperturbed and a pathogen-perturbed host system. Moreover, we have developed a formalism under Flux Balance Analysis (FBA) for the optimization of conflicting objective functions. We have constructed an integrated pathway system, which includes the Staphylococcal Superantigen (SAg) expression regulatory pathway and the TCR signaling pathway of Homo sapiens. We have implemented the method on this pathway system and observed the behavior of host signaling molecules upon pathogen attack. The entire study has been divided into six different cases, based on perturbed/unperturbed conditions. In other words, we have investigated the unperturbed and pathogen-perturbed human TCR signaling pathway with different combinations of optimization of the concentrations of regulatory and signaling molecules. One of these cases has aimed at finding out whether minimization of toxin production in the pathogen leads to a change in the concentration levels of the proteins coded by TCR signaling pathway genes in the infected host. Based on the computed results, we have hypothesized that the balance between TCR signaling inhibitory and stimulatory molecules can keep the TCR signaling system in a resting or stimulated state, depending upon the perturbation. The proposed integrated host-pathogen interaction pathway model accurately reflects the experimental evidence that we used for validation purposes. The significance of this kind of investigation lies in revealing the susceptible interaction points that can bring the Staphylococcal Enterotoxin (SE)-challenged system back within the range of normal behavior. PMID:24324645
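    A minimal example of plain FBA, the optimization machinery the authors build on, can be written as a linear program: steady-state mass balance S·v = 0, flux bounds, and an objective flux to maximize. The three-reaction network below is a toy assumption, not the paper's integrated TCR/SAg pathway system; conflicting objectives could be combined by weighting several entries of the cost vector.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions); steady state: S @ v = 0.
# Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass (the objective flux).
S = np.array([
    [1, -1,  0],    # metabolite A
    [0,  1, -1],    # metabolite B
])
bounds = [(0, 10), (0, 10), (0, 10)]   # lower/upper flux bounds for R1..R3

# linprog minimizes, so negate the objective to maximize flux through R3.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)        # expected: [10, 10, 10]
```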

  18. A Systems Biology Approach to Infectious Disease Research: Innovating the Pathogen-Host Research Paradigm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderem, Alan; Adkins, Joshua N.; Ansong, Charles

    The 20th century was marked by extraordinary advances in our understanding of microbes and infectious disease, but pandemics remain, food and water borne illnesses are frequent, multi-drug resistant microbes are on the rise, and the needed drugs and vaccines have not been developed. The scientific approaches of the past—including the intense focus on individual genes and proteins typical of molecular biology—have not been sufficient to address these challenges. The first decade of the 21st century has seen remarkable innovations in technology and computational methods. These new tools provide nearly comprehensive views of complex biological systems and can provide a correspondingly deeper understanding of pathogen-host interactions. To take full advantage of these innovations, the National Institute of Allergy and Infectious Diseases recently initiated the Systems Biology Program for Infectious Disease Research. As participants of the Systems Biology Program we think that the time is at hand to redefine the pathogen-host research paradigm.

  19. A Systems Biology Approach to Infectious Disease Research: Innovating the Pathogen-Host Research Paradigm

    PubMed Central

    Aderem, Alan; Adkins, Joshua N.; Ansong, Charles; Galagan, James; Kaiser, Shari; Korth, Marcus J.; Law, G. Lynn; McDermott, Jason G.; Proll, Sean C.; Rosenberger, Carrie; Schoolnik, Gary; Katze, Michael G.

    2011-01-01

    The twentieth century was marked by extraordinary advances in our understanding of microbes and infectious disease, but pandemics remain, food and waterborne illnesses are frequent, multidrug-resistant microbes are on the rise, and the needed drugs and vaccines have not been developed. The scientific approaches of the past—including the intense focus on individual genes and proteins typical of molecular biology—have not been sufficient to address these challenges. The first decade of the twenty-first century has seen remarkable innovations in technology and computational methods. These new tools provide nearly comprehensive views of complex biological systems and can provide a correspondingly deeper understanding of pathogen-host interactions. To take full advantage of these innovations, the National Institute of Allergy and Infectious Diseases recently initiated the Systems Biology Program for Infectious Disease Research. As participants of the Systems Biology Program, we think that the time is at hand to redefine the pathogen-host research paradigm. PMID:21285433

  20. Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2015-01-01

    We analyze the results of numerically exact computer modeling of scattering and absorption properties of randomly oriented poly-disperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by imbedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can affect strongly both the integral radiometric and differential scattering characteristics of the heterogeneous particle mixtures.

  1. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

    Servo tracking is one of the crucial technologies that must be solved in the research and manufacture of large and extremely large astronomical telescopes. Addressing the control characteristics of such telescopes, this paper designs a servo tracking control system organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's functional mode, while the slave computer executes the control algorithm and performs real-time control. The servo control uses a direct-drive motor and adopts DSP technology to implement a direct torque control algorithm. Such a design not only increases control system performance but also greatly reduces the volume and cost of the control system. The design scheme is shown to be reasonable by calculation and simulation. The system can be applied to large astronomical telescopes.

  2. The automation of an inlet mass flow control system

    NASA Technical Reports Server (NTRS)

    Supplee, Frank; Tcheng, Ping; Weisenborn, Michael

    1989-01-01

    The automation of a closed-loop, computer-controlled system for the inlet mass flow system (IMFS) developed for a wind tunnel facility at Langley Research Center is presented. This new PC-based control system is intended to replace the manual control system presently in use, in order to fully automate the plug positioning of the IMFS during wind tunnel testing. Provision is also made for communication between the PC and a host computer in order to allow total automation of the plug positioning and data acquisition during the complete sequence of predetermined plug locations. As extensive running time is programmed for the IMFS, this new automated system will save both manpower and tunnel running time.

  3. The load shedding advisor: An example of a crisis-response expert system

    NASA Technical Reports Server (NTRS)

    Bollinger, Terry B.; Lightner, Eric; Laverty, John; Ambrose, Edward

    1987-01-01

    A Prolog-based prototype expert system is described that was implemented by the Network Operations Branch of the NASA Goddard Space Flight Center. The purpose of the prototype was to test whether a small, inexpensive computer system could be used to host a load shedding advisor, a system which would monitor major physical environment parameters in a computer facility and then recommend appropriate operator responses whenever a serious condition was detected. The resulting prototype's performance owed significantly to efficiency gains achieved by replacing a purely rule-based design methodology with a hybrid approach that combined procedural, entity-relationship, and rule-based methods.

  4. Two-Way Communication Using RFID Equipment and Techniques

    NASA Technical Reports Server (NTRS)

    Jedry, Thomas; Archer, Eric

    2007-01-01

    Equipment and techniques used in radio-frequency identification (RFID) would be extended, according to a proposal, to enable short-range, two-way communication between electronic products and host computers. In one example of a typical contemplated application, the purpose of the short-range radio communication would be to transfer image data from a user's digital still or video camera to the user's computer for recording and/or processing. The concept is also applicable to consumer electronic products other than digital cameras (for example, cellular telephones, portable computers, or motion sensors in alarm systems), and to a variety of industrial and scientific sensors and other devices that generate data. Until now, RFID has been used to exchange small amounts of mostly static information for identifying and tracking assets. Information pertaining to an asset (typically, an object in inventory to be tracked) is contained in miniature electronic circuitry in an RFID tag attached to the object. Conventional RFID equipment and techniques enable a host computer to read data from and, in some cases, to write data to, RFID tags, but they do not enable such additional functions as sending commands to, or retrieving possibly large quantities of dynamic data from, RFID-tagged devices. The proposal would enable such additional functions. The figure schematically depicts an implementation of the proposal for a sensory device (e.g., a digital camera) that includes circuitry that converts sensory information to digital data. In addition to the basic sensory device, there would be a controller and a memory that would store the sensor data and/or data from the controller. The device would also be equipped with a conventional RFID chipset and antenna, which would communicate with a host computer via an RFID reader. The controller would function partly as a communication interface, implementing two-way communication protocols at all levels (including RFID if needed) between the sensory device and the memory and between the host computer and the memory. The controller would perform power...

  5. Computational prediction of secretion systems and secretomes of Brucella: identification of novel type IV effectors and their interaction with the host.

    PubMed

    Sankarasubramanian, Jagadesan; Vishnu, Udayakumar S; Dinakaran, Vasudevan; Sridhar, Jayavel; Gunasekaran, Paramasamy; Rajendhran, Jeyaprakash

    2016-01-01

    Brucella spp. are facultative intracellular pathogens that cause brucellosis in various mammals, including humans. Brucella survive inside host cells by forming vacuoles and subverting host defence systems. This study aimed to predict the secretion systems and the secretomes of Brucella spp. from the 39 complete genome sequences available in the databases. Furthermore, an attempt was made to identify the type IV secretion effectors and their interactions with host proteins. We predicted the secretion systems of Brucella with the KEGG pathway and SecReT4. Brucella secretomes and type IV effectors (T4SEs) were predicted through genome-wide screening using JVirGel and S4TE, respectively. Protein-protein interactions of Brucella T4SEs with their hosts were analyzed by HPIDB 2.0. Genes coding for the Sec and Tat pathways of secretion and the type I (T1SS), type IV (T4SS) and type V (T5SS) secretion systems were identified, and they are conserved in all species of Brucella. In addition to the well-known VirB operon coding for the type IV secretion system (T4SS), we identified additional genes showing homology with the T4SS of other organisms. On the whole, 10.26 to 14.94% of each total proteome was found to be either secreted (secretome) or membrane associated (membrane proteome). Approximately 1.7 to 3.0% of each total proteome was identified as type IV secretion effectors (T4SEs). Prediction of protein-protein interactions showed 29 and 36 host-pathogen specific interactions between Bos taurus (cattle)-B. abortus and Ovis aries (sheep)-B. melitensis, respectively. Functional characterization of the predicted T4SEs and their interactions with their respective hosts may reveal the secrets of the host specificity of Brucella.

  6. Evaluating virtual hosted desktops for graphics-intensive astronomy

    NASA Astrophysics Data System (ADS)

    Meade, B. F.; Fluke, C. J.

    2018-04-01

    Visualisation of data is critical to understanding astronomical phenomena. Today, many instruments produce datasets that are too big to be downloaded to a local computer, yet many of the visualisation tools used by astronomers are deployed only on desktop computers. Cloud computing is increasingly used to provide a computation and simulation platform in astronomy, but it also offers great potential as a visualisation platform. Virtual hosted desktops, with graphics processing unit (GPU) acceleration, allow interactive, graphics-intensive desktop applications to operate co-located with astronomy datasets stored in remote data centres. By combining benchmarking and user experience testing with a cohort of 20 astronomers, we investigate the viability of replacing physical desktop computers with virtual hosted desktops. In our work, we compare two Apple MacBook computers (one old and one new, representing hardware at opposite ends of the useful lifetime) with two virtual hosted desktops: one commercial (Amazon Web Services) and one in a private research cloud (the Australian NeCTAR Research Cloud). For two-dimensional image-based tasks and graphics-intensive three-dimensional operations - typical of astronomy visualisation workflows - we found that benchmarks do not necessarily provide the best indication of performance. When compared to typical laptop computers, virtual hosted desktops can provide a better user experience, even with lower performing graphics cards. We also found that virtual hosted desktops are equally simple to use, provide greater flexibility in choice of configuration, and may actually be a more cost-effective option for typical usage profiles.

  7. Computational approaches to predict bacteriophage–host relationships

    PubMed Central

    Edwards, Robert A.; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E.

    2015-01-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus–host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage–host signals. Sequence homology approaches are the most effective at identifying known phage–host pairs. Compositional and abundance-based methods contain significant signal for phage–host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage–host relationships, with potential relevance for medical and industrial applications. PMID:26657537
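    Of the signals reviewed, the compositional one is the simplest to sketch: genomes are summarized as k-mer (e.g. tetranucleotide) frequency profiles, and a phage is linked to the candidate host whose profile is closest, on the assumption that phages tend to mimic the composition of their hosts. The code below is a minimal illustration of that idea, with placeholder sequences and a simple Manhattan distance rather than any specific published metric.

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector (tetranucleotide usage by default)."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {"".join(p): counts.get("".join(p), 0) / total
            for p in product("ACGT", repeat=k)}

def composition_distance(seq_a, seq_b, k=4):
    """Manhattan distance between k-mer profiles; smaller = more similar."""
    pa, pb = kmer_profile(seq_a, k), kmer_profile(seq_b, k)
    return sum(abs(pa[m] - pb[m]) for m in pa)

# Illustrative call with placeholder sequences standing in for real genomes.
phage = "ATGGCGT" * 50
host_candidates = {"hostA": "ATGGCTT" * 50, "hostB": "GCGCGCA" * 50}
best = min(host_candidates,
           key=lambda h: composition_distance(phage, host_candidates[h]))
print("predicted host:", best)
```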

  8. System support software for the Space Ultrareliable Modular Computer (SUMC)

    NASA Technical Reports Server (NTRS)

    Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.

    1974-01-01

    The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general-purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction-level simulator, a microcode grid print processor, and user-oriented utility programs. A FORTRAN IV compiler is undergoing development. The design facilitates the addition of new processors with minimum effort and provides the user quasi host-independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications to the initial processors.

  9. Holo-Chidi video concentrator card

    NASA Astrophysics Data System (ADS)

    Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.

    2001-12-01

    The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi was designed at the MIT Media Laboratory for real-time computation of computer-generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards: the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram-computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface through which data is transferred from the hologram-computing Processor cards; a set of FIFOs and video RAM used as a buffer for data for the hololines being displayed; a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port; a co-processor which controls display data formatting; and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame-buffering capacity to hold up to thirty-two 36-megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.

  10. Colour, vision and coevolution in avian brood parasitism.

    PubMed

    Stoddard, Mary Caswell; Hauber, Mark E

    2017-07-05

    The coevolutionary interactions between avian brood parasites and their hosts provide a powerful system for investigating the diversity of animal coloration. Specifically, reciprocal selection pressure applied by hosts and brood parasites can give rise to novel forms and functions of animal coloration, which largely differ from those that arise when selection is imposed by predators or mates. In the study of animal colours, avian brood parasite-host dynamics therefore invite special consideration. Rapid advances across disciplines have paved the way for an integrative study of colour and vision in brood parasite-host systems. We now know that visually driven host defences and host life history have selected for a suite of phenotypic adaptations in parasites, including mimicry, crypsis and supernormal stimuli. This sometimes leads to vision-based host counter-adaptations and increased parasite trickery. Here, we review vision-based adaptations that arise in parasite-host interactions, emphasizing that these adaptations can be visual/sensory, cognitive or phenotypic in nature. We highlight recent breakthroughs in chemistry, genomics, neuroscience and computer vision, and we conclude by identifying important future directions. Moving forward, it will be essential to identify the genetic and neural bases of adaptation and to compare vision-based adaptations to those arising in other sensory modalities. This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).

  11. Modelling the effects of phylogeny and body size on within-host pathogen replication and immune response.

    PubMed

    Banerjee, Soumya; Perelson, Alan S; Moses, Melanie

    2017-11-01

    Understanding how quickly pathogens replicate and how quickly the immune system responds is important for predicting the epidemic spread of emerging pathogens. Host body size, through its correlation with metabolic rates, is theoretically predicted to impact pathogen replication rates and immune system response rates. Here, we use mathematical models of viral time courses from multiple species of birds infected by a generalist pathogen (West Nile Virus; WNV) to test more thoroughly how disease progression and immune response depend on mass and host phylogeny. We use hierarchical Bayesian models coupled with nonlinear dynamical models of disease dynamics to incorporate the hierarchical nature of host phylogeny. Our analysis suggests an important role for both host phylogeny and species mass in determining factors important for viral spread such as the basic reproductive number, WNV production rate, peak viraemia in blood and competency of a host to infect mosquitoes. Our model is based on a principled analysis and gives a quantitative prediction for key epidemiological determinants and how they vary with species mass and phylogeny. This leads to new hypotheses about the mechanisms that cause certain taxonomic groups to have higher viraemia. For example, our models suggest that higher viral burst sizes cause corvids to have higher levels of viraemia and that the cellular rate of virus production is lower in larger species. We derive a metric of competency of a host to infect disease vectors and thereby sustain the disease between hosts. This suggests that smaller passerine species are highly competent at spreading the disease compared with larger non-passerine species. Our models lend mechanistic insight into why some species (smaller passerine species) are pathogen reservoirs and some (larger non-passerine species) are potentially dead-end hosts for WNV. Our techniques give insights into the role of body mass and host phylogeny in the spread of WNV and potentially other zoonotic diseases. The major contribution of this work is a computational framework for infectious disease modelling at the within-host level that leverages data from multiple species. This is likely to be of interest to modellers of infectious diseases that jump species barriers and infect multiple species. Our method can be used to computationally determine the competency of a host to infect mosquitoes that will sustain WNV and other zoonotic diseases. We find that smaller passerine species are more competent in spreading the disease than larger non-passerine species. This suggests the role of host phylogeny as an important determinant of within-host pathogen replication. Ultimately, we view our work as an important step in linking within-host viral dynamics models to between-host models that determine spread of infectious disease between different hosts. © 2017 The Author(s).
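    The within-host machinery referred to here, a nonlinear dynamical model whose parameters are inferred hierarchically across species, can be illustrated with a standard target-cell-limited model of viral dynamics. The equations below are the generic textbook form, not necessarily the authors' exact fitted model, and all parameter values are invented for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def target_cell_model(t, y, beta, delta, p, c):
    """Target-cell-limited within-host model:
    T = target cells, I = infected cells, V = free virus (viraemia)."""
    T, I, V = y
    return [-beta * T * V,               # infection of target cells
            beta * T * V - delta * I,    # infected-cell turnover
            p * I - c * V]               # virus production and clearance

# Illustrative parameters; the paper infers these hierarchically per species.
params = dict(beta=3e-8, delta=1.0, p=1e3, c=3.0)
sol = solve_ivp(target_cell_model, (0, 10), [1e7, 0.0, 1.0],
                args=tuple(params.values()), dense_output=True)

t = np.linspace(0, 10, 200)
V = sol.sol(t)[2]
print("peak viraemia:", V.max(), "at day", round(float(t[V.argmax()]), 2))
```

    Quantities such as peak viraemia and the basic reproductive number fall out of this kind of model, which is what allows comparisons across host species of different mass and phylogeny.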

  12. Smart command recognizer (SCR) - For development, test, and implementation of speech commands

    NASA Technical Reports Server (NTRS)

    Simpson, Carol A.; Bunnell, John W.; Krones, Robert R.

    1988-01-01

    The SCR, a rapid prototyping system for the development, testing, and implementation of speech commands in a flight simulator or test aircraft, is described. A single unit performs all functions needed during these three phases of system development, while the use of common software and speech command data structure files greatly reduces the preparation time for successive development phases. As a smart peripheral to a simulation or flight host computer, the SCR interprets the pilot's spoken input and passes command codes to the simulation or flight computer.

  13. Active Cyber Defense: Enhancing National Cyber Defense

    DTIC Science & Technology

    2011-12-01

    Prevention System ISP Internet Service Provider IT Information Technology IWM Information Warfare Monitor LOAC Law of Armed Conflict NATO...the Information Warfare Monitor (IWM) discovered that GhostNet had infected 1,295 computers in 103 countries. As many as thirty percent of these...By monitoring the computers in Dharamsala and at various Tibetan missions, IWM was able to determine the IP addresses of the servers hosting Gh0st

  14. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    PubMed

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammoniums and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the different guest's ammoniums and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.
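
    In constant-velocity pulling, the applied force is inferred from the extension of the harmonic spring, F(t) = k(vt - x(t)). The sketch below shows how a length-force profile and rupture force would be read off such a run; the trajectory is synthetic and all parameter values are hypothetical.

    ```python
    # Extract a length-force profile from a constant-velocity pulling run.
    # Illustrative sketch with a synthetic trajectory; in practice x(t)
    # comes from the MD output. All parameter values are hypothetical.
    import numpy as np

    k = 280.0   # spring constant, pN/nm (soft, AFM-like; illustrative)
    v = 1.0     # pulling velocity, nm/ns (illustrative)
    t = np.linspace(0.0, 10.0, 1000)   # time, ns

    # Synthetic extension: guest resists, then ruptures near t = 6 ns.
    x = np.where(t < 6.0, 0.05 * t, 0.3 + v * (t - 6.0)) \
        + np.random.default_rng(0).normal(0.0, 0.01, t.size)

    force = k * (v * t - x)            # spring force on the guest, pN
    print(f"rupture force ~ {force.max():.0f} pN "
          f"at t = {t[force.argmax()]:.2f} ns")
    ```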

  15. Systems and methods for performing wireless financial transactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCown, Steven Harvey

    2012-07-03

    A secure computing module (SCM) is configured for connection with a host device. The SCM includes a processor for performing secure processing operations, a host interface for coupling the processor to the host device, and a memory connected to the processor wherein the processor logically isolates at least some of the memory from access by the host device. The SCM also includes a proximate-field wireless communicator connected to the processor to communicate with another SCM associated with another host device. The SCM generates a secure digital signature for a financial transaction package and communicates the package and the signature to the other SCM using the proximate-field wireless communicator. Financial transactions are performed from person to person using the secure digital signature of each person's SCM and possibly message encryption. The digital signatures and transaction details are communicated to appropriate financial organizations to authenticate the transaction parties and complete the transaction.
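
    The person-to-person flow described rests on standard public-key signatures. A minimal sketch of the sign/verify step, using ECDSA from the Python cryptography package, follows; it illustrates the signature flow only and does not model the SCM's hardware isolation or the proximate-field wireless link.

    ```python
    # Sign and verify a financial transaction package with ECDSA.
    # Sketch of the digital-signature step only; the SCM's hardware
    # isolation and wireless transport are not modeled here.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    sender_key = ec.generate_private_key(ec.SECP256R1())  # held in the SCM

    package = b'{"from":"alice","to":"bob","amount":"25.00"}'
    signature = sender_key.sign(package, ec.ECDSA(hashes.SHA256()))

    # The receiving SCM (or a financial organization) verifies with the
    # sender's public key; verify() raises InvalidSignature on tampering.
    sender_key.public_key().verify(signature, package,
                                   ec.ECDSA(hashes.SHA256()))
    print("transaction signature verified")
    ```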

  16. An Advanced Commanding and Telemetry System

    NASA Astrophysics Data System (ADS)

    Hill, Maxwell G. G.

    The Loral Instrumentation System 500 configured as an Advanced Commanding and Telemetry System (ACTS) supports the acquisition of multiple telemetry downlink streams, and simultaneously supports multiple uplink command streams for today's satellite vehicles. By using industry and federal standards, the system is able to support, without relying on a host computer, a true distributed dataflow architecture that is complemented by state-of-the-art RISC-based workstations and file servers.

  17. Cloud Computing: An Overview

    NASA Astrophysics Data System (ADS)

    Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao

    In order to support the maximum number of users and elastic services with minimal resources, Internet service providers invented cloud computing. Within a few years, emerging cloud computing became one of the hottest technologies. From the publication of Google's core papers beginning in 2003, through the commercialization of Amazon EC2 in 2006, to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from internal IT systems to public services, from a cost-saving tool to a revenue generator, and from ISPs to telecom operators. This paper introduces the concept, history, pros and cons of cloud computing, as well as its value chain and standardization efforts.

  18. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve the robot path-finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, is used to generate active waves and to process the visual information. The network, implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Starting from an initial point, the active waves cover the whole medium with their own dynamics. The proposed algorithm works by observing the motion of the wave-front of the active waves. The host program first loads the arena model onto the active wave generator network and commands generation to start. It then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm completes, a vectorial data image has been generated; the path from any pixel on this image to the active-wave-generating pixel is given by the vectors on the image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our autowave generator CNN implementation, which resides on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16,384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor on which the network image is depicted simultaneously.
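
    The observable behaviour the algorithm relies on, a wave expanding from a source pixel while every cell records the direction from which the front arrived, carries the same information as a breadth-first wavefront expansion. A minimal grid sketch of that idea follows; it is a software analogue, not the CNN/FPGA implementation.

    ```python
    # Breadth-first "wavefront" over a grid arena: each free cell stores
    # the vector pointing back toward the wave source, so a path from any
    # cell is read off by following vectors. Sketch of the idea only.
    from collections import deque

    def wavefront(grid, source):
        rows, cols = len(grid), len(grid[0])
        vectors = {source: (0, 0)}           # cell -> step toward source
        queue = deque([source])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in vectors):
                    vectors[(nr, nc)] = (-dr, -dc)  # back toward source
                    queue.append((nr, nc))
        return vectors

    def path(vectors, start):
        p = [start]
        while vectors[p[-1]] != (0, 0):      # (0, 0) marks the source
            dr, dc = vectors[p[-1]]
            p.append((p[-1][0] + dr, p[-1][1] + dc))
        return p

    arena = [[0, 0, 0, 0],                   # 0 = free, 1 = obstacle
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
    v = wavefront(arena, (0, 0))
    print(path(v, (2, 3)))   # route from the robot cell to the source
    ```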

  19. Forensic Carving of Network Packets and Associated Data Structures

    DTIC Science & Technology

    2011-01-01

    establishment of prior connection activity and services used; identification of other systems present on the system's LAN or WLAN; geolocation of the host computer system; and cross-drive analysis. We show that network...Finally, our work in geolocation was assisted by geolocation databases created by companies such as Google (Google Mobile, 2011) and Skyhook

  20. Computational approaches for discovery of common immunomodulators in fungal infections: towards broad-spectrum immunotherapeutic interventions.

    PubMed

    Kidane, Yared H; Lawrence, Christopher; Murali, T M

    2013-10-07

    Fungi are the second most abundant type of human pathogens. Invasive fungal pathogens are leading causes of life-threatening infections in clinical settings. Toxicity to the host and drug-resistance are two major deleterious issues associated with existing antifungal agents. Increasing a host's tolerance and/or immunity to fungal pathogens has potential to alleviate these problems. A host's tolerance may be improved by modulating the immune system such that it responds more rapidly and robustly in all facets, ranging from the recognition of pathogens to their clearance from the host. An understanding of biological processes and genes that are perturbed during attempted fungal exposure, colonization, and/or invasion will help guide the identification of endogenous immunomodulators and/or small molecules that activate host-immune responses such as specialized adjuvants. In this study, we present computational techniques and approaches using publicly available transcriptional data sets, to predict immunomodulators that may act against multiple fungal pathogens. Our study analyzed data sets derived from host cells exposed to five fungal pathogens, namely, Alternaria alternata, Aspergillus fumigatus, Candida albicans, Pneumocystis jirovecii, and Stachybotrys chartarum. We observed statistically significant associations between host responses to A. fumigatus and C. albicans. Our analysis identified biological processes that were consistently perturbed by these two pathogens. These processes contained both immune response-inducing genes such as MALT1, SERPINE1, ICAM1, and IL8, and immune response-repressing genes such as DUSP8, DUSP6, and SPRED2. We hypothesize that these genes belong to a pool of common immunomodulators that can potentially be activated or suppressed (agonized or antagonized) in order to render the host more tolerant to infections caused by A. fumigatus and C. albicans. Our computational approaches and methodologies described here can now be applied to newly generated or expanded data sets for further elucidation of additional drug targets. Moreover, identified immunomodulators may be used to generate experimentally testable hypotheses that could help in the discovery of broad-spectrum immunotherapeutic interventions. All of our results are available at the following supplementary website: http://bioinformatics.cs.vt.edu/~murali/supplements/2013-kidane-bmc.
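
    Associations between host responses to two pathogens are typically scored by testing whether the overlap of the two differentially expressed gene sets is larger than chance. A minimal hypergeometric-test sketch follows; the gene-set memberships and genome size are hypothetical, and this illustrates the style of analysis rather than the paper's pipeline.

    ```python
    # Significance of the overlap between host gene sets responding to two
    # pathogens, via a hypergeometric test. Set memberships and counts are
    # hypothetical; gene symbols are borrowed from the abstract above.
    from scipy.stats import hypergeom

    genome_size = 20000                    # assayed host genes (assumed)
    set_a = {"MALT1", "SERPINE1", "ICAM1", "IL8", "DUSP6"}  # A. fumigatus
    set_b = {"MALT1", "ICAM1", "IL8", "DUSP8", "SPRED2"}    # C. albicans
    overlap = len(set_a & set_b)

    # P(overlap >= observed) when drawing |set_b| genes from the genome,
    # of which |set_a| are "successes".
    p_value = hypergeom.sf(overlap - 1, genome_size, len(set_a), len(set_b))
    print(f"overlap = {overlap}, hypergeometric p = {p_value:.3g}")
    ```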

  1. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. The elderly at home or in nursing homes must be attended to at all times, which places a constant burden on staff; the purpose of our system is to help those staff. A host computer controls the monorail robot to go in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of their facial expression and movements. The robot sends the images to the host computer, which checks whether anything unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly and keep their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.

  2. GRAPE-4: A special-purpose computer for gravitational N-body problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makino, Junichiro; Taiji, Makoto; Ebisuzaki, Toshikazu

    1995-12-01

    We describe GRAPE-4, a special-purpose computer for gravitational N-body simulations. In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE-4 is specialized hardware that calculates the interactions between particles. It is used with a general-purpose host computer that performs all calculations other than the force calculation. With this architecture, it is relatively easy to realize a massively parallel system. In 1991, we developed the GRAPE-3 system with a peak speed equivalent to 14.4 Gflops. It consists of 48 custom pipelined processors. In 1992 we started the development of GRAPE-4. The GRAPE-4 system will consist of 1920 custom pipeline chips. Each chip has a speed of 600 Mflops when operated on a 30 MHz clock. A prototype system with two custom LSIs was completed in July 1994, and the full system is now under manufacture.
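
    The interaction calculation that dominates the runtime, and that GRAPE-class hardware offloads, is the O(N²) pairwise force sum. A small numpy sketch of what the pipeline computes (in G = 1 units, with an assumed softening length):

    ```python
    # Direct-summation gravitational accelerations: the O(N^2) kernel that
    # GRAPE-class hardware accelerates while the host does everything else.
    import numpy as np

    def accelerations(pos, mass, eps=1e-3):
        """pos: (N,3) positions, mass: (N,), eps: softening length."""
        d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # separations
        r2 = (d ** 2).sum(axis=2) + eps ** 2               # softened |r|^2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                      # no self-force
        # a_i = sum_j m_j (r_j - r_i) / |r_j - r_i|^3, with G = 1
        return (d * (mass[np.newaxis, :, None]
                     * inv_r3[:, :, None])).sum(axis=1)

    rng = np.random.default_rng(1)
    pos = rng.standard_normal((1024, 3))
    mass = np.full(1024, 1.0 / 1024)
    print(accelerations(pos, mass).shape)   # (1024, 3)
    ```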

  3. AdaNET phase 0 support for the AdaNET Dynamic Software Inventory (DSI) management system prototype. Catalog of available reusable software components

    NASA Technical Reports Server (NTRS)

    Hanley, Lionel

    1989-01-01

    The Ada Software Repository is a public-domain collection of Ada software and information. The Ada Software Repository is one of several repositories located on the SIMTEL20 Defense Data Network host computer at White Sands Missile Range, and has been available to any host computer on the network since 26 November 1984. This repository provides a free source for Ada programs and information. The Ada Software Repository is divided into several subdirectories. These directories are organized by topic, and their names, together with a brief overview of their topics, are given here. The Ada Software Repository on SIMTEL20 serves two basic roles: to promote the exchange and use (reusability) of Ada programs and tools (including components) and to promote Ada education.

  4. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  5. Proceedings of the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering - M and C 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2013-07-01

    The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). These proceedings contain over 250 full papers, with topics ranging from reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.

  6. Software development to support sensor control of robot arc welding

    NASA Technical Reports Server (NTRS)

    Silas, F. R., Jr.

    1986-01-01

    The development of software for a Digital Equipment Corporation MINC-23 Laboratory Computer to provide the functions of a workcell host computer for Space Shuttle Main Engine (SSME) robotic welding is documented. Routines were written to transfer robot programs between the MINC and an Advanced Robotic Cyro 750 welding robot. Other routines provide advanced program editing features, while additional software allows communication with a remote computer-aided design system. Access to special robot functions was provided to allow advanced control of weld seam tracking and process control for future development programs.

  7. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  8. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  9. The Control Point Library Building System. [for Landsat MSS and RBV geometric image correction

    NASA Technical Reports Server (NTRS)

    Niblack, W.

    1981-01-01

    The Earth Resources Observation System (EROS) Data Center in Sioux Falls, South Dakota distributes precision corrected Landsat MSS and RBV data. These data are derived from master data tapes produced by the Master Data Processor (MDP), NASA's system for computing and applying corrections to the data. Included in the MDP is the Control Point Library Building System (CPLBS), an interactive, menu-driven system which permits a user to build and maintain libraries of control points. The control points are required to achieve the high geometric accuracy desired in the output MSS and RBV data. This paper describes the processing performed by CPLBS, the accuracy of the system, and the host computer and special image viewing equipment employed.

  10. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer in a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into the areas of server and storage virtualization. One commonly used dental storage system is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  11. An Artificial Neural Network-Based Decision-Support System for Integrated Network Security

    DTIC Science & Technology

    2014-09-01

    group that they need to know in order to make team-based decisions in real-time environments, (c) Employ secure cloud computing services to host mobile...THESIS Presented to the Faculty Department of Electrical and Computer Engineering Graduate School of Engineering and Management Air Force...out-of-the-loop syndrome and create complexity creep. As a result, full automation efforts can lead to inappropriate decision-making despite a

  12. Bioinspired decision architectures containing host and microbiome processing units.

    PubMed

    Heyde, K C; Gallagher, P W; Ruder, W C

    2016-09-27

    Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot within a target-containing, two-dimensional arena, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths, such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.

  13. Feasibility study of an Integrated Program for Aerospace-vehicle Design (IPAD) system. Volume 4: Design of the IPAD system. Part 1: IPAD system design requirements, phase 1, task 2

    NASA Technical Reports Server (NTRS)

    Garrocq, C. A.; Hurley, M. J.

    1973-01-01

    System requirements, software elements, and hardware equipment required for an IPAD system are defined. An IPAD conceptual design was evolved, a potential user survey was conducted, and work loads for various types of interactive terminals were projected. Various features of major host computing systems were compared, and target systems were selected in order to identify the various elements of software required.

  14. Quantum Chemical Calculations Using Accelerators: Migrating Matrix Operations to the NVIDIA Kepler GPU and the Intel Xeon Phi.

    PubMed

    Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S

    2014-03-11

    Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to the GPU and Phi systems. Double precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations that are occurring on the accelerator and/or the host. For data-transfers over PCI-e, the GPU provides the best overall performance for data sizes up to 4096 MB with consistent upload and download rates between 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
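
    The dispatch idea, choosing host or accelerator per multiply based on problem size, can be sketched as below. The crossover threshold and the CuPy backend are illustrative assumptions, not the paper's implementation; in practice the threshold is benchmarked per machine.

    ```python
    # Size-based dispatch of a double-precision matrix multiply to the
    # host (numpy) or an accelerator (CuPy shown as an illustrative
    # backend). The crossover threshold is hypothetical.
    import numpy as np

    try:
        import cupy as cp
        HAVE_GPU = True
    except ImportError:
        HAVE_GPU = False

    CROSSOVER = 1024   # dimension above which offload pays off (assumed)

    def dgemm(a, b):
        n = max(a.shape[0], a.shape[1], b.shape[1])
        if HAVE_GPU and n >= CROSSOVER:
            # PCI-e transfer up, multiply on the device, transfer back.
            return cp.asnumpy(cp.asarray(a) @ cp.asarray(b))
        return a @ b   # small problems stay on the host

    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)
    c = dgemm(a, b)
    ```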

  15. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, as can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  16. A compact control system to achieve stable voltage and low jitter trigger for repetitive intense electron-beam accelerator based on resonant charging

    NASA Astrophysics Data System (ADS)

    Qiu, Yongfeng; Liu, Jinliang; Yang, Jianhua; Cheng, Xinbing; Yang, Xiao

    2017-08-01

    A compact control system based on Delphi and a Field Programmable Gate Array (FPGA) is developed for a repetitive intense electron-beam accelerator (IEBA) whose output power is 10 GW and whose pulse duration is 160 ns. The system uses both hardware and software solutions. It comprises a host computer, a communication module and a main control unit. A device-independent applications programming interface, devised using Delphi, is installed on the host computer. The stability theory of voltage in repetitive mode is analyzed, and a detailed overview of the hardware and software configuration is presented. A high-voltage experiment showed that the control system fulfils the requirements of remote operation and data acquisition. The control system, based on a time-sequence control method, keeps the voltage of the primary capacitor constant from shot to shot, which ensured stable and reliable operation of the electron beam accelerator in repetitive mode during the experiment. Compared with the former control system based on LabVIEW and a PIC microcontroller developed in our laboratory, the present one is more compact and has higher precision in the time dimension. It is particularly useful for automatic control of the IEBA in high power microwave effects research experiments where pulse-to-pulse reproducibility is required.

  17. Fully integrated sub 100ps photon counting platform

    NASA Astrophysics Data System (ADS)

    Buckley, S. J.; Bellis, S. J.; Rosinger, P.; Jackson, J. C.

    2007-02-01

    Current state-of-the-art high-resolution counting modules, specifically designed for high timing resolution applications, are largely based on a computer-card format. This has tended to result in a costly solution that is restricted to the computer it resides in. We describe a four-channel timing module that interfaces to a computer via a USB port and operates with a resolution of less than 100 picoseconds. The core design of the system is an advanced field programmable gate array (FPGA) interfacing to a precision time interval measurement module, a mass memory block and a high speed USB 2.0 serial data port. The FPGA design allows the module to operate in a number of modes, supporting both continuous recording of photon events (time-tagging) and repetitive time binning. In time-tag mode the system reports, for each photon event, the high resolution time along with the chronological time (macro time) and the channel ID. The time-tags are uploaded in real time to a host computer via a high speed USB port, allowing continuous storage to computer memory of up to 4 million photons per second. In time-bin mode, binning is carried out at count rates of up to 10 million photons per second. Each curve resides in a block of 128,000 time-bins, each with a resolution programmable down to less than 100 picoseconds. Each bin has a limit of 65535 hits, allowing autonomous curve recording until a bin reaches the maximum count or the system is commanded to halt. Due to the large memory storage, several curves/experiments can be stored in the system prior to uploading to the host computer for analysis. This makes the module ideal for integration into applications requiring high timing resolution, such as laser ranging and fluorescence lifetime imaging using techniques such as time-correlated single photon counting (TCSPC).
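
    Time-bin mode is, in effect, host-side histogramming of photon arrival times. A sketch of the equivalent computation on uploaded time-tags follows; the synthetic decay, and treating the 16-bit bin depth as a simple clip, are illustrative assumptions.

    ```python
    # Build a TCSPC-style histogram from photon time-tags: 100 ps bins,
    # saturating at the 16-bit per-bin limit. Synthetic tags for
    # illustration; real tags would be uploaded from the module.
    import numpy as np

    BIN_PS = 100            # bin resolution in picoseconds
    N_BINS = 128_000        # bins per curve, as in the module described
    rng = np.random.default_rng(0)

    # Hypothetical micro-times (ps within the excitation period):
    # an exponential fluorescence decay with a 2 ns lifetime.
    tags_ps = rng.exponential(2000.0, size=1_000_000)

    counts, _ = np.histogram(tags_ps, bins=N_BINS,
                             range=(0, N_BINS * BIN_PS))
    counts = np.minimum(counts, 65535).astype(np.uint16)  # 16-bit depth
    print(counts[:5], counts.sum())
    ```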

  18. Remote observing with NASA's Deep Space Network

    NASA Astrophysics Data System (ADS)

    Kuiper, T. B. H.; Majid, W. A.; Martinez, S.; Garcia-Miro, C.; Rizzo, J. R.

    2012-09-01

    The Deep Space Network (DSN) communicates with spacecraft as far away as the boundary between the Solar System and the interstellar medium. To make this possible, large sensitive antennas at Canberra, Australia, Goldstone, California, and Madrid, Spain, provide for constant communication with interplanetary missions. We describe the procedures for radioastronomical observations using this network. Remote access to science monitor and control computers by authorized observers is provided by two-factor authentication through a gateway at the Jet Propulsion Laboratory (JPL) in Pasadena. To make such observations practical, we have devised schemes based on SSH tunnels and distributed computing. At the very minimum, one can use SSH tunnels and VNC (Virtual Network Computing, a remote desktop software suite) to control the science hosts within the DSN Flight Operations network. In this way we have controlled up to three telescopes simultaneously. However, X-window updates can be slow and there are issues involving incompatible screen sizes and multi-screen displays. Consequently, we are now developing SSH tunnel-based schemes in which instrument control and monitoring, and intense data processing, are done on-site by the remote DSN hosts while data manipulation and graphical display are done at the observer's host. We describe our approaches to various challenges, our experience with what worked well and lessons learned, and directions for future development.
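
    A minimal sketch of the SSH-tunnel-plus-VNC pattern described, launched from Python, is shown below. The host names, ports, and gateway address are placeholders, not DSN infrastructure details, and real access additionally requires the two-factor authentication mentioned above.

    ```python
    # Open an SSH local-forward tunnel so a VNC viewer on the observer's
    # machine can reach a science host behind a gateway. All host names
    # and ports are placeholders.
    import subprocess

    GATEWAY = "observer@gateway.example.org"   # hypothetical gateway
    SCIENCE_HOST = "science-host.internal"     # hypothetical science host

    # -N: no remote command; -L: forward local 5901 to the VNC display.
    tunnel = subprocess.Popen(
        ["ssh", "-N", "-L", f"5901:{SCIENCE_HOST}:5900", GATEWAY])

    # ... point a VNC viewer at localhost:5901, work, then tear down:
    # tunnel.terminate()
    ```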

  19. Computational prediction of virus-human protein-protein interactions using embedding kernelized heterogeneous data.

    PubMed

    Nourani, Esmaeil; Khunjush, Farshad; Durmuş, Saliha

    2016-05-24

    Pathogenic microorganisms exploit host cellular mechanisms and evade host defense mechanisms through molecular pathogen-host interactions (PHIs). Therefore, comprehensive analysis of these PHI networks should be an initial step for developing effective therapeutics against infectious diseases. Computational prediction of PHI data is in increasing demand because of the scarcity of experimental data. Prediction of protein-protein interactions (PPIs) within PHI systems can be formulated as a classification problem, which requires knowledge of non-interacting protein pairs. This is a restrictive requirement, since we lack datasets that report non-interacting protein pairs. In this study, we formulated the "computational prediction of PHI data" problem using kernel embedding of heterogeneous data. This eliminates the abovementioned requirement and enables us to predict new interactions without randomly labeling protein pairs as non-interacting. Domain-domain associations are used to filter the predicted results, leading to 175 novel PHIs between 170 human proteins and 105 viral proteins. To compare our results with state-of-the-art studies that use a binary classification formulation, we modified our settings to consider the same formulation. Detailed evaluations are conducted, and our results provide more than 10 percent improvement in accuracy and AUC (area under the receiver operating characteristic curve) in comparison with state-of-the-art methods.

  20. Dissecting innate immune responses with the tools of systems biology.

    PubMed

    Smith, Kelly D; Bolouri, Hamid

    2005-02-01

    Systems biology strives to derive accurate predictive descriptions of complex systems such as innate immunity. The innate immune system is essential for host defense, yet the resulting inflammatory response must be tightly regulated. Current understanding indicates that this system is controlled by complex regulatory networks, which maintain homoeostasis while accurately distinguishing pathogenic infections from harmless exposures. Recent studies have used high throughput technologies and computational techniques that presage predictive models and will be the foundation of a systems level understanding of innate immunity.

  1. Experimental system for computer network via satellite /CS/. III - Network control processor

    NASA Astrophysics Data System (ADS)

    Kakinuma, Y.; Ito, A.; Takahashi, H.; Uchida, K.; Matsumoto, K.; Mitsudome, H.

    1982-03-01

    A network control processor (NCP) has the functions of generating traffic, controlling links and controlling the transmission of bursts. The NCP executes protocols, monitors experiments, and gathers and compiles measurement data; these programs are loaded on a minicomputer (MELCOM 70/40) with 512 KB of memory. The NCP acts as a traffic generator, instead of a host computer, in the experiment. For this purpose, 15 fake stations are realized in software in each user station. This paper describes the configuration of the NCP and the implementation of the protocols for the experimental system.

  2. Ada Compiler Validation Summary Report: Certificate Number: 901212I1. 11120 Tartan Inc., Tartan Ada VMS/960MC Version 4.0 VAXstation 3100 = Intel ICE960/25 on an VMS 5.2 Intel EXV80960MC Board

    DTIC Science & Technology

    1991-01-09

    Report documentation page (extraction residue): VMS 5.2 (Target), 901212I1.11120; AUTHOR(S): IABG-AVF, Ottobrunn, Federal Republic of Germany; SPONSORING/MONITORING AGENCY: Ada Joint Program Office. From the report glossary: "...Ada implementation for which validation status is realized. Host Computer System: A computer system where Ada source programs are transformed into..."

  3. Protocol Interoperability Between DDN and ISO (Defense Data Network and International Organization for Standardization) Protocols

    DTIC Science & Technology

    1988-08-01

    Interconnection (OSI) in years. It is felt even more urgent in the past few years, with the rapid evolution of communication technologies and the...services and protocols above the transport layer are usually implemented as user-callable utilities on the host computers, it is desirable to offer them...Networks, Prentice-Hall, New Jersey, 1987 [BOND 87] Bond, John, "Parallel-Processing Concepts Finally Come Together in Real Systems", Computer Design

  4. Enhancements to the Network Repair Level Analysis (NRLA) Model Using Marginal Analysis Techniques and Centralized Intermediate Repair Facility (CIRF) Maintenance Concepts.

    DTIC Science & Technology

    1983-12-01

    while at the same time improving its operational efficiency. Through their integration and use, System Program Managers have a comprehensive analytical... systems. The NRLA program is hosted on the CREATE Operating System and contains approximately 5500 lines of computer code. It consists of a main...associated with alternative maintenance plans. As the technological complexity of weapons systems has increased, new and innovative logistical support

  5. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input; (2) the iModel Tool, a platform for users to upload their own models to compose; and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models will be a good starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914
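
    When two SBML models are merged, a first reconciliation step is identifying the species they share. A small sketch using the python-libsbml bindings follows; the file names are hypothetical, and this illustrates the task rather than PathCase-SB code.

    ```python
    # List species shared by two SBML models -- a first reconciliation
    # step when composing models. File names are hypothetical; requires
    # the python-libsbml package.
    import libsbml

    def species_ids(path):
        doc = libsbml.readSBML(path)
        model = doc.getModel()
        return {model.getSpecies(i).getId()
                for i in range(model.getNumSpecies())}

    a = species_ids("glycolysis.xml")    # hypothetical model files
    b = species_ids("tca_cycle.xml")
    print("shared species to merge:", sorted(a & b))
    ```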

  6. A Google Glass navigation system for ultrasound and fluorescence dual-mode image-guided surgery

    NASA Astrophysics Data System (ADS)

    Zhang, Zeshu; Pei, Jing; Wang, Dong; Hu, Chuanzhen; Ye, Jian; Gan, Qi; Liu, Peng; Yue, Jian; Wang, Benzhong; Shao, Pengfei; Povoski, Stephen P.; Martin, Edward W.; Yilmaz, Alper; Tweedle, Michael F.; Xu, Ronald X.

    2016-03-01

    Surgical resection remains the primary curative intervention for cancer treatment. However, the occurrence of a residual tumor after resection is very common, leading to the recurrence of the disease and the need for re-resection. We develop a surgical Google Glass navigation system that combines near infrared fluorescent imaging and ultrasonography for intraoperative detection of tumor sites and assessment of surgical resection boundaries, as well as for guiding sentinel lymph node (SLN) mapping and biopsy. The system consists of a monochromatic CCD camera, a computer, a Google Glass wearable headset, an ultrasonic machine and an array of LED light sources. All the above components, except the Google Glass, are connected to a host computer by a USB or HDMI port. Wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A control program is written in C++ to call OpenCV functions for image calibration, processing and display. The technical feasibility of the system is tested in both tumor-simulating phantoms and in a human subject. When the system is used for simulated phantom resection tasks, the tumor boundaries, invisible to the naked eye, can be clearly visualized with the surgical Google Glass navigation system. This system has also been used in an IRB-approved protocol in a single patient during SLN mapping and biopsy in the First Affiliated Hospital of Anhui Medical University, demonstrating the ability to successfully localize and resect all apparent SLNs. In summary, our tumor-simulating phantom and human subject studies have demonstrated the technical feasibility of successfully using the proposed goggle navigation system during cancer surgery.
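
    The system's control loop is an acquire-process-display cycle. The paper's control program is written in C++ against OpenCV; the sketch below shows the same loop shape in Python for consistency with the other sketches in this document, with the camera index and threshold value as placeholder assumptions.

    ```python
    # Minimal acquire-threshold-overlay loop in the spirit of the
    # described navigation system. Camera index and threshold are
    # illustrative; this is not the paper's C++ control program.
    import cv2

    cap = cv2.VideoCapture(0)            # NIR camera (placeholder index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        frame[mask > 0] = (0, 255, 0)    # paint fluorescent regions green
        cv2.imshow("overlay", frame)     # would be streamed to the Glass
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```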

  7. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
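
    For chain-structured programs the bottleneck objective can be made concrete with a small dynamic program: split a chain of module costs into k contiguous blocks, one per processor, minimizing the largest block. The sketch below illustrates that objective only; it is not Bokhari's Sum-Bottleneck path algorithm, and the module costs are invented.

    ```python
    # Partition a chain of module costs into k contiguous blocks (one per
    # processor) minimizing the bottleneck (largest block sum). A small
    # DP illustrating the chain-partitioning objective; costs invented.
    import itertools

    def min_bottleneck(costs, k):
        n = len(costs)
        prefix = [0] + list(itertools.accumulate(costs))
        INF = float("inf")
        # dp[j][i]: best bottleneck for the first i modules on j processors
        dp = [[INF] * (n + 1) for _ in range(k + 1)]
        dp[0][0] = 0.0
        for j in range(1, k + 1):
            for i in range(1, n + 1):
                for s in range(j - 1, i):          # last block is s..i-1
                    block = prefix[i] - prefix[s]
                    dp[j][i] = min(dp[j][i], max(dp[j - 1][s], block))
        return dp[k][n]

    costs = [4, 2, 7, 1, 3, 6, 2]    # per-module execution costs (invented)
    print(min_bottleneck(costs, 3))  # 11, e.g. blocks [4,2] [7,1] [3,6,2]
    ```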

  8. The Defense Message System and the U.S. Coast Guard

    DTIC Science & Technology

    1992-06-01

    these mail services, the Internet also provides a File Transfer Protocol (FTP) and remote login between host computers (TELNET) capabilities. 17 [Ref...the Joint Maritime Intelligence Element (JMIE), Zincdust, and Emerald. [Ref. 27] 4. Secure Data Network The Coast Guard’s Secure Data Network (SDN

  9. Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures

    DTIC Science & Technology

    2015-05-27

    PREC: Practical Root Exploit Containment for Android Devices, ACM Conference on Data and Application Security and Privacy (CODASPY), 03-MAR-14; Hiep Nguyen, Yongmin Tan, Xiaohui Gu, Propagation-aware Anomaly Localization for Cloud Hosted Distributed Applications, ACM Workshop on Managing Large-Scale Systems via the Analysis of System Logs and the Application of Machine Learning Techniques (SLAML), in conjunction with SOSP, 05-OCT-11

  10. Integrating Network Management for Cloud Computing Services

    DTIC Science & Technology

    2015-06-01

    abstraction and system design. In this dissertation, we make three major contributions. We first propose to consolidate the traffic and infrastructure management... 1.3.1 Safe Datacenter Traffic/Infrastructure Management; 1.3.2 End-host/Network Cooperative Traffic Management; 1.3.3 Direct

  11. A High Performance Micro Channel Interface for Real-Time Industrial Image Processing

    Treesearch

    Thomas H. Drayer; Joseph G. Tront; Richard W. Conners

    1995-01-01

    Data collection and transfer devices are critical to the performance of any machine vision system. The interface described in this paper collects image data from a color line scan camera and transfers the data obtained into the system memory of a Micro Channel-based host computer. A maximum data transfer rate of 20 Mbytes/sec can be achieved using the DMA capabilities...

  12. Realization of Intelligent Measurement and Control System for Limb Rehabilitation Based on PLC and Touch Screen

    NASA Astrophysics Data System (ADS)

    Liu, Xiangquan

    According to the treatment needs of patients with limb movement disorders, and building on a limb rehabilitation training prototype, the functions of the measurement and control system are analyzed and the design of the system hardware and software is completed. The touch screen, adopted as the host computer and man-machine interaction window, is responsible for sending commands and displaying training information. The PLC, adopted as the slave computer, is responsible for receiving control commands from the touch screen, collecting sensor data, and regulating the torque and speed of the motor through analog output according to the selected training mode, ultimately realizing active and passive training for limb rehabilitation therapy.

  13. A vector-product information retrieval system adapted to heterogeneous, distributed computing environments

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library System (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
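
    Vector-product retrieval ranks documents by the inner product (cosine) between query and document term-weight vectors. A minimal sketch with scikit-learn follows; the corpus is invented, and NELS itself predates these libraries.

    ```python
    # Rank documents against a natural-language query by TF-IDF cosine
    # similarity -- the core of a vector-product retrieval system. The
    # tiny corpus is invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "thermal protection system tile inspection report",
        "budget overview for manufacturing research and development",
        "biotechnology research archive: protein crystal growth",
    ]
    query = ["protein growth research"]

    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform(docs)
    scores = cosine_similarity(vec.transform(query), doc_matrix).ravel()
    for rank, i in enumerate(scores.argsort()[::-1], 1):
        print(rank, f"{scores[i]:.3f}", docs[i])
    ```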

  14. Cloud hosting of the IPython Notebook to Provide Collaborative Research Environments for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Gomez-Dans, Jose; Holt, John

    2015-04-01

    We explore how the popular IPython Notebook computing system can be hosted on a cloud platform to provide a flexible virtual research hosting environment for Earth Observation data processing and analysis and how this approach can be expanded more broadly into a generic SaaS (Software as a Service) offering for the environmental sciences. OPTIRAD (OPTImisation environment for joint retrieval of multi-sensor RADiances) is a project funded by the European Space Agency to develop a collaborative research environment for Data Assimilation of Earth Observation products for land surface applications. Data Assimilation provides a powerful means to combine multiple sources of data and derive new products for this application domain. To be most effective, it requires close collaboration between specialists in this field, land surface modellers and end users of data generated. A goal of OPTIRAD then is to develop a collaborative research environment to engender shared working. Another significant challenge is that of data volume and complexity. Study of land surface requires high spatial and temporal resolutions, a relatively large number of variables and the application of algorithms which are computationally expensive. These problems can be addressed with the application of parallel processing techniques on specialist compute clusters. However, scientific users are often deterred by the time investment required to port their codes to these environments. Even when successfully achieved, it may be difficult to readily change or update. This runs counter to the scientific process of continuous experimentation, analysis and validation. The IPython Notebook provides users with a web-based interface to multiple interactive shells for the Python programming language. Code, documentation and graphical content can be saved and shared making it directly applicable to OPTIRAD's requirements for a shared working environment. Given the web interface it can be readily made into a hosted service with Wakari and Microsoft Azure being notable examples. Cloud-hosting of the Notebook allows the same familiar Python interface to be retained but backed by Cloud Computing attributes of scalability, elasticity and resource pooling. This combination makes it a powerful solution to address the needs of long-tail science users of Big Data: an intuitive interactive interface with which to access powerful compute resources. IPython Notebook can be hosted as a single user desktop environment but the recent development by the IPython community of JupyterHub enables it to be run as a multi-user hosting environment. In addition, IPython.parallel allows the exposition of parallel compute infrastructure through a Python interface. Applying these technologies in combination, a collaborative research environment has been developed for OPTIRAD on the UK JASMIN/CEMS facility's private cloud (http://jasmin.ac.uk). Based on this experience, a generic virtualised solution is under development suitable for use by the wider environmental science community - on both JASMIN and portable to third party cloud platforms.
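
    The IPython.parallel mechanism mentioned above exposes the cluster through an ordinary Python client. A minimal sketch follows, assuming a cluster is already running (e.g., via ipcluster start -n 4); the per-tile function and data are illustrative, and the package is distributed as ipyparallel in later releases.

    ```python
    # Fan a computation out to IPython parallel engines from a notebook
    # cell. Sketch only: assumes a running cluster; in IPython 3 the
    # import was IPython.parallel, later releases ship it as ipyparallel.
    from ipyparallel import Client

    rc = Client()                      # connect to the running cluster
    view = rc.load_balanced_view()

    def assimilate(tile):
        """Placeholder for a per-tile retrieval/assimilation step."""
        return sum(tile) / len(tile)

    tiles = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # illustrative data chunks
    results = view.map_sync(assimilate, tiles)  # runs across the engines
    print(results)
    ```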

  15. Ontology-based representation and analysis of host-Brucella interactions.

    PubMed

    Lin, Yu; Xiang, Zuoshuang; He, Yongqun

    2015-01-01

    Biomedical ontologies are representations of classes of entities in the biomedical domain and how these classes are related in computer- and human-interpretable formats. Ontologies support data standardization and exchange and provide a basis for computer-assisted automated reasoning. IDOBRU is an ontology in the domain of Brucella and brucellosis. Brucella is a Gram-negative intracellular bacterium that causes brucellosis, the most common zoonotic disease in the world. In this study, IDOBRU is used as a platform to model and analyze how the hosts, especially host macrophages, interact with virulent Brucella strains or live attenuated Brucella vaccine strains. Such a study allows us to better integrate and understand intricate Brucella pathogenesis and host immunity mechanisms. Different levels of host-Brucella interactions based on different host cell types and Brucella strains were first defined ontologically. Three important processes of virulent Brucella interacting with host macrophages were represented: Brucella entry into macrophage, intracellular trafficking, and intracellular replication. Two Brucella pathogenesis mechanisms were ontologically represented: Brucella Type IV secretion system that supports intracellular trafficking and replication, and Brucella erythritol metabolism that participates in Brucella intracellular survival and pathogenesis. The host cell death pathway is critical to the outcome of host-Brucella interactions. For better survival and replication, virulent Brucella prevents macrophage cell death. However, live attenuated B. abortus vaccine strain RB51 induces caspase-2-mediated proinflammatory cell death. Brucella-associated cell death processes are represented in IDOBRU. The gene and protein information of 432 manually annotated Brucella virulence factors were represented using the Ontology of Genes and Genomes (OGG) and Protein Ontology (PRO), respectively. Seven inference rules were defined to capture the knowledge of host-Brucella interactions and implemented in IDOBRU. Current IDOBRU includes 3611 ontology terms. SPARQL queries identified many results that are critical to the host-Brucella interactions. For example, out of 269 protein virulence factors related to macrophage-Brucella interactions, 81 are critical to Brucella intracellular replication inside macrophages. A SPARQL query also identified 11 biological processes important for Brucella virulence. To systematically represent and analyze fundamental host-pathogen interaction mechanisms, we provided for the first time comprehensive ontological modeling of host-pathogen interactions using Brucella as the pathogen model. The methods and ontology representations used in our study are generic and can be broadened to study the interactions between hosts and other pathogens.
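
    Queries of the kind described run against the ontology with any SPARQL engine. A minimal rdflib sketch follows; the file name, and the class IRI it selects on, are placeholders rather than actual IDOBRU identifiers.

    ```python
    # Run a SPARQL query over an ontology file with rdflib. The file name
    # and class IRI below are placeholders, not real IDOBRU IRIs.
    import rdflib

    g = rdflib.Graph()
    g.parse("idobru.owl", format="xml")   # hypothetical local copy

    query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?factor ?label WHERE {
      ?factor rdfs:subClassOf <http://example.org/idobru/VirulenceFactor> ;
              rdfs:label ?label .
    }
    """
    for factor, label in g.query(query):
        print(factor, label)
    ```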

  16. Defense Acquisitions Acronyms and Terms

    DTIC Science & Technology

    2012-12-01

    Computer-Aided Design CADD Computer-Aided Design and Drafting CAE Component Acquisition Executive; Computer-Aided Engineering CAIV Cost As an...Radiation to Ordnance HFE Human Factors Engineering HHA Health Hazard Assessment HNA Host-Nation Approval HNS Host-Nation Support HOL High -Order...Engineering Change Proposal VHSIC Very High Speed Integrated Circuit VLSI Very Large Scale Integration VOC Volatile Organic Compound W WAN Wide

  17. Lowering the Barrier for Standards-Compliant and Discoverable Hydrological Data Publication

    NASA Astrophysics Data System (ADS)

    Kadlec, J.

    2013-12-01

    The growing need for sharing and integration of hydrological and climate data across multiple organizations has resulted in the development of distributed, services-based, standards-compliant hydrological data management and data hosting systems. The problem with these systems is complicated set-up and deployment. Many existing systems assume that the data publisher has remote-desktop access to a locally managed server and experience with computer network setup. For corporate websites, shared web hosting services with limited root access provide an inexpensive, dynamic web presence solution using the Linux, Apache, MySQL and PHP (LAMP) software stack. In this paper, we hypothesize that a webhosting service provides an optimal, low-cost solution for hydrological data hosting. We propose a software architecture of a standards-compliant, lightweight and easy-to-deploy hydrological data management system that can be deployed on the majority of existing shared internet webhosting services. The architecture and design is validated by developing Hydroserver Lite: a PHP and MySQL-based hydrological data hosting package that is fully standards-compliant and compatible with the Consortium of Universities for Advancement of Hydrologic Sciences (CUAHSI) hydrologic information system. It is already being used for management of field data collection by students of the McCall Outdoor Science School in Idaho. For testing, the Hydroserver Lite software has been installed on multiple different free and low-cost webhosting sites including Godaddy, Bluehost and 000webhost. The number of steps required to set-up the server is compared with the number of steps required to set-up other standards-compliant hydrologic data hosting systems including THREDDS, IstSOS and MapServer SOS.

  18. GATEWAY - COMMUNICATIONS GATEWAY SOFTWARE FOR NETEX, DECNET, AND TCP/IP

    NASA Technical Reports Server (NTRS)

    Keith, B.

    1994-01-01

    The Communications Gateway Software, GATEWAY, provides process-to-process communication between remote application programs in different protocol domains. Communicating peer processes may be resident on any paired combination of NETEX, DECnet, or TCP/IP hosts. The gateway provides the necessary mapping from one protocol to another and will facilitate practical intermachine communications in a cost-effective manner by eliminating the need to standardize on a single protocol or the need to implement multiple protocols in the host computers. The purpose of the gateway is to support data transfers between application programs on different host computers using different protocols. The gateway computer must be physically connected to both host computers and must contain the system software needed to use the communication protocols of both host computers. The communication process between application partners can be divided into three phases: session establishment, data transfer, and session termination. The communication protocols supported by GATEWAY (DECnet, NETEX, and TCP/IP) have addressing mechanisms that allow an application to identify itself and distinguish among other applications on the network. The exact form of the address varies depending on whether an application is passively offering (awaiting the receipt of a network connection from another network application) or actively connecting to another network. When the gateway is started, GATEWAY reads a file of address pairs. One of the address pairs is used by GATEWAY for passively offering on one network while the other address in the pair is used for actively connecting on the other network, establishing the session. Now the two application partners can send and receive data in a manner appropriate to their home networks. GATEWAY accommodates full duplex transmissions. Thus, if the application partners are sophisticated enough, they can send and receive simultaneously. GATEWAY also keeps track of the number of bytes contained in each transferred data packet. If GATEWAY detects an error during the data transfer, the sessions on both networks are terminated and the passive offer on the appropriate network is reissued. After performing the desired data transfer, one of the remote applications will send a network disconnect to the gateway to close its communication link. Upon detecting this network disconnect, GATEWAY replies with its own disconnect to ensure that the network connection has been fully terminated. Then, GATEWAY terminates its session with the other application by closing the communication link. GATEWAY has been implemented on a DEC VAX under VMS 4.7. It is written in Ada and has a central memory requirement of approximately 406K bytes. The communications protocols supported by GATEWAY are Network Systems Corporation's Network Executive (NETEX), Excelan's TCP/IP, and DECnet. GATEWAY was developed in 1988.
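
    The passive-offer / active-connect relay pattern described above can be sketched in a few lines of Python sockets. This is a single-protocol (TCP-to-TCP) toy, not GATEWAY itself, which mapped between NETEX, DECnet, and TCP/IP; the addresses and buffer size are illustrative.

        # Minimal sketch of a passive-offer / active-connect relay, TCP on both sides.
        import socket
        import threading

        OFFER_ADDR = ("0.0.0.0", 5000)      # passive offer on network A (illustrative)
        CONNECT_ADDR = ("10.0.0.2", 6000)   # active connect on network B (illustrative)

        def pump(src, dst):
            """Copy bytes one way until either side disconnects."""
            try:
                while data := src.recv(4096):
                    dst.sendall(data)
            except OSError:
                pass                         # peer closed; fall through
            finally:
                src.close()                  # mirror the disconnect to both sides
                dst.close()

        listener = socket.socket()
        listener.bind(OFFER_ADDR)
        listener.listen(1)
        a_side, _ = listener.accept()                     # session establishment, side A
        b_side = socket.create_connection(CONNECT_ADDR)   # side B

        # Full-duplex data transfer: one thread per direction.
        threading.Thread(target=pump, args=(a_side, b_side)).start()
        pump(b_side, a_side)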

  19. Aligning Microtomography Analysis with Traditional Anatomy for a 3D Understanding of the Host-Parasite Interface – Phoradendron spp. Case Study

    PubMed Central

    Teixeira-Costa, Luíza; Ceccantini, Gregório C. T.

    2016-01-01

    The complex endophytic structure formed by parasitic plant species often represents a challenge in the study of the host-parasite interface. Even with large numbers of anatomical slides, a three-dimensional comprehension of the structure may still be difficult to obtain. In the present study we applied High Resolution X-ray Computed Tomography (HRXCT) analysis along with standard plant anatomy techniques in order to compare the infestation patterns of two mistletoe species of the genus Phoradendron. Additionally, we tested the use of contrasting solutions in order to improve the detection of the parasite's endophytic tissue. To our knowledge, this is the first study to show the three-dimensional structure of the host-mistletoe interface using the HRXCT technique. Results showed that Phoradendron perrottetii growing on the host Tapirira guianensis forms small woody galls with a restricted endophytic system. The sinkers were short and eventually grouped, creating a continuous interface with the host wood. On the other hand, the long sinkers of P. bathyoryctum penetrate deeply into the wood of Cedrela fissilis, branching in all directions throughout the woody gall area and forming a spread-out infestation pattern. The results indicate that HRXCT is indeed a powerful approach to understanding the endophytic system of parasitic plants. The combination of three-dimensional models of the infestation with anatomical analysis provided a broader understanding of the host-parasite connection. Unique anatomical features are reported for the sinkers of P. perrottetii, while the endophytic tissue of P. bathyoryctum conformed to the general anatomy observed for other species of this genus. These differences are hypothesized to be related to the three-dimensional structure of each endophytic system and the communication established with the host. PMID:27630661

  20. Final Report: A Broad Research Project on the Sciences of Complexity, September 15, 1994 - November 15, 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2000-02-01

    DOE support for a broad research program in the sciences of complexity permitted the Santa Fe Institute to initiate new collaborative research within its integrative core activities as well as to host visitors to participate in research on specific topics that serve as motivation and testing ground for the study of the general principles of complex systems. Results are presented on computational biology, biodiversity and ecosystem research, and advanced computing and simulation.

  1. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.

  2. A metabolic network approach for the identification and prioritization of antimicrobial drug targets

    PubMed Central

    Chavali, Arvind K.; D’Auria, Kevin M.; Hewlett, Erik L.; Pearson, Richard D.; Papin, Jason A.

    2012-01-01

    For many infectious diseases, novel treatment options are needed to address problems with cost, toxicity and resistance to current drugs. Systems biology tools can be used to gain valuable insight into pathogenic processes and aid in expediting drug discovery. In the past decade, constraint-based modeling of genome-scale metabolic networks has become widely used. Focusing on pathogen metabolic networks, we review in silico strategies to identify effective drug targets, and we highlight recent successes as well as limitations associated with such computational analyses. We further discuss how accounting for the host environment and even targeting the host may offer new therapeutic options. These systems-level approaches are beginning to provide novel avenues for drug targeting against infectious agents. PMID:22300758
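
    For a concrete flavor of the constraint-based screening the review discusses, the hedged sketch below uses COBRApy to knock out each gene in a genome-scale metabolic model and flag knockouts that abolish growth. The SBML file name is a placeholder and the essentiality cutoff is illustrative.

        # Minimal sketch of in silico gene-deletion screening with COBRApy.
        # "pathogen_model.xml" is a placeholder for a real genome-scale model.
        import cobra
        from cobra.flux_analysis import single_gene_deletion

        model = cobra.io.read_sbml_model("pathogen_model.xml")
        wild_type_growth = model.optimize().objective_value

        results = single_gene_deletion(model)
        # Genes whose knockout abolishes growth are candidate drug targets.
        essential = results[results["growth"] < 0.01 * wild_type_growth]
        print(essential)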

  3. Charliecloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Tim

    Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
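
    As a rough illustration of what running a single VM of such a cluster involves, the sketch below shells out to QEMU from Python. The image name, resource sizes, and networking options are assumptions for illustration; they are not Charliecloud's actual scripts or defaults.

        # Minimal sketch: boot one VM of a virtual cluster by shelling out to QEMU.
        # All values below (image, RAM, CPUs, forwarded port) are illustrative.
        import subprocess

        subprocess.run([
            "qemu-system-x86_64",
            "-m", "2048",                  # RAM per VM, a cluster parameter
            "-smp", "2",                   # virtual CPUs
            "-drive", "file=node0.qcow2,format=qcow2",
            "-netdev", "user,id=n0,hostfwd=tcp::2222-:22",  # SSH into the VM
            "-device", "virtio-net-pci,netdev=n0",
            "-nographic",
        ], check=True)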

  4. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. 
In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
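
    The subdivide-and-reassemble idea behind the Euclidean distance article can be caricatured with numpy and scipy, as below. Note that this naive per-tile transform ignores sources outside each tile, which is exactly the problem the paper's one-pixel "layer-wrap" of edge distance information solves; the tile size and source density here are arbitrary.

        # Toy illustration of tiling a raster and computing a per-tile distance
        # transform, then reassembling the result. Naive: ignores cross-tile sources.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        rng = np.random.default_rng(0)
        raster = rng.random((512, 512)) > 0.99   # sparse source cells (True = source)
        tile = 128
        out = np.empty(raster.shape)

        for i in range(0, raster.shape[0], tile):
            for j in range(0, raster.shape[1], tile):
                sub = raster[i:i + tile, j:j + tile]
                # distance_transform_edt measures distance to zero cells,
                # so invert: sources are True in `raster`.
                out[i:i + tile, j:j + tile] = distance_transform_edt(~sub)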

  5. Dynamic gas temperature measurement system. Volume 2: Operation and program manual

    NASA Technical Reports Server (NTRS)

    Purpura, P. T.

    1983-01-01

    The hot section technology (HOST) dynamic gas temperature measurement system computer program acquires data from two type B thermocouples of different diameters. The analysis method determines the in situ value of an aerodynamic parameter T, containing the heat transfer coefficient, from the transfer function of the two thermocouples. This aerodynamic parameter is used to compute a frequency response spectrum and compensate the dynamic portion of the signal of the smaller thermocouple. The calculations for the aerodynamic parameter and the data compensation technique are discussed. Compensated data are presented in either the time domain, as dynamic temperature versus time, or in the frequency domain.
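
    The compensation step lends itself to a short numpy illustration: treat the small thermocouple as a first-order lag and invert that response in the frequency domain. The time constant below stands in for the quantity derived from the aerodynamic parameter; the sample rate and signal are placeholders, not HOST program values.

        # Minimal sketch of first-order frequency-domain thermocouple compensation.
        import numpy as np

        fs = 1000.0                    # sample rate, Hz (illustrative)
        tau = 0.05                     # effective time constant, s (illustrative)
        x = np.random.randn(4096)      # stand-in for the small-thermocouple signal

        X = np.fft.rfft(x)
        f = np.fft.rfftfreq(x.size, d=1.0 / fs)
        # Invert the first-order lag H(f) = 1 / (1 + j*2*pi*f*tau)
        X_comp = X * (1.0 + 1j * 2.0 * np.pi * f * tau)
        x_comp = np.fft.irfft(X_comp, n=x.size)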

  6. Ada Compiler Validation Summary Report: Certificate Number: 910920W1.11211 Verdix Corporation, VADS Sun4 SunOS => 68020/30 ARTX, VAda-110-40120, Version 6.0, SPARCstation 2 (Host) to Motorola MVME147 (Target)

    DTIC Science & Technology

    1991-09-20

    Host Computer System: SPARCstation 2 (SunOS Release 4.1.1); Target Computer System: Motorola MVME147 (Motorola 68030 Bare Board); Customer Agreement Number: 91-07-16-VRX (see section 3.1). AVF-VSR-504.0292, 18 February 1992. Ada Compiler Validation Summary Report: Certificate Number 910920W1.11211, VERDIX Corporation, VADS.

  7. Robust Operation of Tendon-Driven Robot Fingers Using Force and Position-Based Control Laws

    NASA Technical Reports Server (NTRS)

    Hargrave, Brian (Inventor); Abdallah, Muhammad E (Inventor); Reiland, Matthew J (Inventor); Diftler, Myron A (Inventor); Strawser, Philip A (Inventor); Platt, Jr., Robert J. (Inventor); Ihrke, Chris A. (Inventor)

    2013-01-01

    A robotic system includes a tendon-driven finger and a control system. The system controls the finger via a force-based control law when a tension sensor is available, and via a position-based control law when a sensor is not available. Multiple tendons may each have a corresponding sensor. The system selectively injects a compliance value into the position-based control law when only some sensors are available. A control system includes a host machine and a non-transitory computer-readable medium having a control process, which is executed by the host machine to control the finger via the force- or position-based control law. A method for controlling the finger includes determining the availability of a tension sensor(s), and selectively controlling the finger, using the control system, via the force or position-based control law. The position control law allows the control system to resist disturbances while nominally maintaining the initial state of internal tendon tensions.
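
    The switching logic of the claims can be sketched as a small Python function: use the force-based law when tension sensing is available, otherwise fall back to the position-based law, injecting a compliance term when only some sensors report. The gains, signatures, and compliance form are illustrative assumptions, not the patented control laws.

        # Illustrative sketch of force/position control-law selection for one tendon.
        def tendon_command(tension, x, x_ref, f_ref, sensors_ok, some_sensors_ok):
            if sensors_ok:
                # Force-based law: servo measured tendon tension to a reference.
                return 5.0 * (f_ref - tension)
            u = 20.0 * (x_ref - x)             # position-based law
            if some_sensors_ok:
                u += 0.1 * tension             # injected compliance term (illustrative)
            return u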

  8. Interface of the transport systems research vehicle monochrome display system to the digital autonomous terminal access communication data bus

    NASA Technical Reports Server (NTRS)

    Easley, W. C.; Tanguy, J. S.

    1986-01-01

    An upgrade of the transport systems research vehicle (TSRV) experimental flight system retained the original monochrome display system. The original host computer was replaced with a Norden 11/70, a new digital autonomous terminal access communication (DATAC) data bus was installed for data transfer between display system and host, while a new data interface method was required. The new display data interface uses four split phase bipolar (SPBP) serial busses. The DATAC bus uses a shared interface ram (SIR) for intermediate storage of its data transfer. A display interface unit (DIU) was designed and configured to read from and write to the SIR to properly convert the data from parallel to SPBP serial and vice versa. It is found that separation of data for use by each SPBP bus and synchronization of data tranfer throughout the entire experimental flight system are major problems which require solution in DIU design. The techniques used to accomplish these new data interface requirements are described.

  9. JPRS Report, Science & Technology, China, High-Performance Computer Systems

    DTIC Science & Technology

    1992-10-28

    Microprocessor array: the microprocessor array in the AP85 system is composed of 16 completely identical array element microprocessors. Each array element...microprocessors and capable of host machine reading and writing. The memory capacity of the array element microprocessors as a whole can be expanded...transmission functions to carry out data transmission from array element microprocessor to array element microprocessor, from array element

  10. The Philip Morris Information Network: A Library Database on an In-House Timesharing System.

    ERIC Educational Resources Information Center

    DeBardeleben, Marian Z.; And Others

    1983-01-01

    Outlines a database constructed at Philip Morris Research Center Library which encompasses holdings and circulation and acquisitions records for all items in the library. Host computer (DECSYSTEM-2060), software (BASIC), database design, search methodology, cataloging, and accessibility are noted; sample search, circ-in profile, end user profiles,…

  11. Future directions in flight simulation: A user perspective

    NASA Technical Reports Server (NTRS)

    Jackson, Bruce

    1993-01-01

    Langley Research Center was an early leader in simulation technology, including a special emphasis in space vehicle simulations such as the rendezvous and docking simulator for the Gemini program and the lunar landing simulator used before Apollo. In more recent times, Langley operated the first synergistic six degree of freedom motion platform (the Visual Motion Simulator, or VMS) and developed the first dual-dome air combat simulator, the Differential Maneuvering Simulator (DMS). Each Langley simulator was developed more or less independently from one another with different programming support. At present time, the various simulation cockpits, while supported by the same host computer system, run dissimilar software. The majority of recent investments in Langley's simulation facilities have been hardware procurements: host processors, visual systems, and most recently, an improved motion system. Investments in software improvements, however, have not been of the same order.

  12. The Fermilab Accelerator control system

    NASA Astrophysics Data System (ADS)

    Bogert, Dixon

    1986-06-01

    With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as large an amount of data as possible available to many operators or end-users. Specifically there are about 100 000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central data base stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central filesharing capability in which files on VAX disks are available for access by any of the "Host" processors.

  13. Implementation of a High-Speed FPGA and DSP Based FFT Processor for Improving Strain Demodulation Performance in a Fiber-Optic-Based Sensing System

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    NASA's Aviation Safety and Security Program is pursuing research in on-board Structural Health Management (SHM) technologies for purposes of reducing or eliminating aircraft accidents due to system and component failures. Under this program, NASA Langley Research Center (LaRC) is developing a strain-based structural health-monitoring concept that incorporates a fiber optic-based measuring system for acquiring strain values. This fiber optic-based measuring system provides for the distribution of thousands of strain sensors embedded in a network of fiber optic cables. The resolution of strain value at each discrete sensor point requires a computationally demanding data reduction software process that, when hosted on a conventional processor, is not suitable for near real-time measurement. This report describes the development and integration of an alternative computing environment using dedicated computing hardware for performing the data reduction. Performance comparison between the existing and the hardware-based system is presented.
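
    As a generic stand-in for the data-reduction step (not NASA LaRC's actual algorithm), the sketch below takes an FFT of a sampled interference signal and extracts the dominant peak in a per-sensor frequency window, the kind of operation that was offloaded to the FPGA/DSP hardware.

        # Generic FFT peak-extraction sketch; all values are illustrative.
        import numpy as np

        fs = 100_000.0
        signal = np.random.randn(8192)          # stand-in for the optical signal
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

        # Each sensor occupies a known frequency window; the peak location
        # within that window tracks the strain at that sensor.
        lo, hi = 1000, 2000                      # illustrative window indices
        peak = freqs[lo + np.argmax(spectrum[lo:hi])]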

  14. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel Adrian; Oprisan, Ana

    2005-03-01

    Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly-defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second sub-system consists of cytotoxic active (effector) cells (EC), with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutic, and radiotherapeutic treatments.
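
    One standard way to obtain the fractal dimension mentioned above is box counting on a binary mask of the tumor boundary; a minimal numpy sketch, with illustrative box sizes and assuming a non-empty mask, follows.

        # Minimal box-counting estimate of fractal dimension for a binary mask.
        import numpy as np

        def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                # Count boxes of side s that contain any occupied pixel.
                h = mask.shape[0] // s * s
                w = mask.shape[1] // s * s
                boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
                counts.append(boxes.sum())
            # Slope of log(count) vs log(1/size) estimates the dimension.
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        # Usage: a dense random mask should give a dimension close to 2.
        print(box_count_dimension(np.random.rand(256, 256) > 0.5))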

  15. ePMV embeds molecular modeling into professional animation software environments.

    PubMed

    Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2011-03-09

    Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. ePMV Embeds Molecular Modeling into Professional Animation Software Environments

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2011-01-01

    SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181

  17. MEDLINE: the options for health professionals.

    PubMed

    Wood, E H

    1994-01-01

    The bibliographic database MEDLINE, produced by the National Library of Medicine (NLM), is a computerized index to the world's biomedical literature. The database can be searched back to 1966 and contains 6.8 million records. The various means of access are divided, for the purposes of this article, into three categories: logging onto a remote host computer by telephone and modem or by the Internet; subscribing to part or all of the database on compact disc (CD-ROM); and leasing the data on a transport medium such as magnetic tape or CDs for loading on a local host computer. Decisions about which method is preferable in a given situation depend on cost, availability of hardware and software, local expertise, and the size of the intended user population. Trends include increased access to the Internet by health professionals, increased network speed, links from MEDLINE records to full-text databases or online journals, and integration of MEDLINE into wider health information systems.

  18. Metagenomic systems biology of the human gut microbiome reveals topological shifts associated with obesity and inflammatory bowel disease.

    PubMed

    Greenblum, Sharon; Turnbaugh, Peter J; Borenstein, Elhanan

    2012-01-10

    The human microbiome plays a key role in a wide range of host-related processes and has a profound effect on human health. Comparative analyses of the human microbiome have revealed substantial variation in species and gene composition associated with a variety of disease states but may fall short of providing a comprehensive understanding of the impact of this variation on the community and on the host. Here, we introduce a metagenomic systems biology computational framework, integrating metagenomic data with an in silico systems-level analysis of metabolic networks. Focusing on the gut microbiome, we analyze fecal metagenomic data from 124 unrelated individuals, as well as six monozygotic twin pairs and their mothers, and generate community-level metabolic networks of the microbiome. Placing variations in gene abundance in the context of these networks, we identify both gene-level and network-level topological differences associated with obesity and inflammatory bowel disease (IBD). We show that genes associated with either of these host states tend to be located at the periphery of the metabolic network and are enriched for topologically derived metabolic "inputs." These findings may indicate that lean and obese microbiomes differ primarily in their interface with the host and in the way they interact with host metabolism. We further demonstrate that obese microbiomes are less modular, a hallmark of adaptation to low-diversity environments. We additionally link these topological variations to community species composition. The system-level approach presented here lays the foundation for a unique framework for studying the human microbiome, its organization, and its impact on human health.
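
    The topological quantities discussed (modularity, peripheral placement) can be computed with networkx, as in the hedged sketch below; the built-in karate club graph stands in for a real community-level metabolic network.

        # Minimal networkx sketch of modularity and periphery measures.
        import networkx as nx
        from networkx.algorithms import community

        G = nx.karate_club_graph()          # stand-in for a metabolic network

        parts = community.greedy_modularity_communities(G)
        print("modularity:", community.modularity(G, parts))

        # Nodes whose eccentricity equals the graph diameter sit on the periphery.
        print("peripheral nodes:", nx.periphery(G))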

  19. eTRIKS platform: Conception and operation of a highly scalable cloud-based platform for translational research and applications development.

    PubMed

    Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita

    2018-04-01

    We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed with the aim of building a platform that can securely host heterogeneous types of data and provide an optimal environment to run tranSMART analytical applications. Many types of data can now be hosted, including multi-OMICS data, preclinical laboratory data and clinical information, including longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. A software control system for the ACTS high-burst-rate link evaluation terminal

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Daugherty, Elaine S.

    1991-01-01

    Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal and data display. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.
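
    The uplink power control module can be caricatured as a simple closed loop: raise transmit power when the received signal quality drops below target during a fade. The sketch below is a generic proportional controller with illustrative gains and limits, not the HBR-LET implementation.

        # Generic closed-loop uplink power adjustment for rain-fade compensation.
        def adjust_uplink_power(current_dbm, received_snr_db, target_snr_db,
                                gain=0.5, max_dbm=30.0, min_dbm=0.0):
            """Raise power when the measured SNR falls below target (fade)."""
            error = target_snr_db - received_snr_db
            new_power = current_dbm + gain * error
            return max(min_dbm, min(max_dbm, new_power))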

  1. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.

  2. Fault-tolerant building-block computer study

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    Ultra-reliable core computers are required for improving the reliability of complex military systems. Such computers can provide reliable fault diagnosis, failure circumvention, and, in some cases, serve as an automated repairman for their host systems. A small set of building-block circuits, which can be implemented as single very large scale integration devices and used with off-the-shelf microprocessors and memories to build self-checking computer modules (SCCM), is described. Each SCCM is a microcomputer which is capable of detecting its own faults during normal operation and is designed to communicate with other identical modules over one or more MIL-STD-1553A buses. Several SCCMs can be connected into a network with backup spares to provide fault-tolerant operation, i.e., automated recovery from faults. Alternative fault-tolerant SCCM configurations are discussed along with the cost and reliability associated with their implementation.

  3. Quantum Testbeds Stakeholder Workshop (QTSW) Report meeting purpose and agenda.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hebner, Gregory A.

    Quantum computing (QC) is a promising early-stage technology with the potential to provide scientific computing capabilities far beyond what is possible with even an Exascale computer in specific problems of relevance to the Office of Science. These include (but are not limited to) materials modeling, molecular dynamics, and quantum chromodynamics. However, commercial QC systems are not yet available and the technical maturity of current QC hardware, software, algorithms, and systems integration is woefully incomplete. Thus, there is a significant opportunity for DOE to define the technology building blocks and solve the system integration issues to enable a revolutionary tool. Once realized, QC will have world-changing impact on economic competitiveness, the scientific enterprise, and citizen well-being. Prior to this workshop, DOE / Office of Advanced Scientific Computing Research (ASCR) hosted a workshop in 2015 to explore QC scientific applications. The goal of that workshop was to assess the viability of QC technologies to meet the computational requirements in support of DOE's science and energy mission and to identify the potential impact of these technologies.

  4. Electronic structure and magnetic properties of dilute U impurities in metals

    NASA Astrophysics Data System (ADS)

    Mohanta, S. K.; Cottenier, S.; Mishra, S. N.

    2016-05-01

    The electronic structure and magnetic moments of dilute U impurities in metallic hosts have been calculated from first principles. The calculations have been performed within the local density approximation of density functional theory using the augmented plane wave + local orbital (APW+lo) technique, taking account of spin-orbit coupling and Coulomb correlation through the LDA+U approach. We present here our results for the local density of states, magnetic moment and hyperfine field calculated for an isolated U impurity embedded in hosts with sp-, d- and f-type conduction electrons. The results of our systematic study provide a comprehensive insight into the pressure dependence of 5f local magnetism in metallic systems. The unpolarized local density of states (LDOS), analyzed within the framework of the Stoner model, suggests the occurrence of a local moment for U in sp-elements, noble metals and f-block hosts like La, Ce, Lu and Th. In contrast, U is predicted to be nonmagnetic in most transition metal hosts except in Sc, Ti, Y, Zr, and Hf, consistent with the results obtained from spin polarized calculations. The spin and orbital magnetic moments of U computed within the LDA+U formalism show a scaling behavior with lattice compression. We have also computed the spin and orbital hyperfine fields and carried out a detailed analysis. The host-dependent trends for the magnetic moment, hyperfine field and 5f occupation reflect a pressure-induced change of electronic structure, with U valency changing from 3+ to 4+ under lattice compression. In addition, we have made a detailed analysis of the impurity-induced host spin polarization, suggesting qualitatively different roles of f-band electrons in moment stability. The results presented in this work should be helpful towards understanding magnetism and spin fluctuation in U-based alloys.
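
    For reference, the Stoner criterion invoked above is the textbook condition that a local moment forms when the product of the exchange integral I and the impurity local density of states at the Fermi level exceeds unity:

        I \, N(E_F) > 1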

  5. Genome-scale identification of Legionella pneumophila effectors using a machine learning approach.

    PubMed

    Burstein, David; Zusman, Tal; Degtyar, Elena; Viner, Ram; Segal, Gil; Pupko, Tal

    2009-07-01

    A large number of highly pathogenic bacteria utilize secretion systems to translocate effector proteins into host cells. Using these effectors, the bacteria subvert host cell processes during infection. Legionella pneumophila translocates effectors via the Icm/Dot type-IV secretion system and to date, approximately 100 effectors have been identified by various experimental and computational techniques. Effector identification is a critical first step towards the understanding of the pathogenesis system in L. pneumophila as well as in other bacterial pathogens. Here, we formulate the task of effector identification as a classification problem: each L. pneumophila open reading frame (ORF) was classified as either effector or not. We computationally defined a set of features that best distinguish effectors from non-effectors. These features cover a wide range of characteristics including taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes and more. Machine learning algorithms utilizing these features were then applied to classify all the ORFs within the L. pneumophila genome. Using this approach we were able to predict and experimentally validate 40 new effectors, reaching a success rate of above 90%. Increasing the number of validated effectors to around 140, we were able to gain novel insights into their characteristics. Effectors were found to have low G+C content, supporting the hypothesis that a large number of effectors originate via horizontal gene transfer, probably from their protozoan host. In addition, effectors were found to cluster in specific genomic regions. Finally, we were able to provide a novel description of the C-terminal translocation signal required for effector translocation by the Icm/Dot secretion system. To conclude, we have discovered 40 novel L. pneumophila effectors, predicted over a hundred additional highly probable effectors, and shown the applicability of machine learning algorithms for the identification and characterization of bacterial pathogenesis determinants.
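
    The classification setup described maps naturally onto scikit-learn; the sketch below trains a classifier on synthetic ORF feature vectors and ranks candidates by predicted effector probability. The features, labels, and choice of random forest are placeholders, not the paper's actual feature set or algorithm.

        # Minimal scikit-learn sketch of effector/non-effector ORF classification.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((300, 20))           # 300 ORFs x 20 features (synthetic)
        y = rng.integers(0, 2, 300)         # 1 = known effector, 0 = non-effector

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())

        clf.fit(X, y)
        scores = clf.predict_proba(X)[:, 1]   # rank candidate ORFs by this score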

  6. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
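
    A hedged sketch of the surrogate-modeling idea: fit a Gaussian process to a handful of simulator runs, then query the cheap emulator densely for sensitivity measures. The two-parameter "simulator" below is a stand-in function, not the agent-based Leishmania model.

        # Minimal Gaussian-process emulation sketch for sensitivity analysis.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def simulator(theta):                 # placeholder for an expensive ABM run
            growth, detection = theta
            return growth / (1.0 + detection)

        rng = np.random.default_rng(1)
        thetas = rng.random((50, 2))          # 50 sampled parameter settings
        outputs = np.array([simulator(t) for t in thetas])

        gp = GaussianProcessRegressor().fit(thetas, outputs)
        # Cheap surrogate predictions over a dense grid support sensitivity measures.
        grid = rng.random((1000, 2))
        mean, std = gp.predict(grid, return_std=True)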

  7. [Study for portable dynamic ECG monitor and recorder].

    PubMed

    Yang, Pengcheng; Li, Yongqin; Chen, Bihua

    2012-09-01

    This paper presents a portable dynamic ECG monitor system based on the MSP430F149 microcontroller. The electrocardiogram detecting system consists of an ECG detecting circuit, a man-machine interaction module, the MSP430F149, and upper computer software. The ECG detecting circuit, which includes a preamplifier, a second-order Butterworth low-pass filter, a high-pass filter, and a 50 Hz trap circuit, detects the electrocardiogram and effectively suppresses various kinds of interference. The microcontroller collects three channels of analog signals, which can be displayed on a TFT LCD. An SD card records real-time data continuously using a FAT16 file system. Finally, a host computer interface is designed to analyze the ECG signal; the analysis results can provide diagnosis references to clinical doctors.
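
    The analog front end's filtering has a direct digital counterpart in scipy, shown below: a second-order Butterworth low-pass plus a 50 Hz notch applied to a sampled trace. The sample rate, cutoff, and trace are illustrative.

        # Minimal scipy sketch: Butterworth low-pass plus a 50 Hz mains notch.
        import numpy as np
        from scipy.signal import butter, iirnotch, filtfilt

        fs = 500.0                               # sampling rate, Hz (illustrative)
        ecg = np.random.randn(5000)              # stand-in for a recorded trace

        b, a = butter(2, 40.0, btype="low", fs=fs)   # 2nd-order low-pass, 40 Hz
        ecg = filtfilt(b, a, ecg)

        b, a = iirnotch(50.0, Q=30.0, fs=fs)         # 50 Hz trap
        ecg = filtfilt(b, a, ecg)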

  8. Computer modelling of the optical behaviour of rare earth dopants in BaY2F8

    NASA Astrophysics Data System (ADS)

    Jackson, R. A.; Valerio, M. E. G.; Couto Dos Santos, M. A.; Amaral, J. B.

    2005-01-01

    BaY2F8, when doped with rare earth elements, is a material of interest in the development of solid-state laser systems, especially for use in the infrared region. This paper presents the application of a new computational technique, which combines atomistic modelling and crystal field calculations, in a study of rare earth doping of the material. Atomistic modelling is used to calculate the symmetry and detailed geometry of the dopant ion-host lattice system, and this information is then used to calculate the crystal field parameters, which are an important indicator in assessing the optical behaviour of the dopant-crystal system. Comparisons with the results of recent experimental work on this material are made.

  9. Interactive analysis of geographically distributed population imaging data collections over light-path data networks

    NASA Astrophysics Data System (ADS)

    van Lew, Baldur; Botha, Charl P.; Milles, Julien R.; Vrooman, Henri A.; van de Giessen, Martijn; Lelieveldt, Boudewijn P. F.

    2015-03-01

    The cohort size required in epidemiological imaging genetics studies often mandates the pooling of data from multiple hospitals. Patient data, however, is subject to strict privacy protection regimes, and physical data storage may be legally restricted to a hospital network. To enable biomarker discovery, fast data access and interactive data exploration must be combined with high-performance computing resources, while respecting privacy regulations. We present a system using fast and inherently secure light-paths to access distributed data, thereby obviating the need for a central data repository. A secure private cloud computing framework facilitates interactive, computationally intensive exploration of this geographically distributed, privacy sensitive data. As a proof of concept, MRI brain imaging data hosted at two remote sites were processed in response to a user command at a third site. The system was able to automatically start virtual machines, run a selected processing pipeline and write results to a user accessible database, while keeping data locally stored in the hospitals. Individual tasks took approximately 50% longer compared to a locally hosted blade server but the cloud infrastructure reduced the total elapsed time by a factor of 40 using 70 virtual machines in the cloud. We demonstrated that the combination light-path and private cloud is a viable means of building an analysis infrastructure for secure data analysis. The system requires further work in the areas of error handling, load balancing and secure support of multiple users.

  10. Lewis Structures Technology, 1988. Volume 2: Structural Mechanics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Lewis Structures Division performs and disseminates results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The engineering community was familiarized with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, structural mechanics codes, computational methods for dynamics, structural optimization, applications of structural dynamics, and structural mechanics computer codes.

  11. Computational approaches for discovery of common immunomodulators in fungal infections: towards broad-spectrum immunotherapeutic interventions

    PubMed Central

    2013-01-01

    Background Fungi are the second most abundant type of human pathogens. Invasive fungal pathogens are leading causes of life-threatening infections in clinical settings. Toxicity to the host and drug-resistance are two major deleterious issues associated with existing antifungal agents. Increasing a host’s tolerance and/or immunity to fungal pathogens has potential to alleviate these problems. A host’s tolerance may be improved by modulating the immune system such that it responds more rapidly and robustly in all facets, ranging from the recognition of pathogens to their clearance from the host. An understanding of biological processes and genes that are perturbed during attempted fungal exposure, colonization, and/or invasion will help guide the identification of endogenous immunomodulators and/or small molecules that activate host-immune responses such as specialized adjuvants. Results In this study, we present computational techniques and approaches using publicly available transcriptional data sets, to predict immunomodulators that may act against multiple fungal pathogens. Our study analyzed data sets derived from host cells exposed to five fungal pathogens, namely, Alternaria alternata, Aspergillus fumigatus, Candida albicans, Pneumocystis jirovecii, and Stachybotrys chartarum. We observed statistically significant associations between host responses to A. fumigatus and C. albicans. Our analysis identified biological processes that were consistently perturbed by these two pathogens. These processes contained both immune response-inducing genes such as MALT1, SERPINE1, ICAM1, and IL8, and immune response-repressing genes such as DUSP8, DUSP6, and SPRED2. We hypothesize that these genes belong to a pool of common immunomodulators that can potentially be activated or suppressed (agonized or antagonized) in order to render the host more tolerant to infections caused by A. fumigatus and C. albicans. Conclusions Our computational approaches and methodologies described here can now be applied to newly generated or expanded data sets for further elucidation of additional drug targets. Moreover, identified immunomodulators may be used to generate experimentally testable hypotheses that could help in the discovery of broad-spectrum immunotherapeutic interventions. All of our results are available at the following supplementary website: http://bioinformatics.cs.vt.edu/~murali/supplements/2013-kidane-bmc PMID:24099000

  12. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  13. Review of Enabling Technologies to Facilitate Secure Compute Customization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows, to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. The OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS-level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provide the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance.
In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the Native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time & MFlops when running in hypervisor-based environments (VMs) as compared to near native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The Native and Docker-based tests achieved >= ~9Gbits/sec, while the KVM configuration only achieved 2.5Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: - Section 1 introduces the report and clarifies the scope of the proj...
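
    The user-namespace behavior highlighted in the report can be demonstrated in one line from Python, assuming util-linux's unshare and kernel user-namespace support; this is an illustration of the mechanism, not the report's test procedure.

        # Run a command that appears to be root inside a user namespace while
        # remaining an unprivileged user on the host. Requires util-linux `unshare`.
        import subprocess

        # `id` reports uid 0 inside the namespace; the host sees the real UID.
        subprocess.run(["unshare", "--user", "--map-root-user", "id"], check=True)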

  14. Design of temperature monitoring system based on CAN bus

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    2017-10-01

    The remote temperature monitoring system based on the Controller Area Network (CAN) bus is designed to collect multi-node remote temperatures. Using an STM32F103 as the main controller and multiple DS18B20s as temperature sensors, the system achieves master-slave data acquisition and transmission over the CAN bus protocol. Communicating with the host computer through a serial port, the system provides remote temperature storage, historical data review, and temperature waveform display.
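
    On the host side, reading such frames takes a few lines with the python-can library, as sketched below; the channel name, node addressing, and payload layout are assumptions, not the paper's actual frame format.

        # Minimal python-can sketch: read temperature frames from CAN slave nodes.
        # Channel, ID masking and payload layout are illustrative assumptions.
        import can

        bus = can.interface.Bus(channel="can0", bustype="socketcan")
        for msg in bus:                       # iterate received frames
            node_id = msg.arbitration_id & 0x0F
            # Assume each slave packs a 0.1 degC-resolution reading in 2 bytes.
            temp = int.from_bytes(msg.data[:2], "big", signed=True) / 10.0
            print(f"node {node_id}: {temp:.1f} degC")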

  15. Antiviral Innate Immunity through the lens of Systems Biology

    PubMed Central

    Tripathi, Shashank; García-Sastre, Adolfo

    2015-01-01

    Cellular innate immunity poses the first hurdle against invading viruses in their attempt to establish infection. This antiviral response is manifested in the detection of viral components by the host cell, followed by transduction of antiviral signals and transcription and translation of antiviral effectors, and leads to the establishment of an antiviral state. These events occur in a branched and interconnected sequence rather than a linear path. Traditionally, these processes were studied in the context of a single virus and a host component. However, with the advent of rapid and affordable OMICS technologies it has become feasible to address such questions on a global scale. In the discipline of Systems Biology, extensive omics datasets are assimilated using computational tools and mathematical models to acquire deeper understanding of complex biological processes. In this review we have catalogued and discussed the application of Systems Biology approaches in dissecting antiviral innate immune responses. PMID:26657882

  16. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  17. A Community Publication and Dissemination System for Hydrology Education Materials

    NASA Astrophysics Data System (ADS)

    Ruddell, B. L.

    2015-12-01

    Hosted by CUAHSI and the Science Education Resource Center (SERC), federated by the National Science Digital Library (NSDL), and allied with the Water Data Center (WDC), Hydrologic Information System (HIS), and HydroShare projects, a simple cyberinfrastructure has been launched for the publication and dissemination of data and model driven university hydrology education materials. This lightweight system's metadata describes learning content as a data-driven module with defined data inputs and outputs. This structure allows a user to mix and match modules to create sequences of content that teach both hydrology and computer learning outcomes. Importantly, this modular infrastructure allows an instructor to substitute a module based on updated computer methods for one based on outdated computer methods, hopefully solving the problem of rapid obsolescence that has hampered previous community efforts. The prototype system is now available from CUAHSI and SERC, with some example content. The system is designed to catalog, link to, make visible, and make accessible the existing and future contributions of the community; this system does not create content. Submissions from hydrology educators are eagerly solicited, especially for existing content.

  18. Measurements of file transfer rates over dedicated long-haul connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Settlemyer, Bradley W; Imam, Neena

    2016-01-01

    Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rate and low competing traffic are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insights into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (rtt) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file IO throughput on host systems is measured using xddprof. We consider two file system configurations: Lustre over an IB network, and xfs over SSD connected to the PCI bus. Files are transferred using xdd across these connections, and the transfer rates are measured; the results indicate the need to jointly optimize the connection and host file IO parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) peak file transfer rate is lower than peak connection and host IO throughput, in some cases by as much as 50%, (ii) xdd request sizes that achieve peak throughput for host file IO do not necessarily lead to peak file transfer rates, and (iii) parallelism in host IO and TCP transport does not always improve the file transfer rates.
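
    A minimal memory-to-memory throughput probe in the spirit of the iperf measurements above (our sketch; host, port, and buffer sizes are placeholders, and any TCP sink such as `nc -lk 5001` on the far end can absorb the stream):

    ```python
    import socket, time

    def send_probe(host="10.0.0.2", port=5001, seconds=10, chunk=1 << 20):
        buf = b"\0" * chunk
        sent = 0
        with socket.create_connection((host, port)) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 << 20)
            deadline = time.time() + seconds
            while time.time() < deadline:
                s.sendall(buf)   # blocks until the kernel accepts the chunk
                sent += len(buf)
        print(f"offered load: {sent * 8 / seconds / 1e9:.2f} Gbit/s")
    ```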

  19. Exponential rise of dynamical complexity in quantum computing through projections.

    PubMed

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  20. Parallel processor for real-time structural control

    NASA Astrophysics Data System (ADS)

    Tise, Bert L.

    1993-07-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
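
    The controller's per-sample work is the standard discrete state-space update; below is a sketch of one sampling period (our illustration, using NumPy on a host rather than the DSP96002 array):

    ```python
    import numpy as np

    def control_step(x, u, A, B, C, D):
        """One sample: y[k] = C x[k] + D u[k]; x[k+1] = A x[k] + B u[k]."""
        y = C @ x + D @ u       # outputs sent to the D/A module / power amps
        x_next = A @ x + B @ u  # state carried to the next sample period
        return y, x_next
    ```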

  1. Ground standoff mine detection system (GSTAMIDS) engineering, manufacturing, and development (EMD) Block 0

    NASA Astrophysics Data System (ADS)

    Pressley, Jackson R.; Pabst, Donald; Sower, Gary D.; Nee, Larry; Green, Brian; Howard, Peter

    2001-10-01

    The United States Army has contracted EG&G Technical Services to build the GSTAMIDS EMD Block 0. This system autonomously detects and marks buried anti-tank land mines from an unmanned vehicle. It consists of a remotely operated host vehicle, standard teleoperation system (STS) control, mine detection system (MDS) and a control vehicle. Two complete systems are being fabricated, along with a third MDS. The host vehicle for Block 0 is the South African Meerkat that has overpass capability for anti-tank mines, as well as armor anti-mine blast protection and ballistic protection. It is operated via the STS radio link from within the control vehicle. The Main Computer System (MCS), located in the control vehicle, receives sensor data from the MDS via a high speed radio link, processes and fuses the data to make a decision of a mine detection, and sends the information back to the host vehicle for a mark to be placed on the mine location. The MCS also has the capability to interface into the FBCB2 system via SINCGARS radio. The GSTAMIDS operator station and the control vehicle communications system also connect to the MCS. The MDS sensors are mounted on the host vehicle and include Ground Penetrating Radar (GPR), Pulsed Magnetic Induction (PMI) metal detector, and (as an option) long-wave infrared (LWIR). A distributed processing architecture is used so that pre-processing is performed on data at the sensor level before transmission to the MCS, minimizing required throughput. Nine (9) channels each of GPR and PMI are mounted underneath the Meerkat to provide a three-meter detection swath. Two IR cameras are mounted on the upper sides of the Meerkat, providing a field of view of the required swath with overlap underneath the vehicle. Also included on the host vehicle are an Inertial Navigation System (INS), Global Positioning System (GPS), and radio communications for remote control and data transmission. The GSTAMIDS Block 0 is designed as a modular, expandable system with sufficient bandwidth and processing capability for incorporation of additional sensor systems in future Blocks. It is also designed to operate in adverse weather conditions and to be transportable around the world.

  2. : A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
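
    The flavor of such purely discrete-event execution can be shown with a toy event loop that advances virtual ranks in virtual time (a sketch of the general technique only, not the tool's actual API):

    ```python
    import heapq, itertools

    _events = []              # (virtual_time, seq, dest_rank, payload)
    _seq = itertools.count()  # tie-breaker keeps delivery order repeatable

    def send(t_now, dest, payload, latency=1.0):
        heapq.heappush(_events, (t_now + latency, next(_seq), dest, payload))

    def run(handlers):
        while _events:  # deliver strictly in virtual-time order
            t, _, rank, payload = heapq.heappop(_events)
            handlers[rank](t, payload)

    # Two toy virtual ranks exchanging one message:
    handlers = {0: lambda t, m: print(f"t={t}: rank 0 got {m!r}"),
                1: lambda t, m: send(t, 0, "pong")}
    send(0.0, 1, "ping")
    run(handlers)
    ```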

  3. Development of high-availability ATCA/PCIe data acquisition instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Correia, Miguel; Sousa, Jorge; Batista, Antonio J.N.

    2015-07-01

    Latest Fusion energy experiments envision a quasi-continuous operation regime. In consequence, the largest experimental devices, currently in development, specify high-availability (HA) requirements for the whole plant infrastructure. HA features enable the whole facility to perform seamlessly in the case of failure of any of its components, coping with the increasing duration of plasma discharges (steady-state) and assuring safety of equipment, people, environment and investment. IPFN developed a control and data acquisition system, aiming for fast control of advanced Fusion devices, which is thus required to provide such HA features. The system is based on in-house developed Advanced Telecommunication Computing Architecture (ATCA) instrumentation modules - IO blades and data switch blades, establishing a PCIe network on the ATCA shelf's back-plane. The data switch communicates to an external host computer through a PCIe data network. At the hardware management level, the system architecture takes advantage of ATCA native redundancy and hot swap specifications to implement fail-over substitution of IO or data switch blades. A redundant host scheme is also supported by the ATCA/PCIe platform. At the software level, PCIe provides implementation of hot plug services, which translate the hardware changes to the corresponding software/operating system devices. The paper presents how the ATCA and PCIe based system can be setup to perform with the desired degree of HA, thus being suitable for advanced Fusion control and data acquisition systems. (authors)

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    Commodity clusters augmented with application accelerators are evolving as competitive high performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance per price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementation of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, they study the performance of the memory copies and GEMM subroutines that are critical to port computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results have been compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
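
    A rough host-side GEMM throughput probe in the spirit of this study (illustrative only; a GPU run would substitute, e.g., CuPy arrays for the NumPy ones):

    ```python
    import time
    import numpy as np

    def gemm_gflops(n, dtype=np.float64, trials=5):
        a = np.random.rand(n, n).astype(dtype)
        b = np.random.rand(n, n).astype(dtype)
        t0 = time.perf_counter()
        for _ in range(trials):
            a @ b
        dt = (time.perf_counter() - t0) / trials
        return 2 * n ** 3 / dt / 1e9  # a GEMM costs ~2n^3 flops

    for n in (512, 1024, 2048):
        print(n, f"{gemm_gflops(n):.1f} GFLOPS")
    ```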

  5. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI Supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. Also, the CME system hosts anelastic wave propagation codes including Kim Olsen's Finite Difference code and Carnegie Mellon's Hercules Finite Element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  6. Measurement system for nitrous oxide based on amperometric gas sensor

    NASA Astrophysics Data System (ADS)

    Siswoyo, S.; Persaud, K. C.; Phillips, V. R.; Sneath, R.

    2017-03-01

    Nitrous oxide is well known to be an important greenhouse gas, so monitoring and control of its concentration and emission are very important. In this work a nitrous oxide measurement system has been developed, consisting of an amperometric sensor and a lab-made potentiostat capable of measuring currents in the picoampere range. The sensor was constructed using a gold microelectrode as the working electrode surrounded by a silver wire as a quasi-reference electrode, with tetraethyl ammonium perchlorate and dimethylsulphoxide as supporting electrolyte and solvent, respectively. The lab-made potentiostat was built around a transimpedance amplifier capable of picoampere measurements, and incorporated a microcontroller-based data acquisition system controlled by a host personal computer running a dedicated program. The system was capable of detecting N2O concentrations down to 0.07% v/v.
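
    The read-out chain reduces to Ohm's law across the transimpedance feedback resistor followed by a calibration step; a sketch with placeholder constants (the instrument's actual gain and sensitivity are not given above):

    ```python
    R_F_OHMS = 1e9          # transimpedance feedback resistor (assumed 1 GOhm)
    SENS_A_PER_PCT = 2e-11  # sensor sensitivity in A per % v/v (assumed)

    def n2o_percent(adc_volts: float) -> float:
        current_a = adc_volts / R_F_OHMS   # I = V / Rf resolves picoamp currents
        return current_a / SENS_A_PER_PCT  # linear calibration assumed
    ```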

  7. Host-Nation Operations: Soldier Training on Governance (HOST-G) Training Support Package

    DTIC Science & Technology

    2011-07-01


  8. Imagine, Invent, Program, Share: A Library-Hosted Computer Club Promotes 21st Century Skills

    ERIC Educational Resources Information Center

    Myers, Brian

    2009-01-01

    During at least one afternoon each month, Wilmette (Illinois) Public Library (WPL) hosts a local group of computer programmers, designers, and artists, who meet to discuss digital projects and resources, technical challenges, and successful design or programming strategies. WPL's Game Design Club, now in its third year, owes its existence to a…

  9. On the Path to SunShot. Emerging Issues and Challenges in Integrating Solar with the Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Broderick, Robert; Mather, Barry

    2016-05-01

    This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate the minimum DGPV hosting capacity of the contiguous United States, using traditional inverters, at approximately 170 GW without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very-high deployment levels envisioned by SunShot.

  10. Host-Microbiome Interaction and Cancer: Potential Application in Precision Medicine

    PubMed Central

    Contreras, Alejandra V.; Cocom-Chan, Benjamin; Hernandez-Montes, Georgina; Portillo-Bobadilla, Tobias; Resendis-Antonio, Osbaldo

    2016-01-01

    It has been experimentally shown that host-microbial interaction plays a major role in shaping the wellness or disease of the human body. Microorganisms coexisting in human tissues provide a variety of benefits that contribute to proper functional activity in the host through the modulation of fundamental processes such as signal transduction, immunity and metabolism. The unbalance of this microbial profile, or dysbiosis, has been correlated with the genesis and evolution of complex diseases such as cancer. Although this latter disease has been thoroughly studied using different high-throughput (HT) technologies, its heterogeneous nature makes its understanding and proper treatment in patients a remaining challenge in clinical settings. Notably, given the outstanding role of host-microbiome interactions, the ecological interactions with microorganisms have become a new significant aspect in the systems that can contribute to the diagnosis and potential treatment of solid cancers. As a part of expanding precision medicine in the area of cancer research, efforts aimed at effective treatments for various kinds of cancer based on the knowledge of genetics, biology of the disease and host-microbiome interactions might improve the prediction of disease risk and implement potential microbiota-directed therapeutics. In this review, we present the state of the art of sequencing and metabolome technologies, computational methods and schemes in systems biology that have addressed recent breakthroughs of uncovering relationships or associations between microorganisms and cancer. Together, microbiome studies extend the horizon of new personalized treatments against cancer from the perspective of precision medicine through a synergistic strategy integrating clinical knowledge, HT data, bioinformatics, and systems biology. PMID:28018236

  11. TOAD Editor

    NASA Technical Reports Server (NTRS)

    Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.

    1993-01-01

    Transferable Output ASCII Data (TOAD) computer program (LAR-13755), implements format designed to facilitate transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to TOAD format standard called TOAD file. TOAD Editor is interactive software tool for manipulating contents of TOAD files. Commonly used to extract filtered subsets of data for visualization of results of computation. Also offers such user-oriented features as on-line help, clear English error messages, startup file, macroinstructions defined by user, command history, user variables, UNDO features, and full complement of mathematical, statistical, and conversion functions. Companion program, TOAD Gateway (LAR-14484), converts data files from variety of other file formats to that of TOAD. TOAD Editor written in FORTRAN 77.

  12. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  13. The instrumental principles of MST radars and incoherent scatter radars and the configuration of radar system hardware

    NASA Technical Reports Server (NTRS)

    Roettger, Juergen

    1989-01-01

    The principle of pulse modulation used in the case of coherent scatter radars (MST radars) is discussed. Coherent detection and the corresponding system configuration is delineated. Antenna requirements and design are outlined and the phase-coherent transmitter/receiver system is described. Transmit/receive duplexers, transmitters, receivers, and quadrature detectors are explained. The radar controller, integrator, decoder and correlator design as well as the data transfer and the control and monitoring by the host computer are delineated. Typical operation parameters of some well-known radars are summarized.

  14. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Okuda, Yasukazu; Yoshikawa, Ichirou; Sasano, Fumio

    The authors describe the outline and the construction process of the in-house technical information system of Mitsui Petrochemical Industries Ltd., “MITOLIS”. This system was constructed in 1981 and has been improved since then to make better use of in-house technical reports. Bibliographic data and keywords of technical reports of R & D division are stored in the host computer system in Iwakuni and can be retrieved by the company members on the desk-side terminal connected to the local area network (LAN). The number of stored reports reaches 6100 from 1970 to 1987.

  15. Synthetic hardware performance analysis in virtualized cloud environment for healthcare organization.

    PubMed

    Tan, Chee-Heng; Teh, Ying-Wah

    2013-08-01

    The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system constructed from TPC-H queries. Second, a mechanism to construct stress-testing scenarios in the host is envisaged, using a single TPC-H query or a combination of them, so that the resource threshold point can be verified, i.e., whether the virtual machine is still capable of serving critical transactions at this constraining juncture. This threshold point uses server run-queue size as its input parameter and serves two purposes: first, it provides the boundary threshold to the control system, so that periodic learning on the synthetic data sets for performance evaluation does not reach the host's constraint level; second, when the host undergoes a hardware change, stress-testing scenarios are simulated in the host by loading it up to this resource threshold level, for subsequent response-time verification of real and critical transactions.
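
    A sketch of the run-queue threshold check described above, assuming a Linux guest where the fourth field of /proc/loadavg reports running/total scheduling entities (the threshold value itself is a placeholder to be calibrated per host):

    ```python
    import time

    RUNQ_LIMIT = 8  # calibrated boundary threshold; placeholder value

    def runq_size() -> int:
        running = open("/proc/loadavg").read().split()[3]  # e.g. "3/712"
        return int(running.split("/")[0])

    while runq_size() < RUNQ_LIMIT:
        # ...issue the next TPC-H stress query here...
        time.sleep(1)
    print("threshold reached: verify critical-transaction response times now")
    ```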

  16. The engine design engine. A clustered computer platform for the aerodynamic inverse design and analysis of a full engine

    NASA Technical Reports Server (NTRS)

    Sanz, J.; Pischel, K.; Hubler, D.

    1992-01-01

    An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is perfectly handled by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatile nature, which permits use of the available computational resources that are experiencing less load at a given point in time.

  17. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for the computing systems of running HEP experiments as well as for sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  18. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
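
    The dependency-tracking idea generalizes to a small command DAG executed in topological order; a toy sketch follows (libWater itself is an OpenCL/C library, so these names are purely illustrative):

    ```python
    from collections import defaultdict, deque

    class CommandDAG:
        def __init__(self):
            self.nodes = set()
            self.deps = defaultdict(set)   # command -> prerequisites
            self.rdeps = defaultdict(set)  # prerequisite -> dependents

        def add(self, cmd, after=()):
            self.nodes.add(cmd)
            for p in after:
                self.nodes.add(p)
                self.deps[cmd].add(p)
                self.rdeps[p].add(cmd)

        def run(self, execute):
            remaining = {c: len(self.deps[c]) for c in self.nodes}
            ready = deque(c for c in self.nodes if remaining[c] == 0)
            while ready:
                cmd = ready.popleft()
                execute(cmd)  # e.g. enqueue a kernel or a host-device copy
                for d in self.rdeps[cmd]:
                    remaining[d] -= 1
                    if remaining[d] == 0:
                        ready.append(d)

    dag = CommandDAG()
    dag.add("copy_in"); dag.add("kernel", after=["copy_in"])
    dag.add("copy_out", after=["kernel"])
    dag.run(print)  # copy_in, kernel, copy_out
    ```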

  19. A uniform approach for programming distributed heterogeneous computing systems.

    PubMed

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  20. Bringing Federated Identity to Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teheran, Jeny

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  1. Exploiting GPUs in Virtual Machine for BioCloud

    PubMed Central

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computation performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance, exploiting effectively unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, which cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system instead exploits the pass-through mode of the PCI Express (PCI-E) channel: by letting each VM access the underlying GPUs directly, applications achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM on demand, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in the cloud. PMID:23710465
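
    The hot plug-in/out multiplexing can be pictured with Xen's `xl` tool, which supports attaching and detaching PCI devices at runtime; a hedged sketch (the device address and domain name are placeholders, and the GPU must already be assigned to the pciback driver):

    ```python
    import subprocess

    GPU_BDF = "0000:83:00.0"  # example PCI address of the GPU

    def attach(dom):
        subprocess.run(["xl", "pci-attach", dom, GPU_BDF], check=True)

    def detach(dom):
        subprocess.run(["xl", "pci-detach", dom, GPU_BDF], check=True)

    attach("bio-vm1")    # VM now owns the GPU in pass-through mode
    # ...run the biological GPU application inside bio-vm1...
    detach("bio-vm1")    # release the device so another VM can take it
    ```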

  2. Exploiting GPUs in virtual machine for BioCloud.

    PubMed

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computation performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance, exploiting effectively unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, which cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system instead exploits the pass-through mode of the PCI Express (PCI-E) channel: by letting each VM access the underlying GPUs directly, applications achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM on demand, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in the cloud.

  3. A Critical Protection Level Derived from Dengue Infection Mathematical Model Considering Asymptomatic and Symptomatic Classes

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Supriatna, A. K.; Soewono, E.

    2013-04-01

    In this paper we formulate a model of dengue fever transmission that considers the presence of asymptomatic and symptomatic compartments. The model takes the form of a system of differential equations representing a host-vector SIR (Susceptible-Infective-Recovered) disease transmission. It is assumed that both host and vector populations are constant. It is also assumed that reinfection of recovered hosts is possible due to waning immunity in the human body. We analyze the model to determine the qualitative behavior of its solutions and use the concept of the effective basic reproduction number (ℜp) as a control criterion for disease transmission. The effect of protection from mosquito biting (e.g., by using insect repellent) is also considered. We compute the long-term ratio of the asymptomatic and symptomatic classes and show a condition under which the iceberg phenomenon can appear.
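
    For orientation, one standard host-vector SIR formulation with waning immunity (rate θ) is shown below; this is a generic sketch with our own notation (birth/death rates μ, transmission rates β, recovery rate γ), not the paper's exact system, which additionally splits infective hosts into asymptomatic and symptomatic classes:

    ```latex
    \begin{aligned}
    \dot S_h &= \mu_h N_h + \theta R_h - \beta_h \tfrac{S_h I_v}{N_h} - \mu_h S_h, &
    \dot S_v &= \mu_v N_v - \beta_v \tfrac{S_v I_h}{N_h} - \mu_v S_v,\\
    \dot I_h &= \beta_h \tfrac{S_h I_v}{N_h} - (\gamma + \mu_h) I_h, &
    \dot I_v &= \beta_v \tfrac{S_v I_h}{N_h} - \mu_v I_v,\\
    \dot R_h &= \gamma I_h - (\theta + \mu_h) R_h. &&
    \end{aligned}
    ```

    A common way to model biting protection in such systems is to scale the transmission rates β_h and β_v by a factor (1 - p) for protection coverage p, which is how the effective reproduction number ℜp can be driven below the basic one.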

  4. Inter-kingdom prediction certainty evaluation of protein subcellular localization tools: microbial pathogenesis approach for deciphering host microbe interaction.

    PubMed

    Khan, Abdul Arif; Khan, Zakir; Kalam, Mohd Abul; Khan, Azmat Ali

    2018-01-01

    Microbial pathogenesis involves several aspects of host-pathogen interactions, including microbial proteins targeting host subcellular compartments and subsequent effects on host physiology. Such studies are supported by experimental data, but computational detection of bacterial protein localization with eukaryotic subcellular-targeting prediction tools has also recently come into practice. We evaluated the inter-kingdom prediction certainty of these tools. Bacterial proteins experimentally known to target host subcellular compartments were run through eukaryotic subcellular-targeting prediction tools, and the prediction certainty was assessed. The results indicate that these tools alone are not sufficient for inter-kingdom protein-targeting prediction. Correct prediction of a pathogen's protein subcellular targeting depends on several factors, including the presence of a localization signal, transmembrane domains, and molecular weight, among others, in addition to the approach used for subcellular-targeting prediction. Detecting protein targeting to the endomembrane system is comparatively difficult, as proteins in this location are channelized to different compartments. In addition, the high specificity of the training data sets also lowers inter-kingdom prediction accuracy. The current data can help to suggest a strategy for correct prediction of bacterial proteins' subcellular localization in the host cell.

  5. Meeting report from the first meetings of the Computational Modeling in Biology Network (COMBINE)

    PubMed Central

    Le Novère, Nicolas; Hucka, Michael; Anwar, Nadia; Bader, Gary D; Demir, Emek; Moodie, Stuart; Sorokin, Anatoly

    2011-01-01

    The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of the various community standards and formats in computational systems biology and related fields. This report summarizes the activities pursued at the first annual COMBINE meeting, held in Edinburgh on October 6-9, 2010, and the first HARMONY hackathon, held in New York on April 18-22, 2011. The first of those meetings hosted 81 attendees. Discussions covered the official COMBINE standards (BioPAX, SBGN and SBML) as well as emerging efforts and interoperability between different formats. The second meeting, oriented towards software developers, welcomed 59 participants and witnessed many technical discussions, development of improved standards support in community software systems, and conversion between the standards. Both meetings were resounding successes and showed that the field is now mature enough to develop representation formats and related standards in a coordinated manner. PMID:22180826

  6. Meeting report from the first meetings of the Computational Modeling in Biology Network (COMBINE).

    PubMed

    Le Novère, Nicolas; Hucka, Michael; Anwar, Nadia; Bader, Gary D; Demir, Emek; Moodie, Stuart; Sorokin, Anatoly

    2011-11-30

    The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of the various community standards and formats in computational systems biology and related fields. This report summarizes the activities pursued at the first annual COMBINE meeting, held in Edinburgh on October 6-9, 2010, and the first HARMONY hackathon, held in New York on April 18-22, 2011. The first of those meetings hosted 81 attendees. Discussions covered the official COMBINE standards (BioPAX, SBGN and SBML) as well as emerging efforts and interoperability between different formats. The second meeting, oriented towards software developers, welcomed 59 participants and witnessed many technical discussions, development of improved standards support in community software systems, and conversion between the standards. Both meetings were resounding successes and showed that the field is now mature enough to develop representation formats and related standards in a coordinated manner.

  7. Integration of a sensor based multiple robot environment for space applications: The Johnson Space Center Teleoperator Branch Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Hwang, James; Campbell, Perry; Ross, Mike; Price, Charles R.; Barron, Don

    1989-01-01

    An integrated operating environment was designed to incorporate three general purpose robots, sensors, and end effectors, including Force/Torque Sensors, Tactile Array sensors, Tactile force sensors, and Force-sensing grippers. The design and implementation of: (1) the teleoperation of a general purpose PUMA robot; (2) an integrated sensor hardware/software system; (3) the force-sensing gripper control; (4) the host computer system for dual Robotic Research arms; and (5) the Ethernet integration are described.

  8. Advances in Mechanisms Supporting Data Collection on Future Force Networks: Product Manager C4ISR On-the-Move

    DTIC Science & Technology

    2008-12-01

    (Figure residue: Layer 3 data-capture diagram labels - NetPoll, ncap, tget, monitor session, radio system, switch, router, user app interface box, GPS.) This model applies to most fixed... developed a lightweight, custom implementation, termed ncap. As described in Section 3.1, the Ground Truth System provides a linkage between host computer CPU time and GPS time, and ncap leverages this to perform highly precise (<1 msec) time tagging of offered and received packets. Such...

  9. Availability of software services for a hospital information system.

    PubMed

    Sakamoto, N

    1998-03-01

    Hospital information systems (HISs) are becoming more important and covering more parts of daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide the necessary services for hospital operations 24 hours a day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of necessary information for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase the availability of software services in these systems, it is not enough to just use system structures that are highly reliable in existing host-centred systems. Four main components support the availability of software services: network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if a part of the system stops functioning. The network system should be double-protected in strata, using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information that does not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patient information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case the new version has problems. If a hospital information system is designed and developed with these points in mind, its availability of software services should increase greatly.
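
    As one concrete illustration of these ideas, a client that fails over between redundant database servers (a minimal sketch; host names and port are placeholders):

    ```python
    import socket

    SERVERS = [("his-db1", 5432), ("his-db2", 5432)]  # primary, then standby

    def connect_any(timeout=3.0):
        for host, port in SERVERS:
            try:
                return socket.create_connection((host, port), timeout)
            except OSError:
                continue  # server or path down; try the next replica
        raise RuntimeError("no HIS database server reachable")
    ```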

  10. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections. These solutions will be tested using (1) 100 Gbps connection(s) between Oak Ridge National Laboratory (ORNL) and Argonne National Laboratory (ANL) with storage systems supported by Lustre and GPFS file systems with an asymmetric connection to University of Memphis (UM); (2) ORNL testbed with multicore and multibus hosts, switches with OpenFlow capabilities, and network emulators; and (3) 100 Gbps connections from ESnet and their Openflow testbed, and other experimental connections. This proposal brings together the expertise and facilities of the two national laboratories, ORNL and ANL, and UM.
It also represents a collaboration between DOE and the Department of Defense (DOD) projects at ORNL by sharing technical expertise and personnel costs, and leveraging the existing DOD Extreme Scale Systems Center (ESSC) facilities at ORNL.

  11. Calculation of Host-Guest Binding Affinities Using a Quantum-Mechanical Energy Model.

    PubMed

    Muddana, Hari S; Gilson, Michael K

    2012-06-12

    The prediction of protein-ligand binding affinities is of central interest in computer-aided drug discovery, but it is still difficult to achieve a high degree of accuracy. Recent studies suggesting that available force fields may be a key source of error motivate the present study, which reports the first mining minima (M2) binding affinity calculations based on a quantum mechanical energy model, rather than an empirical force field. We apply a semi-empirical quantum-mechanical energy function, PM6-DH+, coupled with the COSMO solvation model, to 29 host-guest systems with a wide range of measured binding affinities. After correction for a systematic error, which appears to derive from the treatment of polar solvation, the computed absolute binding affinities agree well with experimental measurements, with a mean error of 1.6 kcal/mol and a correlation coefficient of 0.91. These calculations also delineate the contributions of various energy components, including solute energy, configurational entropy, and solvation free energy, to the binding free energies of these host-guest complexes. Comparison with our previous calculations, which used empirical force fields, points to significant differences in both the energetic and entropic components of the binding free energy. The present study demonstrates the successful combination of a quantum mechanical Hamiltonian with the M2 affinity method.
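
    The reported statistics can be reproduced in outline as follows (a sketch of the kind of systematic-offset correction described, not the authors' exact procedure):

    ```python
    import numpy as np

    def corrected_stats(calc, expt):
        calc, expt = np.asarray(calc), np.asarray(expt)
        offset = (calc - expt).mean()      # systematic error, e.g. polar solvation
        resid = calc - offset - expt
        mae = np.abs(resid).mean()         # mean unsigned error after correction
        r = np.corrcoef(calc, expt)[0, 1]  # correlation coefficient
        return offset, mae, r
    ```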

  12. A De Novo-Assembly Based Data Analysis Pipeline for Plant Obligate Parasite Metatranscriptomic Studies.

    PubMed

    Guo, Li; Allen, Kelly S; Deiulio, Greg; Zhang, Yong; Madeiras, Angela M; Wick, Robert L; Ma, Li-Jun

    2016-01-01

    Current and emerging plant diseases caused by obligate parasitic microbes such as rusts, downy mildews, and powdery mildews threaten worldwide crop production and food safety. These obligate parasites are typically unculturable in the laboratory, posing technical challenges to characterize them at the genetic and genomic level. Here we have developed a data analysis pipeline integrating several bioinformatic software programs. This pipeline facilitates rapid gene discovery and expression analysis of a plant host and its obligate parasite simultaneously by next generation sequencing of mixed host and pathogen RNA (i.e., metatranscriptomics). We applied this pipeline to metatranscriptomic sequencing data of sweet basil (Ocimum basilicum) and its obligate downy mildew parasite Peronospora belbahrii, both lacking a sequenced genome. Even with a single data point, we were able to identify both candidate host defense genes and pathogen virulence genes that are highly expressed during infection. This demonstrates the power of this pipeline for identifying genes important in host-pathogen interactions without prior genomic information for either the plant host or the obligate biotrophic pathogen. The simplicity of this pipeline makes it accessible to researchers with limited computational skills and applicable to metatranscriptomic data analysis in a wide range of plant-obligate-parasite systems.

  13. The Transportable Applications Environment - An interactive design-to-production development system

    NASA Technical Reports Server (NTRS)

    Perkins, Dorothy C.; Howell, David R.; Szczur, Martha R.

    1988-01-01

    An account is given of the design philosophy and architecture of the Transportable Applications Environment (TAE), an executive program binding a system of applications programs into a single, easily operable whole. TAE simplifies the job of a system developer by furnishing a stable framework for system-building; it also integrates system activities, and cooperates with the host operating system in order to perform such functions as task-scheduling and I/O. The initial TAE human/computer interface supported command and menu interfaces, data displays, parameter-prompting, error-reporting, and online help. Recent extensions support graphics workstations with a window-based, modeless user interface.

  14. Modeling of Wildlife-Associated Zoonoses: Applications and Caveats

    PubMed Central

    Lewis, Bryan L.; Marathe, Madhav; Eubank, Stephen; Blackburn, Jason K.

    2012-01-01

    Wildlife species are identified as an important source of emerging zoonotic disease. Accordingly, public health programs have attempted to expand in scope to include a greater focus on wildlife and its role in zoonotic disease outbreaks. Zoonotic disease transmission dynamics involving wildlife are complex and nonlinear, presenting a number of challenges. First, empirical characterization of wildlife host species and pathogen systems is often lacking, and insight into one system may have little application to another involving the same host species and pathogen. Pathogen transmission characterization is difficult due to the changing nature of population size and density associated with wildlife hosts. Infectious disease itself may influence wildlife population demographics through compensatory responses that may evolve, such as decreased age to reproduction. Furthermore, wildlife reservoir dynamics can be complex, involving various host species and populations that may vary in their contribution to pathogen transmission and persistence over space and time. Mathematical models provide an important tool for engaging these complex systems, and there is an urgent need for increased computational focus on the coupled dynamics that underlie pathogen spillover at the human–wildlife interface. Often, however, scientists conducting empirical studies on emerging zoonotic disease do not have the necessary skill base to choose, develop, and apply models to evaluate these complex systems. How do modeling frameworks differ, and what considerations are important when applying modeling tools to the study of zoonotic disease? Using zoonotic disease examples, we provide an overview of several common approaches and general considerations important in the modeling of wildlife-associated zoonoses. PMID:23199265
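
    For readers unfamiliar with the compartmental models discussed above, a minimal SIR model with host demography (births and deaths), using illustrative rates rather than values from any real wildlife system, looks like this:

        import numpy as np
        from scipy.integrate import odeint

        def wildlife_sir(y, t, beta, gamma, b, mu):
            """SIR dynamics with host births (b) and deaths (mu): a minimal
            example of the compartmental models discussed above."""
            S, I, R = y
            N = S + I + R
            dS = b * N - beta * S * I / N - mu * S
            dI = beta * S * I / N - (gamma + mu) * I
            dR = gamma * I - mu * R
            return [dS, dI, dR]

        t = np.linspace(0, 365, 1000)            # one year, daily-scale grid
        sol = odeint(wildlife_sir, [990, 10, 0], t,
                     args=(0.3, 0.1, 0.002, 0.002))  # illustrative rates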

  15. Host computer software specifications for a zero-g payload manhandling simulator

    NASA Technical Reports Server (NTRS)

    Wilson, S. W.

    1986-01-01

    The HP PASCAL source code was developed for the Mission Planning and Analysis Division (MPAD) of NASA/JSC and takes the place of detailed flow charts defining the host computer software specifications for MANHANDLE, a digital/graphical simulator that can be used to analyze the dynamics of on-orbit (zero-g) payload manhandling operations. Input and output data for representative test cases are included.

  16. Three-dimensional laser microvision.

    PubMed

    Shimotahira, H; Iizuka, K; Chu, S C; Wah, C; Costen, F; Yoshikuni, Y

    2001-04-10

    A three-dimensional (3-D) optical imaging system offering high resolution in all three dimensions, requiring minimum manipulation, and capable of real-time operation is presented. The system derives its capabilities from use of the superstructure grating laser source in the implementation of a laser step-frequency radar for depth information acquisition. A synthetic aperture radar technique was also used to further enhance its lateral resolution as well as extend the depth of focus. High-speed operation was made possible by a dual computer system consisting of a host and a remote microcomputer supported by a dual-channel Small Computer System Interface parallel data transfer system. The system is capable of operating near real time. The 3-D display of a tunneling diode, a microwave integrated circuit, and a see-through image taken by the system operating near real time are included. The depth resolution is 40 µm; the lateral resolution with the synthetic aperture approach is a fraction of a micrometer, and without it approximately 10 µm.
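
    The depth-recovery principle of step-frequency radar can be sketched in a few lines: sample the complex reflectance at N equally spaced frequencies and take an inverse FFT, giving a depth profile with range resolution c/(2·N·Δf). The parameters and synthetic target below are illustrative, not the instrument's.

        import numpy as np

        c = 3e8                       # speed of light, m/s
        N, df = 128, 50e9             # number of steps, frequency step (Hz)
        depth_res = c / (2 * N * df)  # resolution = c / (2 * bandwidth) ~ 23 um

        # Synthetic point target at 0.2 mm depth (illustrative only).
        z0 = 0.2e-3
        f = np.arange(N) * df
        signal = np.exp(-1j * 4 * np.pi * f * z0 / c)  # round-trip phase
        profile = np.abs(np.fft.ifft(signal))          # depth profile
        print(f"resolution = {depth_res*1e6:.1f} um, "
              f"peak bin = {profile.argmax()}")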

  17. Joint Force Quarterly. Number 5, Summer 1994

    DTIC Science & Technology

    1994-07-01

    terms of a matrix and have set it up to achieve things that matrix organizations facilitate. Matrices compel interaction across organizations; they...provide more joint, synergistic solutions to military problems. One primary result of this interaction between the assessment process and JROC is the...the Contingency Tactical Air Control Automated Planning System (CTAPS) are both single-host computer systems that do not support interactive data

  18. A Tralinet Guide to the Internet

    DTIC Science & Technology

    1994-02-01

    Internet surfers often get stuck. The Internet is new, evolving, constantly changing, designed by computer system people rather than information retrieval...Librarians are already getting questions daily about Internet-accessible resources: providing information for their customers is about to get far...InterNIC. These are the best tools we have for looking up information about various Internet hosts. The DDN Network Information Center (NIC) is best for

  19. Operational Based Vision Assessment

    DTIC Science & Technology

    2014-02-01

    formulated or supplied the drawings, specifications, or other data does not license the holder or any other person or corporation or convey any...expensive than other developers’ software. The sources for the GPUs (Nvidia) and the host computer (Concurrent's iHawk) were identified. The...boundaries, which is a distracting artifact when performing visual tests. The problem has been isolated by the OBVA team to the Nvidia GPUs. The OBVA system

  20. Monitoring Thermal Conditions in Footwear

    NASA Astrophysics Data System (ADS)

    Silva-Moreno, Alejandra. A.; Lopez Vela, Martín; Alcalá Ochoa, Noe

    2006-09-01

    Thermal conditions inside the footwear were evaluated on a volunteer subject. We have designed and constructed an electronic system that monitors the temperature and humidity of the foot inside the shoe. The data are stored in a battery-powered device for later uploading to a host computer for analysis. The apparatus can potentially be used to provide feedback to patients who are prone to skin breakdown.

  1. Artificial Intelligence Applications to Testability.

    DTIC Science & Technology

    1984-10-01

    general software assistant; examining testability utilization of it should wait a few years until the software assistant is a well-defined product ...ago. It provides a single host which satisfies the needs of developers, product developers, and end users. As shown in table 5.10-2, it also provides...follows a trend towards more user-oriented design approaches to interactive computer systems. The implicit goal in this trend is the

  2. Computer modelling of BaY2F8: defect structure, rare earth doping and optical behaviour

    NASA Astrophysics Data System (ADS)

    Amaral, J. B.; Couto Dos Santos, M. A.; Valerio, M. E. G.; Jackson, R. A.

    2005-10-01

    BaY2F8, when doped with rare earth elements, is a material of interest in the development of solid-state laser systems, especially for use in the infrared region. This paper presents the application of a computational technique, which combines atomistic modelling and crystal field calculations, in a study of rare earth doping of the material. Atomistic modelling is used to calculate the intrinsic defect structure and the symmetry and detailed geometry of the dopant ion-host lattice system, and this information is then used to calculate the crystal field parameters, which are an important indicator in assessing the optical behaviour of the dopant-crystal system. Energy levels are then calculated for the Dy3+-substituted material, and comparisons with the results of recent experimental work are made.

  3. Parallel processor for real-time structural control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tise, B.L.

    1992-01-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, a 240 Mbyte/s synchronous backplane bus, a low-skew clock distribution circuit, a VME connection to the host computer, a parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure, and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
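
    The per-sample workload such a controller evaluates is a discrete state-space update, y = Cx + Du followed by x ← Ax + Bu; a minimal sketch with illustrative matrices (not a model of any actual structure):

        import numpy as np

        # Illustrative 2-state, 1-input, 1-output discrete-time system.
        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        D = np.array([[0.0]])

        def step(x, u):
            y = C @ x + D @ u          # actuator command from sensor input u
            x_next = A @ x + B @ u     # state propagation to the next sample
            return x_next, y

        x = np.zeros((2, 1))
        for u_sample in [1.0, 0.5, 0.0]:            # toy sensor samples
            x, y = step(x, np.array([[u_sample]]))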

  4. Perspectives on the role of mobility, behavior, and time scales in the spread of diseases.

    PubMed

    Castillo-Chavez, Carlos; Bichara, Derdei; Morin, Benjamin R

    2016-12-20

    The dynamics, control, and evolution of communicable and vector-borne diseases are intimately connected to the joint dynamics of epidemiological, behavioral, and mobility processes that operate across multiple spatial, temporal, and organizational scales. The identification of a theoretical explanatory framework that accounts for the pattern regularity exhibited by a large number of host-parasite systems, including those sustained by host-vector epidemiological dynamics, is but one of the challenges facing the coevolving fields of computational, evolutionary, and theoretical epidemiology. Host-parasite epidemiological patterns, including epidemic outbreaks and endemic recurrent dynamics, are characteristic to well-identified regions of the world; the result of processes and constraints such as strain competition, host and vector mobility, and population structure operating over multiple scales in response to recurrent disturbances (like El Niño) and climatological and environmental perturbations over thousands of years. It is therefore important to identify and quantify the processes responsible for observed epidemiological macroscopic patterns: the result of individual interactions in changing social and ecological landscapes. In this perspective, we touch on some of the issues calling for the identification of an encompassing theoretical explanatory framework by identifying some of the limitations of existing theory, in the context of particular epidemiological systems. Fostering the reenergizing of research that aims at disentangling the role of epidemiological and socioeconomic forces on disease dynamics, better understood as complex adaptive systems, is a key aim of this perspective.

  5. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology

    PubMed Central

    Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based brain-computer interfaces (BCIs) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time predictions of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication, and distributed computing technologies offer promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implemented a pilot system employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers, and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and with the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804
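
    A minimal sketch of the edge-to-fog streaming pattern described above; the server address and newline-delimited JSON packet format are assumptions for illustration, not the system's actual transport.

        import json, random, socket, time

        # Hypothetical fog-server endpoint; the real system defines its own.
        FOG = ("fog.example.org", 9000)

        def stream_samples(n=10):
            """Send n packets of 8-channel pseudo-EEG samples to the fog server."""
            with socket.create_connection(FOG) as s:
                for _ in range(n):
                    packet = {"t": time.time(),
                              "ch": [random.gauss(0, 1) for _ in range(8)]}
                    s.sendall((json.dumps(packet) + "\n").encode())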

  6. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology.

    PubMed

    Zao, John K; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based brain-computer interfaces (BCIs) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time predictions of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication, and distributed computing technologies offer promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implemented a pilot system employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers, and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and with the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system.

  7. Design and real-time control of a robotic system for fracture manipulation.

    PubMed

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-08-01

    This paper presents the design, development, and control of a new robotic system for fracture manipulation. The objective is to improve the precision, ergonomics, and safety of the traditional surgical procedure to treat joint fractures. The achievements toward this goal are reported here, including the design, the real-time control architecture, and the evaluation of the new robotic manipulator system. The robotic manipulator is a 6-DOF parallel robot with struts developed as linear actuators. The high-level controller implements a host-target structure composed of a host computer (PC), a real-time controller, and an FPGA. A graphical user interface was designed allowing the surgeon to comfortably automate and monitor the robotic system. The real-time controller guarantees the determinism of the control algorithms, adding an extra level of safety for the robotic automation. The system's positioning accuracy and repeatability have been demonstrated, showing a maximum positioning RMSE of 1.18 ± 1.14 mm (translations) and 1.85 ± 1.54° (rotations).

  8. A web portal for hydrodynamical, cosmological simulations

    NASA Astrophysics Data System (ADS)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster, and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborated queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on the chosen object, operating directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available for users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  9. A comprehensive map of the influenza A virus replication cycle

    PubMed Central

    2013-01-01

    Background Influenza is a common infectious disease caused by influenza viruses. Annual epidemics cause severe illnesses, deaths, and economic loss around the world. To better defend against influenza viral infection, it is essential to understand its mechanisms and associated host responses. Many studies have been conducted to elucidate these mechanisms, however, the overall picture remains incompletely understood. A systematic understanding of influenza viral infection in host cells is needed to facilitate the identification of influential host response mechanisms and potential drug targets. Description We constructed a comprehensive map of the influenza A virus (‘IAV’) life cycle (‘FluMap’) by undertaking a literature-based, manual curation approach. Based on information obtained from publicly available pathway databases, updated with literature-based information and input from expert virologists and immunologists, FluMap is currently composed of 960 factors (i.e., proteins, mRNAs etc.) and 456 reactions, and is annotated with ~500 papers and curation comments. In addition to detailing the type of molecular interactions, isolate/strain specific data are also available. The FluMap was built with the pathway editor CellDesigner in standard SBML (Systems Biology Markup Language) format and visualized as an SBGN (Systems Biology Graphical Notation) diagram. It is also available as a web service (online map) based on the iPathways+ system to enable community discussion by influenza researchers. We also demonstrate computational network analyses to identify targets using the FluMap. Conclusion The FluMap is a comprehensive pathway map that can serve as a graphically presented knowledge-base and as a platform to analyze functional interactions between IAV and host factors. Publicly available webtools will allow continuous updating to ensure the most reliable representation of the host-virus interaction network. The FluMap is available at http://www.influenza-x.org/flumap/. PMID:24088197
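
    Because the FluMap is distributed in SBML, it can be loaded programmatically; a minimal sketch using the python-libsbml bindings (the local filename is hypothetical):

        import libsbml  # pip install python-libsbml

        # Load an SBML pathway map (e.g., a downloaded FluMap file) and
        # tally its components; "flumap.xml" is a placeholder filename.
        doc = libsbml.readSBML("flumap.xml")
        model = doc.getModel()
        if model is not None:
            print(model.getNumSpecies(), "species,",
                  model.getNumReactions(), "reactions")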

  10. Remote canopy hemispherical image collection system

    NASA Astrophysics Data System (ADS)

    Wan, Xuefen; Liu, Bingyu; Yang, Yi; Han, Fang; Cui, Jian

    2016-11-01

    Canopies are a major part of plant photosynthesis and have distinct architectural elements such as tree crowns, whorls, branches, and shoots. By measuring canopy structural parameters, the solar radiation interception, photosynthetic effects, and the spatio-temporal distribution of solar radiation under the canopy can be evaluated. Among canopy structure parameters, Leaf Area Index (LAI) is the key one. Leaf area index is a crucial variable in agronomic and environmental studies because of its importance for estimating the amount of radiation intercepted by the canopy and the crop water requirements. LAI can be derived with high accuracy and effectiveness from hemispherical images obtained below the canopy. However, the existing hemispherical-image technique for canopy-LAI measurement is based on a digital SLR camera with a fisheye lens: users must collect the hemispherical images manually, the SLR camera is not suited to long-term outdoor measurement, and its high cost limits wider adoption. In recent years, with the development of embedded systems and image processing technology, low-cost remote acquisition of canopy hemispherical images has become possible. In this paper, we present a remote hemispherical canopy image acquisition system with an in-field/host configuration. The in-field node, based on an embedded platform with a low-cost image sensor and fisheye lens, is designed to acquire hemispherical images of the plant canopy remotely at low cost. Solar radiation and temperature/humidity data, which are important for assessing image validity, are also collected to eliminate invalid hemispherical images and to support node maintenance. The host computer interacts with the in-field node over a 3G network. Hemispherical-image calibration and super-resolution are applied on the host computer to improve image quality. Results show that the system performs low-cost remote canopy image acquisition for LAI measurement effectively, making it a promising candidate technology for low-cost remote collection of canopy hemispherical images.
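
    In its simplest form, the LAI computation from a hemispherical image reduces to the Beer-Lambert relation LAI = -ln(P)/k applied to the gap fraction P; a toy sketch follows (real analyses integrate over zenith rings of the fisheye image, and the threshold and extinction coefficient below are illustrative):

        import numpy as np

        def lai_from_gap_fraction(gray_image, threshold=128, k=0.5):
            """Estimate LAI from an upward-looking canopy image: P is the
            fraction of bright (sky) pixels, and LAI = -ln(P) / k with
            extinction coefficient k. Simplified single-angle version."""
            sky = (gray_image >= threshold).sum()
            p_gap = sky / gray_image.size
            return -np.log(p_gap) / k

        img = np.random.randint(0, 256, (480, 640))   # placeholder image
        print(f"LAI ~ {lai_from_gap_fraction(img):.2f}")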

  11. ChimericSeq: An open-source, user-friendly interface for analyzing NGS data to identify and characterize viral-host chimeric sequences.

    PubMed

    Shieh, Fwu-Shan; Jongeneel, Patrick; Steffen, Jamin D; Lin, Selena; Jain, Surbhi; Song, Wei; Su, Ying-Hsiu

    2017-01-01

    Identification of viral integration sites has been important in understanding the pathogenesis and progression of diseases associated with particular viral infections. The advent of next-generation sequencing (NGS) has enabled researchers to understand the impact that viral integration has on the host, such as tumorigenesis. Current computational methods to analyze NGS data of virus-host junction sites have been limited in terms of their accessibility to a broad user base. In this study, we developed a software application (named ChimericSeq) that is the first program of its kind to offer a graphical user interface and compatibility with both Windows and Mac operating systems, optimized for effectively identifying and annotating virus-host chimeric reads within NGS data. In addition, ChimericSeq's pipeline implements custom filtering to remove artifacts and detect reads with quantitative analytical reporting to provide functional significance to discovered integration sites. The improved accessibility of ChimericSeq through a GUI in both Windows and Mac has the potential to expand NGS analytical support to a broader spectrum of the scientific community.
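
    Conceptually, a virus-host chimeric read is one whose prefix aligns to the viral reference and whose suffix aligns to the host; the toy check below uses substring search instead of real alignment, so it only illustrates the idea, not ChimericSeq's method.

        def is_chimeric(read, virus_ref, host_ref, min_part=20):
            """Return the junction position if some prefix of the read occurs
            in the viral reference and the remaining suffix in the host
            reference; None otherwise. Thresholds are illustrative."""
            for i in range(min_part, len(read) - min_part + 1):
                if read[:i] in virus_ref and read[i:] in host_ref:
                    return i
            return None

        virus = "AAAACCCCGGGG"                 # toy reference sequences
        host = "TTTTGGGGAAAA"
        read = virus[:6] + host[:6]            # synthetic junction read
        print(is_chimeric(read, virus, host, min_part=4))  # -> 6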

  12. ChimericSeq: An open-source, user-friendly interface for analyzing NGS data to identify and characterize viral-host chimeric sequences

    PubMed Central

    Shieh, Fwu-Shan; Jongeneel, Patrick; Steffen, Jamin D.; Lin, Selena; Jain, Surbhi; Song, Wei

    2017-01-01

    Identification of viral integration sites has been important in understanding the pathogenesis and progression of diseases associated with particular viral infections. The advent of next-generation sequencing (NGS) has enabled researchers to understand the impact that viral integration has on the host, such as tumorigenesis. Current computational methods to analyze NGS data of virus-host junction sites have been limited in terms of their accessibility to a broad user base. In this study, we developed a software application (named ChimericSeq) that is the first program of its kind to offer a graphical user interface and compatibility with both Windows and Mac operating systems, optimized for effectively identifying and annotating virus-host chimeric reads within NGS data. In addition, ChimericSeq’s pipeline implements custom filtering to remove artifacts and detect reads with quantitative analytical reporting to provide functional significance to discovered integration sites. The improved accessibility of ChimericSeq through a GUI in both Windows and Mac has the potential to expand NGS analytical support to a broader spectrum of the scientific community. PMID:28829778

  13. Application of digital interferogram evaluation techniques to the measurement of 3-D flow fields

    NASA Technical Reports Server (NTRS)

    Becker, Friedhelm; Yu, Yung H.

    1987-01-01

    A system for digitally evaluating interferograms, based on an image processing system connected to a host computer, was implemented. The system supports one- and two-dimensional interferogram evaluations. Interferograms are digitized, enhanced, and then segmented. The fringe coordinates are extracted, and the fringes are represented as polygonal data structures. Fringe numbering and fringe interpolation modules are implemented. The system supports editing and interactive features, as well as graphic visualization. An application of the system to the evaluation of double exposure interferograms from the transonic flow field around a helicopter blade and the reconstruction of the three dimensional flow field is given.

  14. Horizontal Directional Drilling-Length Detection Technology While Drilling Based on Bi-Electro-Magnetic Sensing.

    PubMed

    Wang, Yudan; Wen, Guojun; Chen, Han

    2017-04-27

    The drilling length is an important parameter in the process of horizontal directional drilling (HDD) exploration and recovery, but there has been a lack of accurate, automatically obtained statistics regarding this parameter. Herein, a technique for real-time HDD length detection and a management system based on the electromagnetic detection method, with a microprocessor and two magnetoresistive sensors and employing the software LabVIEW, are proposed. The basic principle is to detect the change in the magnetic-field strength near a current coil while the drill stem and drill-stem joint successively pass through the current coil forward or backward. The detection system consists of a hardware subsystem and a software subsystem. The hardware subsystem employs a single-chip microprocessor as the main controller. A current coil is installed in front of the clamping unit, and two magnetoresistive sensors are installed on the sides of the coil, symmetrically and perpendicular to the direction of movement of the drill pipe. Their responses are used to judge whether the drill-stem joint is passing through the clamping unit; the order of their responses is then used to judge the movement direction. The software subsystem is composed of visual software running on the host computer and software running on the slave microprocessor. The host-computer software processes, displays, and saves the drilling-length data, whereas the slave-microprocessor software operates the hardware system. A combined test demonstrated the feasibility of the entire drilling-length detection system.
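
    The counting-and-direction logic described above can be sketched compactly: each joint passage fires the two flanking sensors in an order that reveals the direction of travel. The event representation below is an assumption for illustration:

        def count_joints(events):
            """events: iterable of (first, second) sensor-response pairs, one
            per joint passage, where sensors 'A' and 'B' flank the coil.
            Returns the net drilled length in units of drill stems."""
            length_in_stems = 0
            for first, second in events:
                if (first, second) == ('A', 'B'):
                    length_in_stems += 1   # drilling forward: add one stem
                elif (first, second) == ('B', 'A'):
                    length_in_stems -= 1   # pulling back: subtract one stem
            return length_in_stems

        print(count_joints([('A', 'B'), ('A', 'B'), ('B', 'A')]))  # -> 1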

  15. Horizontal Directional Drilling-Length Detection Technology While Drilling Based on Bi-Electro-Magnetic Sensing

    PubMed Central

    Wang, Yudan; Wen, Guojun; Chen, Han

    2017-01-01

    The drilling length is an important parameter in the process of horizontal directional drilling (HDD) exploration and recovery, but there has been a lack of accurate, automatically obtained statistics regarding this parameter. Herein, a technique for real-time HDD length detection and a management system based on the electromagnetic detection method, with a microprocessor and two magnetoresistive sensors and employing the software LabVIEW, are proposed. The basic principle is to detect the change in the magnetic-field strength near a current coil while the drill stem and drill-stem joint successively pass through the current coil forward or backward. The detection system consists of a hardware subsystem and a software subsystem. The hardware subsystem employs a single-chip microprocessor as the main controller. A current coil is installed in front of the clamping unit, and two magnetoresistive sensors are installed on the sides of the coil, symmetrically and perpendicular to the direction of movement of the drill pipe. Their responses are used to judge whether the drill-stem joint is passing through the clamping unit; the order of their responses is then used to judge the movement direction. The software subsystem is composed of visual software running on the host computer and software running on the slave microprocessor. The host-computer software processes, displays, and saves the drilling-length data, whereas the slave-microprocessor software operates the hardware system. A combined test demonstrated the feasibility of the entire drilling-length detection system. PMID:28448445

  16. An Efficient Two-Tier Causal Protocol for Mobile Distributed Systems

    PubMed Central

    Dominguez, Eduardo Lopez; Pomares Hernandez, Saul E.; Gomez, Gustavo Rodriguez; Medina, Maria Auxilio

    2013-01-01

    Causal ordering is a useful tool for mobile distributed systems (MDS) to reduce the non-determinism induced by three main aspects: host mobility, asynchronous execution, and unpredictable communication delays. Several causal protocols for MDS exist. Most of them, in order to reduce the overhead and the computational cost over wireless channels and mobile hosts (MH), ensure causal ordering at, and according to the causal view of, the Base Stations. Nevertheless, these protocols introduce certain disadvantages, such as unnecessary inhibition in the delivery of messages. In this paper, we present an efficient causal protocol for groupware that satisfies the MDS's constraints, avoiding unnecessary inhibitions and ensuring causal delivery based on the view of the MHs. One interesting aspect of our protocol is that it dynamically adapts the causal information attached to each message based on the number of messages with an immediate dependency relation, which is not directly proportional to the number of MHs. PMID:23585828
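
    For contrast with the paper's lighter-weight scheme (which attaches only immediate-dependency information rather than a full vector), the classical vector-clock causal delivery condition can be written as:

        def deliverable(msg_vc, sender, local_vc):
            """A message from `sender` carrying vector clock `msg_vc` is
            causally deliverable at a process with clock `local_vc` iff it
            is the next message from that sender and every causally
            preceding message has already been delivered locally."""
            if msg_vc[sender] != local_vc[sender] + 1:
                return False
            return all(msg_vc[k] <= local_vc[k]
                       for k in range(len(msg_vc)) if k != sender)

        print(deliverable([1, 1, 0], 0, [0, 1, 0]))  # True: deps delivered
        print(deliverable([1, 2, 0], 0, [0, 0, 0]))  # False: missing msg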

  17. Microprocessor control and networking for the amps breadboard

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1987-01-01

    Future space missions will require more sophisticated power systems, implying higher costs and more extensive crew and ground support involvement. To decrease this human involvement, as well as to protect and most efficiently utilize this important resource, NASA has undertaken major efforts to promote progress in the design and development of autonomously managed power systems. Two areas being actively pursued are autonomous power system (APS) breadboards and knowledge-based expert system (KBES) applications. The former are viewed as a requirement for the timely development of the latter. Not only will they serve as final testbeds for the various KBES applications, but will play a major role in the knowledge engineering phase of their development. The current power system breadboard designs are of a distributed microprocessor nature. The distributed nature, plus the need to connect various external computer capabilities (i.e., conventional host computers and symbolic processors), places major emphasis on effective networking. The communications and networking technologies for the first power system breadboard/test facility are described.

  18. Robotic laboratory for distance education

    NASA Astrophysics Data System (ADS)

    Luciano, Sarah C.; Kost, Alan R.

    2016-09-01

    This project involves the construction of a remote-controlled laboratory experiment that can be accessed by online students. The project addresses the need to provide students taking online courses with an in-class laboratory experience. The chosen task for the remote user is an optical engineering experiment, specifically aligning a spatial filter. We instrumented the physical laboratory setup in Tucson, AZ at the University of Arizona. The hardware in the spatial filter experiment is augmented by motors and cameras to allow the user to remotely control the hardware. The user interacts with software on their computer, which communicates with a server via an Internet connection to the host computer in the Optics Laboratory at the University of Arizona. Our final overall system is comprised of several subsystems: the optical experiment setup, which is a spatial filter experiment; the mechanical subsystem, which interfaces the motors with the micrometers to move the optical hardware; the electrical subsystem, which provides the electrical communications from the remote computer to the host computer to the hardware; and finally the software subsystem, which is the means by which messages are communicated throughout the system. The goal of the project is to convey as much of an in-lab experience as possible by allowing the user to directly manipulate hardware and receive visual feedback in real time. Thus, the remote user is able to learn important concepts from this particular experiment and to connect theory to the physical world by actually seeing the outcome of a procedure. The latter is a learning experience that is often lost with distance learning and is one that this project hopes to provide.

  19. The Ethics of Cloud Computing.

    PubMed

    de Bruin, Boudewijn; Floridi, Luciano

    2017-02-01

    Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacentres (e.g., Amazon). It considers the cloud services providers leasing 'space in the cloud' from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private 'clouders' using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, hospitals etc. storing client data in the cloud) will have to follow rather more stringent regulations.

  20. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  1. Shipping Science Worldwide with Open Source Containers

    NASA Astrophysics Data System (ADS)

    Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.

    2014-12-01

    Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly available, performant hosting for scientific applications.

  2. Testing electronic structure methods for describing intermolecular H...H interactions in supramolecular chemistry.

    PubMed

    Casadesús, Ricard; Moreno, Miquel; González-Lafont, Angels; Lluch, José M; Repasky, Matthew P

    2004-01-15

    In this article a wide variety of computational approaches (molecular mechanics force fields, semiempirical formalisms, and hybrid methods, namely ONIOM calculations) have been used to calculate the energy and geometry of the supramolecular system 2-(2'-hydroxyphenyl)-4-methyloxazole (HPMO) encapsulated in beta-cyclodextrin (beta-CD). The main objective of the present study has been to examine the performance of these computational methods when describing the short-range H...H intermolecular interactions between guest (HPMO) and host (beta-CD) molecules. The analyzed molecular mechanics methods do not produce unphysical short H...H contacts, but their applicability to the study of supramolecular systems is rather limited. For the semiempirical methods, MNDO is found to generate more reliable geometries than AM1, PM3, and the two recently developed schemes PDDG/MNDO and PDDG/PM3. MNDO results give only one slightly short H...H distance, whereas the NDDO formalisms with modifications of the Core Repulsion Function (CRF) via Gaussians exhibit a large number of short to very short and unphysical H...H intermolecular distances. In contrast, the PM5 method, which is the successor to PM3, gives very promising results. Our ONIOM calculations indicate that the unphysical optimized geometries from PM3 are retained when this semiempirical method is used as the low-level layer in a QM:QM formulation. On the other hand, ab initio methods involving good enough basis sets, at least for the high-level layer in a hybrid ONIOM calculation, behave well, but they may be too expensive in practice for most supramolecular chemistry applications. Finally, the performance of the evaluated computational methods has also been tested by evaluating the energetic difference between the two most stable conformations of the host(beta-CD)-guest(HPMO) system. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 25: 99-105, 2004

  3. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
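
    The trade-off measured in this experiment can be captured with a back-of-envelope model: total time = work/n plus a communication overhead that grows with n. All numbers below are illustrative:

        def total_time(n_hosts, work_s=3600.0, comm_per_host_s=30.0):
            """Model execution time: perfectly divisible compute plus a
            per-host communication cost. Parameters are made up."""
            return work_s / n_hosts + comm_per_host_s * n_hosts

        # The optimum lies where the two terms balance (here, 11 hosts);
        # beyond it, added communication time outweighs the compute savings.
        best = min(range(1, 65), key=total_time)
        print(best, total_time(best))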

  4. Digital PIV (DPIV) Software Analysis System

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A software package was developed to provide a Digital PIV (DPIV) capability for NASA LaRC. The system provides an automated image capture, test correlation, and autocorrelation analysis capability for the Kodak Megaplus 1.4 digital camera system for PIV measurements. The package includes three separate programs that, when used together with the PIV data validation algorithm, constitutes a complete DPIV analysis capability. The programs are run on an IBM PC/AT host computer running either Microsoft Windows 3.1 or Windows 95 using a 'quickwin' format that allows simple user interface and output capabilities to the windows environment.
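
    The correlation step at the heart of such PIV analysis is commonly implemented as an FFT-based cross-correlation whose peak gives the particle displacement between two interrogation windows; a generic sketch (a standard technique, not the NASA LaRC package's actual code):

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Integer displacement of win_b relative to win_a via FFT
            cross-correlation, with indices wrapped to signed shifts."""
            fa = np.fft.fft2(win_a - win_a.mean())
            fb = np.fft.fft2(win_b - win_b.mean())
            corr = np.fft.ifft2(fb * np.conj(fa)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            ny, nx = corr.shape
            if dy > ny // 2:
                dy -= ny
            if dx > nx // 2:
                dx -= nx
            return dy, dx

        a = np.zeros((32, 32)); a[10, 12] = 1.0        # toy particle image
        b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)  # shift by (+3, -2)
        print(piv_displacement(a, b))                   # -> (3, -2)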

  5. Evaluation of a Computerized Clinical Information System (Micromedex).

    PubMed Central

    Lundsgaarde, H. P.; Moreshead, G. E.

    1991-01-01

    This paper summarizes data collected as part of a project designed to identify and assess the technical and organizational problems associated with the implementation and evaluation of a Computerized Clinical Information System (CCIS), Micromedex, in three U.S. Department of Veterans Affairs Medical Centers (VAMCs). The study began in 1987 as a national effort to implement decision support technologies in the Veterans Administration Decentralized Hospital Computer Program (DHCP). The specific objectives of this project were to (1) examine one particular decision support technology, (2) identify the technical and organizational barriers to the implementation of a CCIS in the VA host environment, (3) assess the possible benefits of this system to VA clinicians in terms of therapeutic decision making, and (4) develop new methods for identifying the clinical utility of a computer program designed to provide clinicians with a new information tool. The project was conducted intermittently over a three-year period at three VA medical centers chosen as implementation and evaluation test sites for Micromedex. Findings from the Kansas City Medical Center in Missouri are presented to illustrate some of the technical problems associated with the implementation of a commercial database program in the DHCP host environment, the organizational factors influencing clinical use of the system, and the methods used to evaluate its use. Data from 4581 provider encounters with the CCIS are summarized. Usage statistics are presented to illustrate the methodological possibilities for assessing the "benefits and burdens" of a computerized information system by using an automated collection of user demographics and program audit trails that allow evaluators to monitor user interactions with different segments of the database. PMID:1807583

  6. Evaluation of a Computerized Clinical Information System (Micromedex).

    PubMed

    Lundsgaarde, H P; Moreshead, G E

    1991-01-01

    This paper summarizes data collected as part of a project designed to identify and assess the technical and organizational problems associated with the implementation and evaluation of a Computerized Clinical Information System (CCIS), Micromedex, in three U.S. Department of Veterans Affairs Medical Centers (VAMCs). The study began in 1987 as a national effort to implement decision support technologies in the Veterans Administration Decentralized Hospital Computer Program (DHCP). The specific objectives of this project were to (1) examine one particular decision support technology, (2) identify the technical and organizational barriers to the implementation of a CCIS in the VA host environment, (3) assess the possible benefits of this system to VA clinicians in terms of therapeutic decision making, and (4) develop new methods for identifying the clinical utility of a computer program designed to provide clinicians with a new information tool. The project was conducted intermittently over a three-year period at three VA medical centers chosen as implementation and evaluation test sites for Micromedex. Findings from the Kansas City Medical Center in Missouri are presented to illustrate some of the technical problems associated with the implementation of a commercial database program in the DHCP host environment, the organizational factors influencing clinical use of the system, and the methods used to evaluate its use. Data from 4581 provider encounters with the CCIS are summarized. Usage statistics are presented to illustrate the methodological possibilities for assessing the "benefits and burdens" of a computerized information system by using an automated collection of user demographics and program audit trails that allow evaluators to monitor user interactions with different segments of the database.

  7. Agent Based Fault Tolerance for the Mobile Environment

    NASA Astrophysics Data System (ADS)

    Park, Taesoon

    This paper presents a fault-tolerance scheme based on mobile agents for reliable mobile computing systems. The mobility of the agent makes it suitable for tracing mobile hosts, and the intelligence of the agent makes it efficient at supporting fault-tolerance services. This paper presents two approaches to implementing the mobile-agent-based fault-tolerance service; their performance is evaluated and compared with that of other fault-tolerant schemes.

  8. Securing BGP Using External Security Monitors

    DTIC Science & Technology

    2006-01-01

    forms. In Proc. SOSP, Brighton, UK, Oct. 2005. [19] A. Seshadri, A. Perrig, L. van Doorn, and P. Khosla. SWATT: Software-based Attestation for...Williams, E. G. Sirer, and F. B. Schneider. Nexus: A New Operating System for Trustworthy Computing (extended abstract). In Proc. SOSP, Brighton, UK...as a distributed database of untrustworthy hosts or messages. An ESM that detects invalid behavior issues a certificate describing the behavior or

  9. Computer Center Reference Manual. Volume 1

    DTIC Science & Technology

    1990-09-30

    ...with connection to INTERNET (host tables allow transfer to some other networks). OASYS - the DTRC Office Automation System. The following can be reached...and buffers, two windows, and some word processing commands. Advanced editing commands are entered through the use of a command line. EVE has its own

  10. Fusing MRI and Mechanical Imaging for Improved Prostate Cancer Diagnosis

    DTIC Science & Technology

    2016-10-01

    Western Reserve University. - PI is participating in the weekly Prostate Imaging Reporting and Data System meeting in the Department of Radiology, Case Medical...Literary Guild (LG) seminar, Case Western Reserve University, hosted by PI's mentor. - PI is participating in the majority of Imaging Hour meetings...Ernest Feleppa4, Dean Barratt2, Lee Ponsky5, Anant Madabhushi1, 1 Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve

  11. Distributing french seismologic data through the RESIF green IT datacentre

    NASA Astrophysics Data System (ADS)

    Volcke, P.; Gueguen, P.; Pequegnat, C.; Le Tanou, J.; Enderle, G.; Berthoud, F.

    2012-12-01

    RESIF is a nationwide French project aimed at building an excellent-quality system to observe and understand the inner Earth. The ultimate goal is to create a network throughout mainland France comprising 750 seismometers and geodetic measurement instruments, 250 of which will be mobile to enable the observation network to be focused on specific investigation subjects and geographic locations. This project includes the implementation of a data distribution centre hosting seismologic and geodetic data. This datacentre is operated by the Université Joseph Fourier, Grenoble, France. In the context of building the necessary computing infrastructure, the Université Joseph Fourier became the first French university to earn the status of "Participant" in the European Union "Code of Conduct for Data Centres". The University commits to energy reporting and implementing best practices for energy efficiency, in a cost-effective manner, without hampering mission-critical functions. In this context, data currently hosted at the RESIF datacentre include data from the French broadband permanent network, the strong-motion permanent network, and the mobile seismological network. These data are freely accessible as real-time streams and continuous validated data, along with instrumental metadata, delivered using widely known formats. Future developments include tight integration with local supercomputing resources and setting up modern distribution systems such as web services.

  12. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations

    NASA Astrophysics Data System (ADS)

    Monroe, Jacob I.; Shirts, Michael R.

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.

  13. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations.

    PubMed

    Monroe, Jacob I; Shirts, Michael R

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.

  14. Modelling operations and security of cloud systems using Z-notation and Chinese Wall security policy

    NASA Astrophysics Data System (ADS)

    Basu, Srijita; Sengupta, Anirban; Mazumdar, Chandan

    2016-11-01

    Enterprises are increasingly using cloud computing for hosting their applications. Availability of fast Internet and cheap bandwidth are causing greater number of people to use cloud-based services. This has the advantage of lower cost and minimum maintenance. However, ensuring security of user data and proper management of cloud infrastructure remain major areas of concern. Existing techniques are either too complex, or fail to properly represent the actual cloud scenario. This article presents a formal cloud model using the constructs of Z-notation. Principles of the Chinese Wall security policy have been applied to design secure cloud-specific operations. The proposed methodology will enable users to safely host their services, as well as process sensitive data, on cloud.
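
    The Chinese Wall rule applied above can be stated compactly: a subject may access an object unless it has already accessed an object of a competing company in the same conflict-of-interest class. A minimal sketch with hypothetical companies:

        def may_access(history, obj_company, obj_coi_class, coi_classes):
            """history: set of companies the subject has already accessed;
            coi_classes: mapping from company to its conflict-of-interest
            class. Deny access that would cross the wall."""
            for company in history:
                if (coi_classes.get(company) == obj_coi_class
                        and company != obj_company):
                    return False   # competing company in the same class
            return True

        coi = {"BankA": "banking", "BankB": "banking", "OilCo": "energy"}
        h = {"BankA"}                                   # accessed so far
        print(may_access(h, "OilCo", "energy", coi))    # True
        print(may_access(h, "BankB", "banking", coi))   # False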

  15. The Fate of Exoplanetary Systems and the Implications for White Dwarf Pollution

    NASA Astrophysics Data System (ADS)

    Veras, D.; Mustill, A. J.; Bonsor, A.; Wyatt, M. C.

    2013-09-01

    Mounting discoveries of extrasolar planets orbiting post-main-sequence stars motivate studies to understand the fate of these planets. Also, polluted white dwarfs (WDs) likely represent dynamically active systems at late times. Here, we perform full-lifetime simulations of one-, two- and three-planet systems from the endpoint of formation to several Gyr into the WD phase of the host star. We outline the physical and computational processes which must be considered for post-main-sequence planetary studies, and characterize the challenges in explaining the robust observational signatures of infrared excess in white dwarfs by appealing to late-stage planetary systems.

  16. Surgery applications of virtual reality

    NASA Technical Reports Server (NTRS)

    Rosen, Joseph

    1994-01-01

    Virtual reality is a computer-generated technology which allows information to be displayed in a simulated, but lifelike, environment. In this simulated 'world', users can move and interact as if they were actually a part of that world. This new technology will be useful in many different fields, including the field of surgery. Virtual reality systems can be used to teach surgical anatomy, diagnose surgical problems, plan operations, simulate and perform surgical procedures (telesurgery), and predict the outcomes of surgery. The authors of this paper describe the basic components of a virtual reality surgical system: the virtual world, the virtual tools, the anatomical model, the software platform, the host computer, the interface, and the head-coupled display. They also review the progress toward using virtual reality for surgical training, planning, telesurgery, and predicting outcomes. Finally, the authors present a training system being developed for the practice of new procedures in abdominal surgery.

  17. Ada Compiler Validation Summary Report. Certificate Number: 890118W1. 10017 Harris Corporation, Computer Systems Division Harris Ada, Version 5.0 Harris HCX-9 Host and Harris NH-3800 Target

    DTIC Science & Technology

    1989-01-17

    Ada Compiler Validation Summary Report. Compiler name: Harris Ada, Version 5.0. Certificate number: 890118W1.10017. Harris Corporation, Computer Systems Division; Harris HCX-9 host and Harris NH-3800 target. Validation period: 17 January 1989 to 17 January 1990. United States Department of Defense, Washington DC 20301-3081.

  18. Model Atmospheres and Spectral Irradiance Library of the Exoplanet Host Stars Observed in the MUSCLES Survey

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey

    2017-08-01

    We propose to compute state-of-the-art model atmospheres (photospheres, chromospheres, transition regions and coronae) of the 4 K-type and 7 M-type exoplanet host stars observed by HST in the MUSCLES Treasury Survey, the nearest host star Proxima Centauri, and TRAPPIST-1. Our semi-empirical models will fit the unique high-resolution panchromatic (X-ray to infrared) spectra of these stars in the MAST High-Level Science Products archive, consisting of COS and STIS UV spectra and near-simultaneous Chandra, XMM-Newton, and ground-based observations. We will compute models with the fully tested SSRPM computer software incorporating 52 atoms and ions in full non-LTE (435,986 spectral lines) and the 20 most-abundant diatomic molecules (about 2 million lines). This code has successfully fit the panchromatic spectrum of the M1.5 V exoplanet host star GJ 832 (Fontenla et al. 2016), the first M star with such a detailed model, as well as solar spectra. Our models will (1) predict the unobservable extreme-UV spectra, (2) determine radiative energy losses and balancing heating rates throughout these atmospheres, (3) compute a stellar irradiance library needed to describe the radiation environment of potentially habitable exoplanets to be studied by TESS and JWST, and (4) in the long post-HST era when UV observations will not be possible, provide a powerful tool for predicting the panchromatic spectra of host stars that have only limited spectral coverage, in particular no UV spectra. The stellar models and spectral irradiance library will be placed promptly in MAST.

  19. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10074 International Business Machines Corporation, IBM Development System for the Ada Language MVS Ada Compiler, Version 2.1.1 IBM 4381 (Host and Target)

    DTIC Science & Technology

    1989-04-20

    Ada Compiler Validation Summary Report. Certificate number: 890420W1.10074. Report number: AVF-VSR-261.0789. International Business Machines Corporation, IBM Development System for the Ada Language, MVS Ada Compiler, Version 2.1.1; IBM 4381 host and target. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team.

  20. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    PubMed

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable AMOEBA potential. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.
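
    The closing consistency check can be pictured with the relative-binding thermodynamic cycle: ddG_bind(A->B) computed as dG_complex(A->B) - dG_solvent(A->B) should match the difference of the two absolute binding free energies. All numbers in the sketch below are invented placeholders, not the paper's results.

      # Relative-binding thermodynamic cycle, cycle-closure check (toy values).
      dG_bind_A = -6.2        # absolute binding free energy of ligand A, kcal/mol (invented)
      dG_bind_B = -8.0        # absolute binding free energy of ligand B (invented)
      dG_complex_AtoB = -3.1  # alchemical A->B transformation in the host site (invented)
      dG_solvent_AtoB = -1.3  # alchemical A->B transformation in water (invented)

      ddG_relative = dG_complex_AtoB - dG_solvent_AtoB
      ddG_absolute = dG_bind_B - dG_bind_A
      # Agreement of the two routes is the consistency test the abstract describes.
      print(f"relative route: {ddG_relative:+.2f}  absolute route: {ddG_absolute:+.2f}")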

  1. Analyses of Brucella Pathogenesis, Host Immunity, and Vaccine Targets using Systems Biology and Bioinformatics

    PubMed Central

    He, Yongqun

    2011-01-01

    Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using cutting-edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omics (including genomics, transcriptomics, and proteomics) and bioinformatics technologies for the analysis of Brucella pathogenesis, host immune responses, and vaccine targets. Based on more than 30 sequenced Brucella genomes, comparative genomics is able to identify gene variations among Brucella strains that help to explain host specificity and virulence differences among Brucella species. Diverse transcriptomics and proteomics gene expression studies have been conducted to analyze gene expression profiles of wild-type Brucella strains and mutants under different laboratory conditions. High throughput Omics analyses of host responses to infections with virulent or attenuated Brucella strains have been focused on responses by mouse and cattle macrophages, bovine trophoblastic cells, mouse and boar splenocytes, and ram buffy coat. Differential serum responses in humans and rams to Brucella infections have been analyzed using high throughput serum antibody screening technology. The Vaxign reverse vaccinology approach has been used to predict many Brucella vaccine targets. More than 180 Brucella virulence factors and their gene interaction networks have been identified using advanced literature mining methods. The recent development of community-based Vaccine Ontology and Brucellosis Ontology provides an efficient way for Brucella data integration, exchange, and computer-assisted automated reasoning. PMID:22919594

  2. Analyses of Brucella pathogenesis, host immunity, and vaccine targets using systems biology and bioinformatics.

    PubMed

    He, Yongqun

    2012-01-01

    Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using cutting-edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omics (including genomics, transcriptomics, and proteomics) and bioinformatics technologies for the analysis of Brucella pathogenesis, host immune responses, and vaccine targets. Based on more than 30 sequenced Brucella genomes, comparative genomics is able to identify gene variations among Brucella strains that help to explain host specificity and virulence differences among Brucella species. Diverse transcriptomics and proteomics gene expression studies have been conducted to analyze gene expression profiles of wild-type Brucella strains and mutants under different laboratory conditions. High throughput Omics analyses of host responses to infections with virulent or attenuated Brucella strains have been focused on responses by mouse and cattle macrophages, bovine trophoblastic cells, mouse and boar splenocytes, and ram buffy coat. Differential serum responses in humans and rams to Brucella infections have been analyzed using high throughput serum antibody screening technology. The Vaxign reverse vaccinology approach has been used to predict many Brucella vaccine targets. More than 180 Brucella virulence factors and their gene interaction networks have been identified using advanced literature mining methods. The recent development of community-based Vaccine Ontology and Brucellosis Ontology provides an efficient way for Brucella data integration, exchange, and computer-assisted automated reasoning.

  3. In silico identification of molecular mimics involved in the pathogenesis of Clostridium botulinum ATCC 3502 strain.

    PubMed

    Bhardwaj, Tulika; Haque, Shafiul; Somvanshi, Pallavi

    2018-05-12

    Bacterial pathogens invade and disrupt the host defense system by means of protein sequences that are structurally similar at both the global and local levels. The sharing of homologous sequences between the host and the pathogenic bacteria mediates infection and defines the concept of molecular mimicry. In this study, various computational approaches were employed to elucidate the pathogenicity of Clostridium botulinum ATCC 3502 at the genome-wide level. The genome-wide study revealed that the pathogen mimics the host (Homo sapiens) and unraveled the complex pathogenic pathway by which it causes infection. The comparative 'omics' approaches helped in the selective screening of 'molecular mimicry' candidates, followed by qualitative assessment of their virulence potential and functional enrichment. Overall, this study provides deep insight into the emergence and surveillance of infections caused by multidrug-resistant C. botulinum ATCC 3502. This is the first report to identify similarities between the C. botulinum ATCC 3502 proteome and human host proteins; it identified 20 potential mimicry candidates, which were further characterized qualitatively by sub-cellular localization prediction and functional annotation. This study will provide a variety of avenues for future studies related to infectious agents, host-pathogen interactions and the evolution of the pathogenesis process. Copyright © 2018. Published by Elsevier Ltd.
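
    A crude way to picture a mimicry screen (not the authors' comparative-omics pipeline) is a shared-window scan that flags pathogen/host protein pairs sharing a short identical stretch of residues; the toy sequences below are invented.

      def windows(seq, k):
          # All length-k substrings (k-mers) of a protein sequence.
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def shares_mimic_window(pathogen_seq, host_seq, k=8):
          # True if the two sequences share any identical k-mer window.
          return bool(windows(pathogen_seq, k) & windows(host_seq, k))

      host = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy host protein
      bug  = "GGGSHFSRQLEERLAAAAPQRS"               # toy pathogen protein
      print(shares_mimic_window(bug, host))         # True: shared SHFSRQLE... stretch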

  4. Mapping Protein Interactions between Dengue Virus and Its Human and Insect Hosts

    PubMed Central

    Doolittle, Janet M.; Gomez, Shawn M.

    2011-01-01

    Background Dengue fever is an increasingly significant arthropod-borne viral disease, with at least 50 million cases per year worldwide. As with other viral pathogens, dengue virus is dependent on its host to perform the bulk of functions necessary for viral survival and replication. To be successful, dengue must manipulate host cell biological processes towards its own ends, while avoiding elimination by the immune system. Protein-protein interactions between the virus and its host are one avenue through which dengue can connect and exploit these host cellular pathways and processes. Methodology/Principal Findings We implemented a computational approach to predict interactions between Dengue virus (DENV) and both of its hosts, Homo sapiens and the insect vector Aedes aegypti. Our approach is based on structural similarity between DENV and host proteins and incorporates knowledge from the literature to further support a subset of the predictions. We predict over 4,000 interactions between DENV and humans, as well as 176 interactions between DENV and A. aegypti. Additional filtering based on shared Gene Ontology cellular component annotation reduced the number of predictions to approximately 2,000 for humans and 18 for A. aegypti. Of 19 experimentally validated interactions between DENV and humans extracted from the literature, this method was able to predict nearly half (9). Additional predictions suggest specific interactions between virus and host proteins relevant to interferon signaling, transcriptional regulation, stress, and the unfolded protein response. Conclusions/Significance Dengue virus manipulates cellular processes to its advantage through specific interactions with the host's protein interaction network. The interaction networks presented here provide a set of hypotheses for further experimental investigation into the DENV life cycle as well as potential therapeutic targets. PMID:21358811
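
    The Gene Ontology cellular-component filter mentioned above can be pictured as a set intersection: a predicted virus-host interaction is kept only if the two proteins share at least one cellular-component term. The annotations and protein names below are invented stand-ins, not the paper's predictions.

      go_cc = {  # hypothetical GO cellular-component annotations
          "DENV_NS5": {"GO:0005634"},                   # nucleus
          "human_STAT2": {"GO:0005634", "GO:0005737"},  # nucleus, cytoplasm
          "human_ACTB": {"GO:0005856"},                 # cytoskeleton
      }
      predicted = [("DENV_NS5", "human_STAT2"), ("DENV_NS5", "human_ACTB")]
      # Keep a pair only when the annotation sets overlap.
      filtered = [(v, h) for v, h in predicted if go_cc[v] & go_cc[h]]
      print(filtered)  # [('DENV_NS5', 'human_STAT2')]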

  5. Simulation of a master-slave event set processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comfort, J.C.

    1984-03-01

    Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
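
    The event set whose manipulation the paper offloads is typically a priority queue keyed on event time; the sketch below shows the baseline single-processor (host-only) operations using a binary heap, with invented event records.

      import heapq

      # Event set as a binary heap of (event_time, event_kind) pairs.
      event_set = []
      heapq.heappush(event_set, (12.5, "arrival"))
      heapq.heappush(event_set, (3.1, "departure"))
      heapq.heappush(event_set, (7.9, "arrival"))

      while event_set:
          t, kind = heapq.heappop(event_set)  # next-event extraction, O(log n)
          print(f"t={t:5.1f}  {kind}")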

  6. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides the services of Infrastructure, Platform and Software. Virtualization and computing utility are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. The technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: Full System Virtualization and Para Virtualization. In Full Virtualization, the whole architecture of the hardware is duplicated virtually. No modifications are required in the Guest OS, as the OS deals with the VM hypervisor directly. In Para Virtualization, modification of the OS is required for it to run in parallel with other OSes. For the Guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as migrating applications transparently, server consolidation, online maintenance of the OS, and security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.

  7. Development of digital interactive processing system for NOAA satellites AVHRR data

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Murthy, N. N.

    The paper discusses a digital image processing system for NOAA/AVHRR data, including land applications, configured around a VAX 11/750 host computer supported by an FPS 100 array processor, a Comtal graphics display, and HP plotting devices. The system software comprises a relational database with query and editing facilities; a man-machine interface using form, menu, and prompt inputs, including validation of user entries for data type and range; and preprocessing software for data calibration, Sun-angle correction, geometric corrections for Earth-curvature effects and Earth-rotation offsets, and Earth location of AVHRR imagery. The implemented image enhancement techniques, such as grey-level stretching, histogram equalization, and convolution, are discussed. Software implementation details for computing the vegetative index and normalized vegetative index using NOAA/AVHRR channels 1 and 2 data are presented together with outputs; the scientific background for such computations and the obtainability of similar indices from Landsat/MSS data are also included. The paper concludes by specifying further planned software developments and the progress envisaged in the field of vegetation index studies.
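
    The vegetation index computations mentioned above reduce to simple band arithmetic on AVHRR channel 1 (visible red) and channel 2 (near-infrared); the sketch below uses synthetic reflectance values for illustration.

      import numpy as np

      ch1 = np.array([[0.08, 0.10], [0.12, 0.09]])  # red reflectance (toy values)
      ch2 = np.array([[0.40, 0.35], [0.20, 0.45]])  # near-IR reflectance (toy values)

      vi = ch2 - ch1                     # simple vegetative index
      ndvi = (ch2 - ch1) / (ch2 + ch1)   # normalized difference vegetation index
      print(np.round(ndvi, 3))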

  8. Computing conformational free energy differences in explicit solvent: An efficient thermodynamic cycle using an auxiliary potential and a free energy functional constructed from the end points.

    PubMed

    Harris, Robert C; Deng, Nanjie; Levy, Ronald M; Ishizuka, Ryosuke; Matubayasi, Nobuyuki

    2017-06-05

    Many biomolecules undergo conformational changes associated with allostery or ligand binding. Observing these changes in computer simulations is difficult if their timescales are long. These calculations can be accelerated by observing the transition on an auxiliary free energy surface with a simpler Hamiltonian and connecting this free energy surface to the target free energy surface with free energy calculations. Here, we show that the free energy legs of the cycle can be replaced with energy representation (ER) density functional approximations. We compute: (1) The conformational free energy changes for alanine dipeptide transitioning from the right-handed free energy basin to the left-handed basin and (2) the free energy difference between the open and closed conformations of β-cyclodextrin, a "host" molecule that serves as a model for molecular recognition in host-guest binding. β-cyclodextrin contains 147 atoms compared to 22 atoms for alanine dipeptide, making β-cyclodextrin a large molecule for which to compute solvation free energies by free energy perturbation or integration methods and the largest system for which the ER method has been compared to exact free energy methods. The ER method replaced the 28 simulations to compute each coupling free energy with two endpoint simulations, reducing the computational time for the alanine dipeptide calculation by about 70% and for the β-cyclodextrin by > 95%. The method works even when the distribution of conformations on the auxiliary free energy surface differs substantially from that on the target free energy surface, although some degree of overlap between the two surfaces is required. © 2016 Wiley Periodicals, Inc.

  9. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    PubMed Central

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
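
    The host-abstraction idea behind uPy can be pictured with a small adapter pattern: one plugin body runs unchanged against per-host adapters. The class and method names below are illustrative stand-ins, not the actual uPy API.

      class HostAdapter:
          # Uniform interface every supported host must implement.
          def add_sphere(self, name, radius):
              raise NotImplementedError

      class BlenderAdapter(HostAdapter):
          def add_sphere(self, name, radius):
              print(f"[blender] bpy-style sphere '{name}' r={radius}")

      class MayaAdapter(HostAdapter):
          def add_sphere(self, name, radius):
              print(f"[maya] cmds-style sphere '{name}' r={radius}")

      def plugin_body(host):
          # Identical plugin code runs in every supported host.
          host.add_sphere("ribosome", 12.0)

      for adapter in (BlenderAdapter(), MayaAdapter()):
          plugin_body(adapter)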

  10. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    USGS Publications Warehouse

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.
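
    A minimal sketch of the server-side trigger-correlation step described above; the window length, host-count threshold, and trigger reports are invented for illustration, and the real QCN association logic is more elaborate.

      def detect_event(triggers, window=5.0, min_hosts=4):
          # triggers: list of (time_s, host_id); returns candidate detection times.
          triggers = sorted(triggers)
          detections = []
          for i, (t0, _) in enumerate(triggers):
              hosts = {h for t, h in triggers[i:] if t - t0 <= window}
              if len(hosts) >= min_hosts:
                  detections.append(t0)
          return detections

      reports = [(100.1, "A"), (100.4, "B"), (100.9, "C"), (101.2, "D"), (250.0, "A")]
      print(detect_event(reports))  # [100.1] -- the lone trigger at 250 s is ignored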

  11. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    PubMed Central

    Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and the entire set of computational requests is to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated using its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to the selected host. Using the measured time complexity, the required clock frequency of the host is computed. The CPU frequency is then scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption. PMID:27239551

  12. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud.

    PubMed

    Florence, A Paulin; Shanthi, V; Simon, C B Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a "Pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and the entire set of computational requests is to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated using its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to the selected host. Using the measured time complexity, the required clock frequency of the host is computed. The CPU frequency is then scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption.
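
    A minimal sketch of the frequency-selection idea, assuming invented operating points, cycle costs, and deadlines; the paper's actual estimation of time complexity from request behavior is not reproduced here. Dynamic power grows roughly with the cube of frequency, so the lowest frequency that still meets the deadline saves energy.

      import math

      def required_frequency(n, complexity, deadline_s, cycles_per_op=10):
          # Estimate work from the asymptotic complexity, then the needed clock rate.
          ops = {"O(n)": n, "O(n log n)": n * math.log2(n), "O(n^2)": n * n}[complexity]
          return ops * cycles_per_op / deadline_s  # Hz

      AVAILABLE_GHZ = [0.8, 1.2, 1.6, 2.0, 2.6]  # hypothetical DVFS operating points
      need_hz = required_frequency(n=50_000_000, complexity="O(n log n)", deadline_s=10)
      chosen = min(f for f in AVAILABLE_GHZ if f * 1e9 >= need_hz)
      print(f"need {need_hz / 1e9:.2f} GHz -> scale host to {chosen} GHz")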

  13. Reconfigurable Processing Module

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin; Hodson, Robert; Jones, Robert; Williams, John

    2005-01-01

    To accommodate a wide spectrum of applications and technologies, NASA's Exploration Systems Mission Directorate has called for reconfigurable and modular technologies to support future missions to the Moon and Mars. In response, Langley Research Center is leading a program entitled Reconfigurable Scaleable Computing (RSC) that is centered on the development of FPGA-based computing resources in a stackable form factor. This paper details the architecture and implementation of the Reconfigurable Processing Module (RPM), which is the key element of the RSC system. The RPM is an FPGA-based, space-qualified printed circuit assembly leveraging terrestrial/commercial design standards into the space applications domain. The form factor is similar to, and backwards compatible with, the PCI-104 standard, utilizing only the PCI interface. The size is expanded to accommodate the required functionality while remaining more than 30% smaller than a 3U CompactPCI (trademark) card and without the overhead of the backplane. The architecture is built around two FPGA devices, one hosting the PCI and memory interfaces, and another hosting mission application resources; the two are connected by a high-speed data bus. The PCI interface FPGA provides access via the PCI bus to onboard SDRAM, flash PROM, and the application resources, for both configuration management and runtime interaction. The reconfigurable FPGA, referred to as the Application FPGA - or simply "the application" - is a radiation-tolerant Xilinx Virtex-4 FX60 hosting custom application-specific logic or soft microprocessor IP. The RPM implements various SEE mitigation techniques including TMR, EDAC, and configuration scrubbing of the reconfigurable FPGA. Prototype hardware and formal modeling techniques are used to explore the performability trade space. These models provide a novel way to calculate quality-of-service performance measures while simultaneously considering fault-related behavior due to SEE soft errors.

  14. Structural Transition and Antibody Binding of EBOV GP and ZIKV E Proteins from Pre-Fusion to Fusion-Initiation State.

    PubMed

    Lappala, Anna; Nishima, Wataru; Miner, Jacob; Fenimore, Paul; Fischer, Will; Hraber, Peter; Zhang, Ming; McMahon, Benjamin; Tung, Chang-Shung

    2018-05-10

    Membrane fusion proteins are responsible for viral entry into host cells—a crucial first step in viral infection. These proteins undergo large conformational changes from pre-fusion to fusion-initiation structures, and, despite differences in viral genomes and disease etiology, many fusion proteins are arranged as trimers. Structural information for both pre-fusion and fusion-initiation states is critical for understanding virus neutralization by the host immune system. In the case of Ebola virus glycoprotein (EBOV GP) and Zika virus envelope protein (ZIKV E), pre-fusion state structures have been identified experimentally, but only partial structures of fusion-initiation states have been described. While the fusion-initiation structure is in an energetically unfavorable state that is difficult to solve experimentally, the existing structural information combined with computational approaches enabled the modeling of fusion-initiation state structures of both proteins. These structural models provide an improved understanding of four different neutralizing antibodies in the prevention of viral host entry.

  15. Supramolecular binding and separation of hydrocarbons within a functionalized porous metal–organic framework

    DOE PAGES

    Yang, Sihai; Ramirez-Cuesta, Anibal J.; Newby, Ruth; ...

    2014-12-01

    Supramolecular interactions are fundamental to host–guest binding in many chemical and biological processes. Direct visualization of such supramolecular interactions within host–guest systems is extremely challenging, but crucial to understanding their function. Within this paper, we report a comprehensive study that combines neutron scattering, synchrotron X-ray and neutron diffraction, and computational modelling to define the detailed binding at a molecular level of acetylene, ethylene and ethane within the porous host NOTT-300. This study reveals simultaneous and cooperative hydrogen-bonding, π···π stacking interactions and intermolecular dipole interactions in the binding of acetylene and ethylene to give up to 12 individual weak supramolecular interactions aligned within the host to form an optimal geometry for the selective binding of hydrocarbons. In addition, we also report the cooperative binding of a mixture of acetylene and ethylene within the porous host, together with the corresponding breakthrough experiments and analysis of adsorption isotherms of gas mixtures.

  16. Models of microbiome evolution incorporating host and microbial selection.

    PubMed

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong parental contribution, when host-mediated selection acts on microbes concomitantly. We present a computational framework that integrates different selective processes acting on the evolution of microbiomes. Our framework demonstrates that selection acting on microbes can have a strong effect on microbial diversities and fitnesses, whereas selection on hosts can have weaker outcomes.
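
    One host generation of such a model can be sketched as follows (OTU count, community size, fitness values, and the parental-contribution parameter are all invented): offspring communities are multinomial samples from a mixture of the parent's microbiome and the environmental pool, biased by microbial fitness.

      import numpy as np

      rng = np.random.default_rng(1)
      n_otus, community_size = 5, 100
      fitness = np.array([1.0, 1.2, 0.8, 1.0, 1.1])  # trait-mediated microbial selection
      env_pool = np.full(n_otus, 1 / n_otus)         # fixed environmental OTU frequencies

      def next_generation(parent_counts, parental_contribution=0.7):
          parent_freq = parent_counts / parent_counts.sum()
          source = parental_contribution * parent_freq + (1 - parental_contribution) * env_pool
          weights = source * fitness                 # selection biases colonization
          return rng.multinomial(community_size, weights / weights.sum())

      community = rng.multinomial(community_size, env_pool)
      for _ in range(5):
          community = next_generation(community)
      print(community)  # OTU index 1 (fitness 1.2) tends to rise in frequency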

  17. Real time imaging and infrared background scene analysis using the Naval Postgraduate School infrared search and target designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Bernier, Jean D.

    1991-09-01

    The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software development in protected-mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer, where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing between the frame grabber and the host computer AT bus, through a JDR-PR10 prototype card, enables each load of the frame grabber memory buffers to be effected under software control. The protected-mode assembly language program can refresh the display of a six-degree pseudo-color sector of the scanner rotation within the scanner's two-second period. A study of the imaging properties of the NPS-IRSTD is presented, with preliminary work on image analysis and contrast enhancement of infrared background scenes.

  18. Local free energies for the coarse-graining of adsorption phenomena: The interacting pair approximation

    NASA Astrophysics Data System (ADS)

    Pazzona, Federico G.; Pireddu, Giovanni; Gabrieli, Andrea; Pintus, Alberto M.; Demontis, Pierfranco

    2018-05-01

    We investigate the coarse-graining of host-guest systems under the perspective of the local distribution of pore occupancies, along with the physical meaning and actual computability of the coarse-interaction terms. We show that the widely accepted approach, in which the contributions to the free energy given by the molecules located in two neighboring pores are estimated through Monte Carlo simulations where the two pores are kept separated from the rest of the system, leads to inaccurate results at high sorbate densities. In the coarse-graining strategy that we propose, which is based on the Bethe-Peierls approximation, density-independent interaction terms are instead computed according to local effective potentials that take into account the correlations between the pore pair and its surroundings by means of mean-field correction terms without the need for simulating the pore pair separately. Use of the interaction parameters obtained this way allows the coarse-grained system to reproduce more closely the equilibrium properties of the original one. Results are shown for lattice-gases where the local free energy can be computed exactly and for a system of Lennard-Jones particles under the effect of a static confining field.

  19. Fuzzy logic based robotic controller

    NASA Technical Reports Server (NTRS)

    Attia, F.; Upadhyaya, M.

    1994-01-01

    Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified Cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to the lack of proper linearization of these effects, modern control theory based on state-space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical models must be evaluated in real time. Fuzzy logic control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach expresses end-point error, the location of manipulator joints, and proximity to obstacles as fuzzy variables; the resulting decisions are based upon linguistic and non-numerical information. This paper presents a solution for a robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach, as obtained from a software implementation, are also discussed.
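
    A minimal sketch of one fuzzy inference step of the kind described (membership functions and rule consequents are invented): the end-point error is fuzzified into linguistic sets, rules fire to degrees, and a centroid-style weighted average yields a crisp joint command.

      def tri(x, a, b, c):
          # Triangular membership function with support [a, c] and peak at b.
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def fuzzy_joint_command(error):
          # Degrees of membership in three linguistic error sets.
          neg = tri(error, -2.0, -1.0, 0.0)
          zero = tri(error, -1.0, 0.0, 1.0)
          pos = tri(error, 0.0, 1.0, 2.0)
          # Rule consequents: crisp joint-velocity action for each rule.
          actions = {-0.5: neg, 0.0: zero, 0.5: pos}
          num = sum(a * w for a, w in actions.items())
          den = sum(actions.values()) or 1.0
          return num / den  # centroid-style defuzzification

      print(fuzzy_joint_command(0.3))  # small positive correction (0.15)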

  20. Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 6: IPAD system development and operation

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.; Tripp, L. L.; Kawaguchi, A. S.; Miller, R. E., Jr.

    1973-01-01

    The IPAD implementation plan presented here proposes a three-phase development of the IPAD system and technical modules, and the transfer of this capability from the development environment to the aerospace vehicle design environment. The system and technical module capabilities for each phase of development are described. The system and technical module programming languages are recommended, as well as the initial host computer system hardware and operating system. The cost of developing the IPAD technology is estimated. A schedule displaying the flowtime required for each development task is given. A PERT chart gives the developmental relationships of the tasks, and an estimate of the operational cost of the IPAD system is offered.

  1. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ...--Intersection of Cloud Computing and Mobility Forum and Workshop AGENCY: National Institute of Standards and.../intersection-of-cloud-and-mobility.cfm . SUPPLEMENTARY INFORMATION: NIST hosted six prior Cloud Computing Forum... interoperability, portability, and security, discuss the Federal Government's experience with cloud computing...

  2. Systems analysis of multiple regulator perturbations allows discovery of virulence factors in Salmonella

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hyunjin; Ansong, Charles; McDermott, Jason E.

    Background: Systemic bacterial infections are highly regulated and complex processes that are orchestrated by numerous virulence factors. Genes that are coordinately controlled by the set of regulators required for systemic infection are potentially required for pathogenicity. Results: In this study we present a systems biology approach in which sample-matched multi-omic measurements of fourteen virulence-essential regulator mutants were coupled with computational network analysis to efficiently identify Salmonella virulence factors. Immunoblot experiments verified network-predicted virulence factors, and a subset was determined to be secreted into the host cytoplasm, suggesting that they are virulence factors directly interacting with host cellular components. Two of these, SrfN and PagK2, were required for full mouse virulence and were shown to be translocated independent of either of the type III secretion systems in Salmonella or the type III injectisome-related flagellar mechanism. Conclusions: Integrating multi-omic datasets from Salmonella mutants lacking virulence regulators not only identified novel virulence factors but also defined a new class of translocated effectors involved in pathogenesis. The success of this strategy at discovery of known and novel virulence factors suggests that the approach may have applicability for other bacterial pathogens.

  3. Integrated large view angle hologram system with multi-slm

    NASA Astrophysics Data System (ADS)

    Yang, ChengWei; Liu, Juan

    2017-10-01

    Recently, holographic display has attracted much attention for its ability to generate real-time 3D reconstructed images. CGH provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually very heavy and complex, and the viewing angle is limited by the pixel size and spatial bandwidth product (SBP) of the SLM. In this paper a light, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space occupied in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point-source method. To reduce memory usage and image distortion, we use an optimized accurate compressed look-up table method (AC-LUT) to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading the phase-only hologram for a different viewing angle of the object; the horizontal viewing angle of the reconstructed image can be expanded to about 21.8°.
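
    A minimal sketch of point-source Fresnel CGH computation of the kind the system performs (grid size, wavelength, pixel pitch, and object points are illustrative, and the paper's accurate compressed look-up table optimization is not reproduced): each object point contributes a spherical wave, and the phase of the summed field is what the SLM displays.

      import numpy as np

      wavelength = 532e-9
      k = 2 * np.pi / wavelength
      N, pitch = 256, 8e-6                 # hologram pixels and pixel pitch (toy scale)
      x = (np.arange(N) - N / 2) * pitch
      X, Y = np.meshgrid(x, x)

      # Object as a few point sources: (x, y, z, amplitude), values illustrative.
      points = [(0.0, 0.0, 0.10, 1.0), (3e-4, -2e-4, 0.12, 0.8)]

      field = np.zeros((N, N), dtype=complex)
      for px, py, pz, amp in points:
          r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
          field += amp * np.exp(1j * k * r) / r  # spherical wave from each point

      phase_hologram = np.angle(field)  # phase-only hologram for the SLM
      print(phase_hologram.shape, phase_hologram.min(), phase_hologram.max())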

  4. Oxygen Modulates the Effectiveness of Granuloma Mediated Host Response to Mycobacterium tuberculosis: A Multiscale Computational Biology Approach

    PubMed Central

    Sershen, Cheryl L.; Plimpton, Steven J.; May, Elebeoba E.

    2016-01-01

    Mycobacterium tuberculosis associated granuloma formation can be viewed as a structural immune response that can contain and halt the spread of the pathogen. In several mammalian hosts, including non-human primates, Mtb granulomas are often hypoxic, although this has not been observed in wild type murine infection models. While a presumed consequence, the structural contribution of the granuloma to oxygen limitation and the concomitant impact on Mtb metabolic viability and persistence remains to be fully explored. We develop a multiscale computational model to test to what extent in vivo Mtb granulomas become hypoxic, and investigate the effects of hypoxia on host immune response efficacy and mycobacterial persistence. Our study integrates a physiological model of oxygen dynamics in the extracellular space of alveolar tissue, an agent-based model of cellular immune response, and a systems biology-based model of Mtb metabolic dynamics. Our theoretical studies suggest that the dynamics of granuloma organization mediates oxygen availability and illustrates the immunological contribution of this structural host response to infection outcome. Furthermore, our integrated model demonstrates the link between structural immune response and mechanistic drivers influencing Mtb's adaptation to its changing microenvironment and the qualitative infection outcome scenarios of clearance, containment, dissemination, and a newly observed theoretical outcome of transient containment. We observed hypoxic regions in the containment granuloma similar in size to granulomas found in mammalian in vivo models of Mtb infection. In the case of the containment outcome, our model uniquely demonstrates that immune response mediated hypoxic conditions help foster the shift down of bacteria through two stages of adaptation similar to the in vitro non-replicating persistence (NRP) observed in the Wayne model of Mtb dormancy. The adaptation in part contributes to the ability of Mtb to remain dormant for years after initial infection. PMID:26913242

  5. Oxygen Modulates the Effectiveness of Granuloma Mediated Host Response to Mycobacterium tuberculosis: A Multiscale Computational Biology Approach.

    PubMed

    Sershen, Cheryl L; Plimpton, Steven J; May, Elebeoba E

    2016-01-01

    Mycobacterium tuberculosis associated granuloma formation can be viewed as a structural immune response that can contain and halt the spread of the pathogen. In several mammalian hosts, including non-human primates, Mtb granulomas are often hypoxic, although this has not been observed in wild type murine infection models. While a presumed consequence, the structural contribution of the granuloma to oxygen limitation and the concomitant impact on Mtb metabolic viability and persistence remains to be fully explored. We develop a multiscale computational model to test to what extent in vivo Mtb granulomas become hypoxic, and investigate the effects of hypoxia on host immune response efficacy and mycobacterial persistence. Our study integrates a physiological model of oxygen dynamics in the extracellular space of alveolar tissue, an agent-based model of cellular immune response, and a systems biology-based model of Mtb metabolic dynamics. Our theoretical studies suggest that the dynamics of granuloma organization mediates oxygen availability and illustrates the immunological contribution of this structural host response to infection outcome. Furthermore, our integrated model demonstrates the link between structural immune response and mechanistic drivers influencing Mtb's adaptation to its changing microenvironment and the qualitative infection outcome scenarios of clearance, containment, dissemination, and a newly observed theoretical outcome of transient containment. We observed hypoxic regions in the containment granuloma similar in size to granulomas found in mammalian in vivo models of Mtb infection. In the case of the containment outcome, our model uniquely demonstrates that immune response mediated hypoxic conditions help foster the shift down of bacteria through two stages of adaptation similar to the in vitro non-replicating persistence (NRP) observed in the Wayne model of Mtb dormancy. The adaptation in part contributes to the ability of Mtb to remain dormant for years after initial infection.

  6. A modularized pulse programmer for NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Mao, Wenping; Bao, Qingjia; Yang, Liang; Chen, Yiqun; Liu, Chaoyang; Qiu, Jianqing; Ye, Chaohui

    2011-02-01

    A modularized pulse programmer for a NMR spectrometer is described. It consists of a networked PCI-104 single-board computer and a field programmable gate array (FPGA). The PCI-104 is dedicated to translate the pulse sequence elements from the host computer into 48-bit binary words and download these words to the FPGA, while the FPGA functions as a sequencer to execute these binary words. High-resolution NMR spectra obtained on a home-built spectrometer with four pulse programmers working concurrently demonstrate the effectiveness of the pulse programmer. Advantages of the module include (1) once designed it can be duplicated and used to construct a scalable NMR/MRI system with multiple transmitter and receiver channels, (2) it is a totally programmable system in which all specific applications are determined by software, and (3) it provides enough reserve for possible new pulse sequences.
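
    To make the translation into 48-bit words concrete, here is a minimal sketch of packing one pulse-sequence element; the field layout (8-bit opcode, 8-bit channel mask, 32-bit duration) is an invented example, not the paper's actual format.

      def pack_word(opcode, channel_mask, duration_ticks):
          # Pack three fields into one 48-bit sequencer word (layout is hypothetical).
          assert opcode < (1 << 8) and channel_mask < (1 << 8)
          assert duration_ticks < (1 << 32)
          word = (opcode << 40) | (channel_mask << 32) | duration_ticks
          return word.to_bytes(6, "big")  # 48 bits = 6 bytes for the FPGA sequencer

      # e.g. "assert RF gate on channels 0 and 2 for 1000 ticks"
      print(pack_word(opcode=0x01, channel_mask=0b00000101, duration_ticks=1000).hex())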

  7. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
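
    The per-cell integration work can be illustrated with a scalar KdV solver (u_t + 6*u*u_x + u_xxx = 0) on a periodic grid; the grid size, time step, and explicit-Euler scheme below are illustrative choices, not the DCMARK implementation.

      import numpy as np

      N, dx, dt = 200, 0.5, 1e-4
      x = np.arange(N) * dx
      u = 0.5 / np.cosh(0.5 * (x - 25)) ** 2   # single-soliton initial condition

      def step(u):
          # Central differences on a periodic grid (np.roll wraps the boundaries).
          ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
          uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
                  + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx ** 3)
          return u - dt * (6 * u * ux + uxxx)

      for _ in range(1000):
          u = step(u)
      print(f"mass = {u.sum() * dx:.4f} (approximately conserved by the scheme)")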

  8. A Novel Centrality Measure for Network-wide Cyber Vulnerability Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathanur, Arun V.; Haglin, David J.

    In this work we propose a novel formulation that models the attack and compromise on a cyber network as a combination of two parts - direct compromise of a host, and compromise occurring through the spread of the attack on the network from a compromised host. The model parameters for the nodes are a concise representation of the host profiles, which can include the risky behaviors of the associated human users, while the model parameters for the edges are based on the existence of vulnerabilities between each pair of connected hosts. The edge models relate to summary representations of the corresponding attack graphs. This results in a formulation based on Random Walk with Restart (RWR), and the resulting centrality metric can be computed efficiently through the use of sparse linear solvers. The formulation thus goes beyond mere topological considerations in centrality computations by summarizing the host profiles and the attack graphs into the model parameters. The computational efficiency of the method also allows us to quantify the uncertainty in the centrality measure through Monte Carlo analysis.
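
    A minimal sketch of the RWR solve on a toy four-host network (transition weights and the restart vector are invented): the centrality vector r satisfies (I - alpha*W) r = (1 - alpha) e, which a sparse linear solver handles directly.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spsolve

      # Column-stochastic transition matrix for a 4-host network (toy weights).
      W = np.array([[0.0, 0.5, 0.3, 0.0],
                    [0.5, 0.0, 0.3, 0.5],
                    [0.5, 0.0, 0.0, 0.5],
                    [0.0, 0.5, 0.4, 0.0]])
      alpha = 0.85
      e = np.full(4, 0.25)             # restart distribution (host compromise profile)
      A = sp.identity(4, format="csc") - alpha * sp.csc_matrix(W)
      r = spsolve(A, (1 - alpha) * e)  # stationary RWR vector = vulnerability centrality
      print(np.round(r / r.sum(), 3))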

  9. Co-scheduling of network resource provisioning and host-to-host bandwidth reservation on high-performance network and storage systems

    DOEpatents

    Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie

    2014-04-22

    A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable medium are also disclosed.

  10. The effects of host diversity on vector-borne disease: the conditions under which diversity will amplify or dilute the disease risk.

    PubMed

    Miller, Ezer; Huppert, Amit

    2013-01-01

    Multihost vector-borne infectious diseases form a significant fraction of the global infectious disease burden. In this study we explore the relationship between host diversity, vector behavior, and disease risk. To this end, we have developed a new dynamic model which includes two distinct host species and one vector species with variable preferences. With the aid of the model we were able to compute the basic reproductive rate, R0, a well-established measure of disease risk that serves as a threshold parameter for disease outbreak. The model analysis reveals that the system has two different qualitative behaviors: (i) the well-known dilution effect, where the maximal R0 is obtained in a community consisting of a single host, and (ii) a new amplification effect, denoted by us as diversity amplification, where the maximal R0 is attained in a community containing both hosts. The model analysis extends previous results by underlining the mechanisms of both diversity amplification and dilution, and specifies the exact conditions for their occurrence. We have found that diversity amplification occurs when the vector prefers the host with the higher transmission ability, and dilution is obtained when the vector does not show any preference, or prefers to bite the host with the lower transmission ability. The mechanisms of dilution and diversity amplification are able to account for the different and contradictory patterns often observed in nature (i.e., in some cases disease risk increases while in others it decreases as diversity increases). The diversity amplification mechanism also challenges current premises about the interaction between biodiversity, climate change, and disease risk, and calls for retrospective thinking in planning intervention policies aimed at protecting the preferred host species.
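
    A minimal sketch of computing R0 as the spectral radius of a next-generation-style matrix for a two-host/one-vector system; all transmission and preference values are invented, and the rates are assumed to be pre-scaled by infectious periods and population ratios (the paper's full model is richer).

      import numpy as np

      # Host->vector and vector->host transmission terms, shaped by vector preference.
      beta_hv = np.array([0.30, 0.05])   # transmission ability of host 1 and host 2
      beta_vh = np.array([0.20, 0.20])
      pref = np.array([0.8, 0.2])        # vector biting preference for each host

      K = np.zeros((3, 3))               # state order: host 1, host 2, vector
      K[2, 0] = pref[0] * beta_hv[0]     # infected host 1 -> new vector infections
      K[2, 1] = pref[1] * beta_hv[1]
      K[0, 2] = pref[0] * beta_vh[0]     # infected vector -> new host 1 infections
      K[1, 2] = pref[1] * beta_vh[1]

      R0 = max(abs(np.linalg.eigvals(K)))  # spectral radius = threshold parameter
      print(f"R0 = {R0:.3f}")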

  11. A technique for incorporating the NASA spacelab payload dedicated experiment processor software into the simulation system for the payload crew training complex

    NASA Technical Reports Server (NTRS)

    Bremmer, D. A.

    1986-01-01

    The feasibility of some off-the-shelf microprocessors and state-of-the-art software is assessed (1) as a development system for the principal investigator (PI) in the design of the experiment model, (2) as an example of available technology application for future PIs' experiments, (3) as a system capable of being interactive in the PCTC's simulation of the dedicated experiment processor (DEP), preferably by bringing the PI's DEP software directly into the simulation model, (4) as a system having bus compatibility with host VAX simulation computers, (5) as a system readily interfaced with mock-up panels and information displays, and (6) as a functional system for post-mission data analysis.

  12. Parasitology tutoring system: a hypermedia computer-based application.

    PubMed

    Theodoropoulos, G; Loumos, V

    1994-02-14

    Parasitology is a basic course in all life sciences curricula, and up to now no computer-assisted tutoring system has been developed for teaching it. Using Knowledge Pro, an object-oriented software development tool, a hypermedia tutoring system for teaching parasitology to college students was developed. Generally, a tutoring system contains a domain expert, a student model, a pedagogical expert and the user interface. In this project, particular emphasis was given to the user interface design and the expert knowledge representation. The system allows access to the educational material through hypermedia and indexing, at the pace of the student. The hypermedia access is facilitated through key words defined as hypertext and objects in pictures defined as hyper-areas. The indexing access is based on a list of parameters that refer to various characteristics of the parasites, e.g. taxonomy, host, organ, etc. In addition, this indexing access can be used for testing the student's level of understanding. The advantages of this system are its user-friendliness, graphical interface and ability to incorporate new educational material in the area of parasitology.

  13. A smarter way to search, share and utilize open-spatial online data for energy R&D - Custom machine learning and GIS tools in U.S. DOE's virtual data library & laboratory, EDX

    NASA Astrophysics Data System (ADS)

    Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.

    2017-12-01

    As spatial datasets become increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study the computing team developed a custom, machine learning, big data computing tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart-search approach were validated against expert-driven, manual search results, which had required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, FTP sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools, forming NETL's Offshore Risk Modeling suite, that can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of 'what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.

  14. Simulink-aided Design and Implementation of Sensorless BLDC Motor Digital Control System

    NASA Astrophysics Data System (ADS)

    Zhilenkov, A. A.; Tsvetkov, Y. N.; Chistov, V. B.; Nyrkov, A. P.; Sokolov, S. S.

    2017-07-01

    The paper describes the development of a digital control system for a brushless direct current motor. The target motor has no speed sensor, so the back-EMF method is used for commutation control. The authors show how to model the control system in MatLab/Simulink and test it on board an STM32F4 microcontroller. This approach yields a flexible system that can be controlled from a personal computer over communication lines. Signals in the actuator circuit can be examined without any external measuring instruments - testers, oscilloscopes, etc. - with waveforms and measured signal values output directly on the host PC.
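
    The core of the back-EMF method is detecting zero crossings on the undriven (floating) phase and commutating a fixed electrical angle later. The Python sketch below illustrates that logic offline; the sampling rate, electrical frequency, and 30-degree delay are illustrative assumptions, not values from the paper.

      # Illustrative back-EMF commutation logic: find zero crossings of the
      # floating-phase voltage and schedule commutation 30 electrical
      # degrees (one twelfth of a period) later.
      import numpy as np

      def zero_crossings(bemf):
          """Indices where the floating-phase back-EMF changes sign."""
          return np.where(np.diff(np.sign(bemf)) != 0)[0]

      fs, f_e = 100_000, 300                # sample rate (Hz), electrical freq (Hz)
      t = np.arange(0.0, 0.02, 1.0 / fs)    # 20 ms of samples
      bemf = np.sin(2 * np.pi * f_e * t)    # idealized back-EMF of the open phase
      delay = int(fs / f_e / 12)            # 30 electrical degrees, in samples
      commutation_points = zero_crossings(bemf) + delay
      print(commutation_points[:6])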

  15. Optoelectronic data acquisition system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Xin; Liu, Chunyang; Song, De; Tong, Zhiguo; Liu, Xiangqing

    2015-11-01

    An optoelectronic data acquisition system was designed around an FPGA. The FPGA, an EP1C3T144C8 from Altera's Cyclone family, serves as the logic-control core; an XPT2046 chip is used as the A/D converter; a host computer, communicating with the acquisition system over an RS-232 serial interface, serves as the display device; and a photoresistor is used as the photosensor. The FPGA logic-control code was written in Verilog HDL, and ModelSim simulation confirmed that the timing is correct. Hardware tests indicate that the system meets the design requirements, with fast response and stable operation.
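
    On the host side, reading such a board reduces to pulling framed samples off the serial port. The sketch below uses pyserial; the port name, baud rate, and 2-byte big-endian sample format are assumptions for illustration, since the paper does not specify its wire protocol.

      # Hedged host-side sketch: read 100 two-byte samples over RS-232.
      import serial  # pyserial

      with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
          raw = port.read(2 * 100)                    # up to 100 samples
          samples = [int.from_bytes(raw[i:i + 2], "big")
                     for i in range(0, len(raw) - 1, 2)]
          print(samples)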

  16. Overview of the land analysis system (LAS)

    USGS Publications Warehouse

    Quirk, Bruce K.; Olseson, Lyndon R.

    1987-01-01

    The Land Analysis System (LAS) is a fully integrated digital analysis system designed to support remote sensing, image processing, and geographic information systems research. LAS is being developed through a cooperative effort between the National Aeronautics and Space Administration Goddard Space Flight Center and the U.S. Geological Survey Earth Resources Observation Systems (EROS) Data Center. LAS has over 275 analysis modules capable of performing input and output, radiometric correction, geometric registration, signal processing, logical operations, data transformation, classification, spatial analysis, nominal filtering, conversion between raster and vector data types, and display manipulation of image and ancillary data. LAS is currently implemented using the Transportable Applications Executive (TAE). While TAE was designed primarily to be transportable, it also provides the necessary components for a standard user interface, terminal handling, input and output services, display management, and intersystem communications. With TAE the analyst uses the same interface to the processing modules regardless of the host computer or operating system. LAS was originally implemented at EROS on a Digital Equipment Corporation computer under the VMS (Virtual Memory System) operating system with DeAnza displays and is presently being converted to run on a Gould Power Node and Sun workstations under the Berkeley Software Distribution (BSD) UNIX operating system.

  17. Host social organization and mating system shape parasite transmission opportunities in three European bat species.

    PubMed

    van Schaik, J; Kerth, G

    2017-02-01

    For non-mobile parasites living on social hosts, infection dynamics are strongly influenced by host life history and social system. We explore the impact of host social systems on parasite population dynamics by comparing the infection intensity and transmission opportunities of three mite species of the genus Spinturnix across their three European bat hosts (Myotis daubentonii, Myotis myotis, Myotis nattereri) during the bats' autumn mating season. Mites mainly reproduce in host maternity colonies in summer, but as these colonies are closed, opportunities for inter-colony transmission are limited to host interactions during the autumn mating season. The three investigated hosts differ considerably in their social system, most notably in maternity colony size, mating system, and degree of male summer aggregation. We observed marked differences in parasite infection during the autumn mating period between the species, closely mirroring the predictions made based on the social systems of the hosts. Increased host aggregation sizes in summer yielded higher overall parasite prevalence and intensity, both in male and female hosts. Moreover, parasite levels in male hosts differentially increased throughout the autumn mating season in concordance with the degree of contact with female hosts afforded by the different mating systems of the hosts. Critically, the observed host-specific differences have important consequences for parasite population structure and will thus affect the coevolutionary dynamics between the interacting species. Therefore, in order to accurately characterize host-parasite dynamics in hosts with complex social systems, a holistic approach that investigates parasite infection and transmission across all periods is warranted.

  18. The Fate of Exoplanets and the Red Giant Rapid Rotator Connection

    NASA Astrophysics Data System (ADS)

    Carlberg, Joleen K.; Majewski, Steven R.; Arras, Phil; Smith, Verne V.; Cunha, Katia; Bizyaev, Dmitry

    2011-03-01

    We have computed the fate of exoplanet companions around main sequence stars to explore the frequency of planet ingestion by their host stars during the red giant branch evolution. Using published properties of exoplanetary systems combined with stellar evolution models and Zahn's theory of tidal friction, we modeled the tidal decay of the planets' orbits as their host stars evolve. Most planets currently orbiting within 2 AU of their star are expected to be ingested by the end of their stars' red giant branch ascent. Our models confirm that many transiting planets are sufficiently close to their parent star that they will be accreted during the main sequence lifetime of the star. We also find that planet accretion may play an important role in explaining the mysterious red giant rapid rotators, although appropriate planetary systems do not seem to be plentiful enough to account for all such rapid rotators. We compare our modeled rapid rotators and surviving planetary systems to their real-life counterparts and discuss the implications of this work to the broader field of exoplanets.
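
    As a hedged illustration of this kind of calculation, the sketch below integrates the standard constant-Q equilibrium-tide decay law (Goldreich & Soter) rather than Zahn's full convective-envelope theory, with invented Sun/Jupiter-like parameters. It is a sketch of the technique, not the paper's model.

      import numpy as np
      from scipy.integrate import solve_ivp

      G = 6.674e-11                                    # SI units throughout
      M_star, R_star, Q_star = 1.989e30, 6.957e8, 1e6  # Sun-like host, tidal Q
      M_planet = 1.898e27                              # Jupiter-mass companion
      AU = 1.496e11

      def dadt(t, a):
          # da/dt = -(9/2) sqrt(G/M*) (R*^5 Mp / Q*) a^(-11/2), constant-Q law
          return -4.5 * np.sqrt(G / M_star) * R_star**5 * M_planet / Q_star * a**-5.5

      def engulfed(t, a):                              # stop if the orbit shrinks to R*
          return a[0] - R_star
      engulfed.terminal = True

      sol = solve_ivp(dadt, (0.0, 1e9 * 3.156e7), [0.05 * AU],
                      events=engulfed, rtol=1e-8)
      print(f"semi-major axis after 1 Gyr: {sol.y[0, -1] / AU:.4f} AU")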

  19. Network Analyses in Plant Pathogens.

    PubMed

    Botero, David; Alvarado, Camilo; Bernal, Adriana; Danies, Giovanna; Restrepo, Silvia

    2018-01-01

    Even in the age of big data in Biology, studying the connections between the biological processes and the molecular mechanisms behind them is a challenging task. Systems biology arose as a transversal discipline between biology, chemistry, computer science, mathematics, and physics to facilitate the elucidation of such connections. A scenario, where the application of systems biology constitutes a very powerful tool, is the study of interactions between hosts and pathogens using network approaches. Interactions between pathogenic bacteria and their hosts, both in agricultural and human health contexts are of great interest to researchers worldwide. Large amounts of data have been generated in the last few years within this area of research. However, studies have been relatively limited to simple interactions. This has left great amounts of data that remain to be utilized. Here, we review the main techniques in network analysis and their complementary experimental assays used to investigate bacterial-plant interactions. Other host-pathogen interactions are presented in those cases where few or no examples of plant pathogens exist. Furthermore, we present key results that have been obtained with these techniques and how these can help in the design of new strategies to control bacterial pathogens. The review comprises metabolic simulation, protein-protein interactions, regulatory control of gene expression, host-pathogen modeling, and genome evolution in bacteria. The aim of this review is to offer scientists working on plant-pathogen interactions basic concepts around network biology, as well as an array of techniques that will be useful for a better and more complete interpretation of their data.
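
    A toy example of the network analyses reviewed here is sketched below with networkx: build a small interaction graph and rank nodes by degree and betweenness centrality, since hubs and bottlenecks are common candidates for control strategies. The nodes and edges are invented for illustration, not a curated pathogen network.

      import networkx as nx

      ppi = nx.Graph()
      ppi.add_edges_from([
          ("hrpG", "hrpX"), ("hrpX", "pthA"), ("pthA", "host_TF"),
          ("hrpG", "effectorA"), ("effectorA", "host_TF"), ("host_TF", "SWEET"),
      ])
      # Rank candidate hubs and bottlenecks.
      print(sorted(nx.degree_centrality(ppi).items(), key=lambda kv: -kv[1])[:3])
      print(sorted(nx.betweenness_centrality(ppi).items(), key=lambda kv: -kv[1])[:3])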

  20. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System

    PubMed Central

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processor unit (GPU-) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of DOT programs are developed in this study: (1) conventional C language program augmented by GPU CUDA and CULA routines (C GPU), (2) MATLAB program supported by MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on host CPU and the GPU system is presented for C and Matlab implementations. The forward computation uses finite element method (FEM) and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time, so achieved for one iteration of the DOT reconstruction for 14610 elements, is 0.52 seconds for a C based GPU program for 2-plane measurements. The corresponding MATLAB based GPU program took 0.86 seconds. The maximum number of reconstructed frames so achieved is 2 frames per second. PMID:24891848
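
    Part of the reported speedup comes from the Broyden rank-one update, which corrects the Jacobian with the latest step instead of recomputing it each iteration. The numpy sketch below shows the "good" Broyden update and checks the secant condition; the matrix sizes and values are arbitrary placeholders.

      import numpy as np

      def broyden_update(J, dx, dy):
          """'Good' Broyden update: J + ((dy - J dx) dx^T) / (dx^T dx)."""
          return J + np.outer(dy - J @ dx, dx) / (dx @ dx)

      rng = np.random.default_rng(0)
      J = rng.standard_normal((6, 4))    # Jacobian of a 6-output, 4-parameter map
      dx = rng.standard_normal(4)        # parameter step taken this iteration
      dy = rng.standard_normal(6)        # observed change in the model output
      J = broyden_update(J, dx, dy)
      print(np.allclose(J @ dx, dy))     # secant condition holds by construction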

  2. Deployment and Operational Experiences with CernVM-FS at the GridKa Tier-1 Center

    NASA Astrophysics Data System (ADS)

    Alef, Manfred; Jäger, Axel; Petzold and, Andreas; Verstege, Bernhard

    2012-12-01

    In 2012 the GridKa Tier-1 computing center hosted 130 kHS06 of computing resources along with 14 PB of disk and 17 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system for accessing VO software on the worker nodes, providing a solution tailored to the requirements of the LHC VOs. We will focus on the first operational experiences and on the monitoring of CernVM-FS on the worker nodes and the squid caches.

  3. High Performance Computing Meets Energy Efficiency - Continuum Magazine

    Science.gov Websites

    The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data analysis for energy-efficiency research, including wind-turbine simulations by Patrick J. Moriarty and Matthew J. Churchfield, NREL.

  4. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based on DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud, and volunteer computing. About 15 sites from the BESIII Collaboration all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity can scale accordingly to deal with any burst demands. General computing models are addressed herewith, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are addressed as well.

  5. Complex dynamics induced by strong confinement - From tracer diffusion in strongly heterogeneous media to glassy relaxation of dense fluids in narrow slits

    NASA Astrophysics Data System (ADS)

    Mandal, Suvendu; Spanner-Denzer, Markus; Leitmann, Sebastian; Franosch, Thomas

    2017-08-01

    We provide an overview of recent advances in the complex dynamics of particles under strong confinement. The first paradigm is the Lorentz model, in which tracers explore a quenched disordered host structure. Such systems naturally occur as limiting cases of binary glass-forming systems when the dynamics of one component is much faster than that of the other. At a certain critical density of the host structure the tracers undergo a localization transition, which constitutes a critical phenomenon. A series of predictions in the vicinity of the transition have been elaborated and tested against computer simulations, and analytical progress has been achieved for small obstacle densities. The second paradigm is a dense, strongly interacting liquid confined to a narrow slab. There the glass transition depends nonmonotonically on the separation of the plates, due to an interplay of local packing and layering. Very small slab widths allow certain features of the statics and dynamics to be addressed analytically.

  6. Selective Automatic Fire Extinguisher for Computers (SAFECOMP). Developmental Test and Evaluation/Initial Operational Test and Evaluation

    DTIC Science & Technology

    1990-01-01

    ...will present interesting challenges for the SAFECOMP system. The Powell site's remote location from Malmstrom AFB (Host Support Base) requires the

  7. Optical Laser Technology and Its Application to Defense Manpower Data Center’s (DMDC) Query Facsimile (QFAX) Data Base System

    DTIC Science & Technology

    1989-03-01

    a single record may be appended, updated and deleted, and may be accessed serially, by record number within a file, or by index value. f. Host Computer...

  8. Protecting Information: The Role of Community Colleges in Cybersecurity Education. A Report from a Workshop Sponsored by the National Science Foundation and the American Association of Community Colleges (Washington, DC, June 26-28, 2002).

    ERIC Educational Resources Information Center

    American Association of Community Colleges, Washington, DC.

    The education and training of the cybersecurity workforce is an essential element in protecting the nation's computer and information systems. On June 26-28, 2002, the National Science Foundation supported a cybersecurity education workshop hosted by the American Association of Community Colleges. The goals of the workshop were to map out the role…

  9. Spectral Graph Theory Analysis of Software-Defined Networks to Improve Performance and Security

    DTIC Science & Technology

    2015-09-01

    listed with its associated IP address. 3. Hardware Components The hardware in the test bed included HP switches and Raspberry Pis. Two types of...discernible difference between the two types. The hosts in the network are Raspberry Pis [58], which are small, inexpensive computers with 10/100... Pis ran one of four operating systems: Raspbian, ArchLinux, Kali, and Windows 10. All of the Raspberry Pis were configured with Iperf [59

  10. Implementation of GAMMON - An efficient load balancing strategy for a local computer system

    NASA Technical Reports Server (NTRS)

    Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.

    1989-01-01

    GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
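
    The essential move is pairing the most and least loaded hosts found by the broadcast search and shifting work from one to the other. The sketch below is a minimal, hypothetical illustration of that idea only; it is not the published GAMMON protocol, and the host names and threshold are invented.

      def balance_step(loads):
          """loads: dict host -> queued jobs; returns one migration or None."""
          src = max(loads, key=loads.get)       # most loaded host
          dst = min(loads, key=loads.get)       # least loaded host
          if loads[src] - loads[dst] < 2:
              return None                       # nothing worth moving
          loads[src] -= 1
          loads[dst] += 1
          return (src, dst)

      loads = {"hostA": 9, "hostB": 2, "hostC": 5}
      while (move := balance_step(loads)) is not None:
          print("migrate one job", move, "->", loads)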

  11. Frame Decoder for Consultative Committee for Space Data Systems (CCSDS)

    NASA Technical Reports Server (NTRS)

    Reyes, Miguel A. De Jesus

    2014-01-01

    GNU Radio is a free and open source development toolkit that provides signal processing blocks to implement software radios. It can be used with low-cost external RF hardware to create software-defined radios, or without hardware in a simulation-like environment. GNU Radio applications are primarily written in Python and C++. The Universal Software Radio Peripheral (USRP) is a computer-hosted software radio designed by Ettus Research; it connects to a host computer via high-speed Gigabit Ethernet, and using the open source Universal Hardware Driver (UHD) we can run GNU Radio applications on the USRP. An SDR is a "radio in which some or all physical layer functions are software defined" (IEEE definition), i.e., a radio communication system in which components that have typically been implemented in hardware are implemented in software instead. GNU Radio has a generic packet decoder block that is not optimized for CCSDS frames: it adds bytes to the CCSDS frames and does not permit bit-error correction using Reed-Solomon. The CCSDS frames consist of 256 bytes, including a 32-bit sync marker (0x1ACFFC1D). These frames are generated by the Space Data Processor, and GNU Radio performs the modulation and framing operations, including frame synchronization.
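
    Frame synchronization against that marker reduces to scanning the byte stream for 0x1ACFFC1D and slicing fixed-length frames. The sketch below shows the idea in Python; it omits error correction and assumes, for illustration, an uncoded stream with the marker included in the 256-byte frame as described above.

      # Hedged sketch of CCSDS frame synchronization.
      SYNC = bytes.fromhex("1ACFFC1D")
      FRAME_LEN = 256

      def extract_frames(stream: bytes):
          frames, i = [], stream.find(SYNC)
          while i != -1 and i + FRAME_LEN <= len(stream):
              frames.append(stream[i:i + FRAME_LEN])
              i = stream.find(SYNC, i + FRAME_LEN)
          return frames

      # Two back-to-back dummy frames: marker plus 252 bytes of payload each.
      stream = (SYNC + bytes(252)) * 2
      print(len(extract_frames(stream)), "frames recovered")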

  12. The feasibility of mobile computing for on-site inspection.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horak, Karl Emanuel; DeLand, Sharon Marie; Blair, Dianna Sue

    With over 5 billion cellphones in a world of 7 billion inhabitants, mobile phones are the most quickly adopted consumer technology in the history of the world. Miniaturized, power-efficient sensors, especially video-capable cameras, are becoming extremely widespread, especially when one factors in wearable technology like Apple's Pebble, GoPro video systems, Google Glass, and lifeloggers. Tablet computers are becoming more common, lighter weight, and power-efficient. In this report the authors explore recent developments in mobile computing and their potential application to on-site inspection for arms control verification and treaty compliance determination. We examine how such technology can effectively be applied to current and potential future inspection regimes. Use cases are given for both host-escort and inspection teams. The results of field trials and their implications for on-site inspections are discussed.

  13. Rio: a dynamic self-healing services architecture using Jini networking technology

    NASA Astrophysics Data System (ADS)

    Clarke, James B.

    2002-06-01

    Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust, transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. For this to happen, the application is usually taken down, installed, and restarted with all systems in sync and aware of each other. Static environments such as these are extremely difficult to set up, deploy, and administer.

  14. NASTRAN migration to UNIX

    NASA Technical Reports Server (NTRS)

    Chan, Gordon C.; Turner, Horace Q.

    1990-01-01

    COSMIC/NASTRAN, as it is supported and maintained by COSMIC, runs on four mainframe computers: CDC, VAX, IBM, and UNIVAC. COSMIC/NASTRAN on other computers, such as CRAY, AMDAHL, PRIME, CONVEX, etc., is available commercially from a number of third-party organizations. All these computers, with their own one-of-a-kind operating systems, make NASTRAN machine dependent. The job control language (JCL), the file management, and the program execution procedures of these computers are vastly different, although 95 percent of the NASTRAN source code was written in standard ANSI FORTRAN 77. The advantage of the UNIX operating system is that it has no machine boundary. UNIX is becoming widely used in many workstations, minis, super-PCs, and even some mainframe computers. NASTRAN for the UNIX operating system is definitely the way to go in the future, and makes NASTRAN available to a host of computers, big and small. Since 1985, many NASTRAN improvements and enhancements have been made to conform to the ANSI FORTRAN 77 standard. A major UNIX migration effort was incorporated into the COSMIC NASTRAN 1990 release. As pioneering work for the UNIX environment, a version of COSMIC 89 NASTRAN was officially released in October 1989 for the DEC ULTRIX VAXstation 3100 (with VMS extensions). A COSMIC 90 NASTRAN version for the DEC ULTRIX DECstation 3100 (RISC-based) was planned for April 1990 release. Both workstations are UNIX-based computers. The COSMIC 90 NASTRAN will be made available on a TK50 tape for the DEC ULTRIX workstations. Earlier, in 1988, an 88 NASTRAN version was tested successfully on a SiliconGraphics workstation.

  15. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm sup 3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm sup 2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
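
    Mapping problems of this kind often reduce to partitioning a chain of m module weights into n contiguous blocks so that the bottleneck load is minimized. The sketch below shows a classic probe-plus-binary-search technique for that reduction; it is offered as an illustration of the problem class, not necessarily the paper's own algorithm.

      def probe(weights, n, B):
          """Can the chain be cut into <= n contiguous pieces, each summing <= B?"""
          pieces, current = 1, 0
          for w in weights:
              if w > B:
                  return False
              if current + w > B:
                  pieces, current = pieces + 1, w
              else:
                  current += w
          return pieces <= n

      def min_bottleneck(weights, n):
          lo, hi = max(weights), sum(weights)
          while lo < hi:                    # binary search over the bottleneck B
              mid = (lo + hi) // 2
              lo, hi = (lo, mid) if probe(weights, n, mid) else (mid + 1, hi)
          return lo

      print(min_bottleneck([4, 7, 2, 5, 9, 1, 3], n=3))   # -> 13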

  16. Modeling the Regulatory Mechanisms by Which NLRX1 Modulates Innate Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Philipson, Casandra W.; Bassaganya-Riera, Josep; Viladomiu, Monica; Kronsteiner, Barbara; Abedi, Vida; Hoops, Stefan; Michalak, Pawel; Kang, Lin; Girardin, Stephen E.; Hontecillas, Raquel

    2015-01-01

    Helicobacter pylori colonizes half of the world’s population as the dominant member of the gastric microbiota resulting in a lifelong chronic infection. Host responses toward the bacterium can result in asymptomatic, pathogenic or even favorable health outcomes; however, mechanisms underlying the dual role of H. pylori as a commensal versus pathogenic organism are not well characterized. Recent evidence suggests mononuclear phagocytes are largely involved in shaping dominant immunity during infection mediating the balance between host tolerance and succumbing to overt disease. We combined computational modeling, bioinformatics and experimental validation in order to investigate interactions between macrophages and intracellular H. pylori. Global transcriptomic analysis on bone marrow-derived macrophages (BMDM) in a gentamycin protection assay at six time points unveiled the presence of three sequential host response waves: an early transient regulatory gene module followed by sustained and late effector responses. Kinetic behaviors of pattern recognition receptors (PRRs) are linked to differential expression of spatiotemporal response waves and function to induce effector immunity through extracellular and intracellular detection of H. pylori. We report that bacterial interaction with the host intracellular environment caused significant suppression of regulatory NLRC3 and NLRX1 in a pattern inverse to early regulatory responses. To further delineate complex immune responses and pathway crosstalk between effector and regulatory PRRs, we built a computational model calibrated using time-series RNAseq data. Our validated computational hypotheses are that: 1) NLRX1 expression regulates bacterial burden in macrophages; and 2) early host response cytokines down-regulate NLRX1 expression through a negative feedback circuit. This paper applies modeling approaches to characterize the regulatory role of NLRX1 in mechanisms of host tolerance employed by macrophages to respond to and/or to co-exist with intracellular H. pylori. PMID:26367386
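
    The second hypothesis, that early cytokines down-regulate NLRX1 through negative feedback, can be caricatured as a two-variable ODE system. The sketch below is purely illustrative: the rate constants, the Hill-type repression term, and the 6-hour stimulus window are hypothetical, not the paper's calibrated model.

      from scipy.integrate import solve_ivp

      def feedback(t, y, k_c=1.0, d_c=0.5, k_n=1.0, d_n=1.0, K=0.4):
          C, N = y                                  # cytokine, NLRX1 expression
          stimulus = 1.0 if t < 6.0 else 0.0        # transient bacterial signal
          dC = k_c * stimulus - d_c * C             # cytokine induction and decay
          dN = k_n * K / (K + C) - d_n * N          # NLRX1 production repressed by C
          return [dC, dN]

      sol = solve_ivp(feedback, (0.0, 24.0), [0.0, 1.0], max_step=0.1)
      print("NLRX1 minimum:", sol.y[1].min())       # transient dip, then recovery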

  17. Using diffusion k-means for simple stellar population modeling of low S/N quasar host galaxy spectra

    NASA Astrophysics Data System (ADS)

    Mosby, Gregory; Tremonti, Christina A.; Hooper, Eric; Wolf, Marsha J.; Sheinis, Andrew; Richards, Joseph

    2016-01-01

    Quasar host galaxies (QHGs) represent a unique stage in galaxy evolution that can provide a glimpse into the relationship between an active supermassive black hole (SMBH) and its host galaxy. However, observing the hosts of high luminosity, unobscured quasars in the optical is complicated by the large ratio of quasar to host galaxy light. One strategy in optical spectroscopy is to use offset longslit observations of the host galaxy. This method allows the centers of QHGs to be analyzed apart from other regions of their host galaxies. But light from the accreting black hole's point spread function still enters the host galaxy observations, and where the contrast between the host and intervening quasar light is favorable, the host galaxy is faint, producing low signal-to-noise (S/N) data. This stymies traditional stellar population methods that rely on high S/N features in galaxy spectra to recover key galaxy properties like the star formation history (SFH). In response to this challenge, we have developed a method of stellar population modeling using diffusion k-means (DFK) that can recover SFHs from rest frame optical data with S/N ~ 5 Å^-1. Specifically, we use DFK to cultivate a reduced stellar population basis set. This DFK basis set of four broad age bins is able to recover a range of SFHs. With an analytic description of the seeing, we can use this DFK basis set to simultaneously model the SFHs and the intervening quasar light of QHGs. We compare the results of this method with previous techniques using synthetic data and find that our new method has a clear advantage in recovering SFHs from QHGs: on average, the DFK basis set is just as accurate and decisively more precise. This new technique could be used to analyze other low S/N galaxy spectra, like those from higher redshift or integral field spectroscopy surveys. This material is based upon work supported by the National Science Foundation under grant no. DGE-0718123 and the Advanced Opportunity fellowship program at the University of Wisconsin-Madison. This research was performed using the computer resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences.
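
    A rough sketch of the DFK idea follows: embed a library of model spectra with a diffusion-map-like spectral embedding, cluster with k-means into a few broad age bins, and take cluster means as a reduced basis. The synthetic "spectra" below are placeholders, not real stellar population models, and sklearn's SpectralEmbedding stands in for a true diffusion map.

      import numpy as np
      from sklearn.manifold import SpectralEmbedding
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      ages = rng.uniform(0.0, 1.0, 200)                 # stand-in for log age
      wave = np.linspace(0.0, 1.0, 50)
      spectra = np.exp(-np.outer(ages, wave))           # toy age-dependent shapes
      spectra += 0.01 * rng.standard_normal(spectra.shape)

      coords = SpectralEmbedding(n_components=3).fit_transform(spectra)
      labels = KMeans(n_clusters=4, n_init=10).fit_predict(coords)
      basis = np.array([spectra[labels == k].mean(axis=0) for k in range(4)])
      print(basis.shape)                                # 4 broad-bin basis spectra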

  18. Multiphase groundwater flow near cooling plutons

    USGS Publications Warehouse

    Hayba, D.O.; Ingebritsen, S.E.

    1997-01-01

    We investigate groundwater flow near cooling plutons with a computer program that can model multiphase flow, temperatures up to 1200 °C, thermal pressurization, and temperature-dependent rock properties. A series of experiments examines the effects of host-rock permeability, size and depth of pluton emplacement, single versus multiple intrusions, the influence of a caprock, and the impact of topographically driven groundwater flow. We also reproduce and evaluate some of the pioneering numerical experiments on flow around plutons. Host-rock permeability is the principal factor influencing fluid circulation and heat transfer in hydrothermal systems. The hottest and most steam-rich systems develop where permeability is of the order of 10^-15 m^2. Temperatures and life spans of systems decrease with increasing permeability. Conduction-dominated systems, in which permeabilities are ≤10^-16 m^2, persist longer but exhibit relatively modest increases in near-surface temperatures relative to ambient conditions. Pluton size, emplacement depth, and initial thermal conditions have less influence on hydrothermal circulation patterns but affect the extent of boiling and duration of hydrothermal systems. Topographically driven groundwater flow can significantly alter hydrothermal circulation; however, a low-permeability caprock effectively decouples the topographically and density-driven systems and stabilizes the mixing interface between them, thereby defining a likely ore-forming environment.

  19. Vertical-angle control system in the LLMC

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Yang, Lei; Tie, Qiongxian; Mao, Wei

    2000-10-01

    A control system for the vertical-angle transmission used in the Lower Latitude Meridian Circle (LLMC) is described in this paper. The transmission system can change the zenith distance of the tube quickly and precisely. It works in three modes: fast motion, slow motion, and lock. In the fast- and slow-motion modes, the tube of the instrument is driven by a fast-motion or a slow-motion stepper motor, respectively; in lock mode, a locking mechanism is driven by a third stepper motor. These three motors are controlled together by a single-chip microcontroller, which is controlled in turn by a host personal computer. The slow-motion mechanism and its rotational step angle are discussed in detail because this mechanism has not been used before. The hardware structure of the microcontroller-based control system is then described, and the control process during a normal observation is presented in eleven steps. All the steps are programmed in our control software in C++ and/or in assembly: the C++ control program runs on the host PC, while the assembly control program runs in the microcontroller system. The structures and functions of these programs are presented, along with some programming details and techniques.

  20. Computer network environment planning and analysis

    NASA Technical Reports Server (NTRS)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under the coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. The summer study was directed toward Network Steering Committee operation and planning, plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  1. The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox

    NASA Astrophysics Data System (ADS)

    Harris, A. T., III; Goodman, J.; Justice, B.

    2014-12-01

    As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.

  2. A structural equation modeling approach for the adoption of cloud computing to enhance the Malaysian healthcare sector.

    PubMed

    Ratnam, Kalai Anand; Dominic, P D D; Ramayah, T

    2014-08-01

    The investments and costs of infrastructure, communication, medical equipment, and software within the global healthcare ecosystem show a rather significant increase, and this proliferation is expected to grow. As a result, information exchange and cross-system communication have become challenging, due to detached independent systems and subsystems that are not connected. The overall model fit, over a sample size of 320, was tested with structural equation modelling (SEM) using AMOS 20.0 as the modelling tool; SPSS 20.0 was used to analyse the descriptive statistics and dimension reliability. Results of the study show that the system-utilisation and system-impact dimensions influence the overall level of service of the healthcare providers. In addition, the findings suggest that systems integration and security play a pivotal role for IT resources in healthcare organisations. Through this study, a basis has been established for investigating the need to improve the Malaysian healthcare ecosystem and to introduce a cloud computing platform to host the national healthcare information exchange.

  3. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.

    Program summary
    Program title: Application Hosting Environment 2.0
    Catalogue identifier: AEEJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence, Version 2
    No. of lines in distributed program, including test data, etc.: not applicable
    No. of bytes in distributed program, including test data, etc.: 1 685 603 766
    Distribution format: tar.gz
    Programming language: Perl (server), Java (client)
    Computer: x86
    Operating system: Linux (server), Linux/Windows/MacOS (client)
    RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
    Classification: 6.5
    External routines: VirtualBox (server), Java (client)
    Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
    Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
    Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
    Running time: Not applicable
    References:
    [1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
    [2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  4. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. We therefore developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build the multi-user cloud computing infrastructure hosting GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework was developed for efficient parallel computing; within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools applicable to other domains involving spatial properties. We tested the performance of the platform with a taxi trajectory analysis; the results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.

  5. Effective correlator for RadioAstron project

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey

    This paper presents the implementation of a software FX correlator for Very Long Baseline Interferometry, adapted for the "RadioAstron" project. The correlator is implemented for heterogeneous computing systems using graphics accelerators, and graphics hardware is shown to be highly efficient for the interferometry task. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project there are seven such channels. Each accelerator computes the correlation matrix for all baselines for a single frequency channel. The initial data are converted to floating-point format and corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously using the sliding Fourier transform. Thanks to the match between this problem and the architecture of graphics accelerators, the performance achieved on a single Kepler-platform processor corresponds, for this task, to that of a four-node Intel computing cluster. The task scales successfully not only to a large number of graphics accelerators, but also to a large number of nodes with multiple accelerators.
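
    The FX scheme itself is compact: Fourier-transform each station's samples (F), then multiply one spectrum by the conjugate of the other and accumulate (X). The numpy sketch below demonstrates it for a single baseline and recovers an injected delay; the window length, noise levels, and 3-sample lag are illustrative choices, not RadioAstron parameters.

      import numpy as np

      def fx_correlate(a, b, nfft=1024):
          """Accumulate the cross-spectrum A(k) * conj(B(k)) segment by segment."""
          acc = np.zeros(nfft, dtype=complex)
          for s in range(0, min(len(a), len(b)) - nfft + 1, nfft):
              acc += np.fft.fft(a[s:s + nfft]) * np.conj(np.fft.fft(b[s:s + nfft]))
          return acc

      rng = np.random.default_rng(2)
      sky = rng.standard_normal(8192)                        # common "sky" signal
      x = sky + 0.1 * rng.standard_normal(8192)              # station 1
      y = np.roll(sky, 3) + 0.1 * rng.standard_normal(8192)  # station 2, delayed 3 samples
      lag = np.argmax(np.abs(np.fft.ifft(fx_correlate(y, x))))
      print("recovered delay:", lag)                         # expect 3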

  6. Dynamic computer model for the metallogenesis and tectonics of the Circum-North Pacific

    USGS Publications Warehouse

    Scotese, Christopher R.; Nokleberg, Warren J.; Monger, James W.H.; Norton, Ian O.; Parfenov, Leonid M.; Khanchuk, Alexander I.; Bundtzen, Thomas K.; Dawson, Kenneth M.; Eremin, Roman A.; Frolov, Yuri F.; Fujita, Kazuya; Goryachev, Nikolai A.; Pozdeev, Anany I.; Ratkin, Vladimir V.; Rodinov, Sergey M.; Rozenblum, Ilya S.; Scholl, David W.; Shpikerman, Vladimir I.; Sidorov, Anatoly A.; Stone, David B.

    2001-01-01

    The digital files on this report consist of a dynamic computer model of the metallogenesis and tectonics of the Circum-North Pacific, and background articles, figures, and maps. The tectonic part of the dynamic computer model is derived from a major analysis of the tectonic evolution of the Circum-North Pacific which is also contained in directory tectevol. The dynamic computer model and associated materials on this CD-ROM are part of a project on the major mineral deposits, metallogenesis, and tectonics of the Russian Far East, Alaska, and the Canadian Cordillera. The project provides critical information on bedrock geology and geophysics, tectonics, major metalliferous mineral resources, metallogenic patterns, and crustal origin and evolution of mineralizing systems for this region. The major scientific goals and benefits of the project are to: (1) provide a comprehensive international data base on the mineral resources of the region that is the first, extensive knowledge available in English; (2) provide major new interpretations of the origin and crustal evolution of mineralizing systems and their host rocks, thereby enabling enhanced, broad-scale tectonic reconstructions and interpretations; and (3) promote trade and scientific and technical exchanges between North America and Eastern Asia.

  7. AVTC Hosts TechnoCamp

    ERIC Educational Resources Information Center

    Miner, Brenda

    2006-01-01

    The Area Vo-Tech Center (AVTC) in Russellville, Arkansas, recently hosted its first TechnoCamp to encourage enrollment based on the aptitude and interest level of the students enrolling in the various programs. The center currently offers student enrollment in auto technology, computer engineering, cosmetology, construction technology, drafting…

  8. Extinction by a Homogeneous Spherical Particle in an Absorbing Medium

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Videen, Gorden; Yang, Ping

    2017-01-01

    We use a recent computer implementation of the first principles theory of electromagnetic scattering to compute far-field extinction by a spherical particle embedded in an absorbing unbounded host. Our results show that the suppressing effect of increasing absorption inside the host medium on the ripple structure of the extinction efficiency factor as a function of the size parameter is similar to the well-known effect of increasing absorption inside a particle embedded in a nonabsorbing host. However, the accompanying effects on the interference structure of the extinction efficiency curves are diametrically opposite. As a result, sufficiently large absorption inside the host medium can cause negative particulate extinction. We offer a simple physical explanation of the phenomenon of negative extinction consistent with the interpretation of the interference structure as being the result of interference of the field transmitted by the particle and the diffracted field due to an incomplete wave front resulting from the blockage of the incident plane wave by the particle's geometrical projection.

  9. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  10. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities in different locations run their computing jobs on dedicated infrastructures without needing to worry about software, hardware, or even the site where their programs are executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an operating system, because either their software needs a specific version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software-incompatible communities, it has to split its physical resources among them. This splitting inevitably leads to underused resources, because the data center is bound to have periods when one or more of its subclusters are idle. It is in this situation that cloud computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite, running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors, and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities do not need to care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user.

  11. A combined calorimetric and computational study of the energetics of rare earth substituted UO2 systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Solomon, Jonathan M.; Asta, Mark

    2015-09-01

    The energetics of rare earth substituted UO2 solid solutions (U1-xLnxO2-0.5x+y, where Ln = La, Y, and Nd) are investigated employing a combination of calorimetric measurements and density functional theory based computations. Calculated and measured formation enthalpies agree within 10 kJ/mol for stoichiometric oxygen/metal compositions. To better understand the factors governing the stability and defect binding in rare earth substituted urania solid solutions, systematic trends in the energetics are investigated based on the present results and previous computational and experimental thermochemical studies of rare earth substituted fluorite oxides (A1-xLnxO2-0.5x, where A = Hf, Zr, Ce, and Th). A consistent trend towards increased energetic stability with larger size mismatch between the smaller host tetravalent cation and the larger rare earth trivalent cation is found for both actinide and non-actinide fluorite oxide systems where aliovalent substitution of Ln cations is compensated by oxygen vacancies. However, the large exothermic oxidation enthalpy in the UO2 based systems favors oxygen rich compositions where charge compensation occurs through the formation of uranium cations with higher oxidation states.

  12. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set relative to allocated memory is critical to performance. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  13. An attempt at the computer-aided management of HIV infection

    NASA Astrophysics Data System (ADS)

    Ida, A.; Oharu, Y.; Sankey, O.

    2007-07-01

    The immune system is a complex and diverse system in the human body, and the HIV virus disrupts and destroys it through an extremely complicated but surprisingly logical process. The purpose of this paper is to present a method for the computer-aided management of the HIV infection process by means of a mathematical model describing the dynamics of the host-pathogen interaction with HIV-1. Treatments for AIDS must be changed to more efficient ones in accordance with the disease progression and the status of the immune system; the level of progression and the status are represented by parameters governed by our mathematical model. The model is shown to be numerically stable and uniquely solvable. With this knowledge, our mathematical model for HIV disease progression is formulated and physiological interpretations are provided. The results of our numerical simulations are visualized, and they agree with medical observations from the point of view of antiretroviral therapy. We expect this approach to be applied to practical clinical issues and to the computer-aided management of antiretroviral therapies.
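
    For orientation, the sketch below integrates the standard three-compartment host-pathogen model of HIV dynamics (target cells T, infected cells I, free virus V). It is a textbook-style stand-in for the paper's model, and all parameter values are hypothetical.

      from scipy.integrate import solve_ivp

      def hiv(t, y, lam=10.0, d=0.01, beta=5e-4, delta=0.7, p=100.0, c=13.0):
          T, I, V = y                           # target cells, infected cells, virus
          dT = lam - d * T - beta * T * V       # supply, natural death, infection
          dI = beta * T * V - delta * I         # new infections, infected-cell death
          dV = p * I - c * V                    # virion production and clearance
          return [dT, dI, dV]

      sol = solve_ivp(hiv, (0.0, 200.0), [1000.0, 0.0, 1e-3], max_step=0.1)
      print("viral load at day 200:", sol.y[2, -1])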

  14. Contact dynamics math model

    NASA Technical Reports Server (NTRS)

    Glaese, John R.; Tobbe, Patrick A.

    1986-01-01

    The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware in the loop results.
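
    The sensor-to-center-of-mass transfer mentioned above is a rigid-body identity: the force is unchanged and the moment picks up the cross product of the lever arm with the force. The numpy sketch below shows it; the vectors are illustrative values, not test-bed data.

      import numpy as np

      def to_center_of_mass(force, moment_at_sensor, r_sensor_from_cm):
          """M_cm = M_sensor + r x F, with r from the center of mass to the sensor."""
          return force, moment_at_sensor + np.cross(r_sensor_from_cm, force)

      F = np.array([10.0, 0.0, -4.0])       # N, measured contact force
      M = np.array([0.2, -1.0, 0.0])        # N*m, moment at the sensor
      r = np.array([0.0, 0.5, 1.2])         # m, sensor position w.r.t. the CM
      print(to_center_of_mass(F, M, r))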

  15. Naver: a PC-cluster-based VR system

    NASA Astrophysics Data System (ADS)

    Park, ChangHoon; Ko, HeeDong; Kim, TaiYun

    2003-04-01

    In this paper, we present a new framework, NAVER, for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. The goal of NAVER is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of 3D virtual space and external modules. External modules are various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. The Device Server contains external modules requiring event-based communication for the integration, while the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.

  16. Quantum computing with defects.

    PubMed

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  17. Using PVM to host CLIPS in distributed environments

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Pohl, Kym

    1994-01-01

    It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed-computing utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message-passing functions in CLIPS and enable the full range of PVM facilities.
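
    The essential primitives such a design wraps for CLIPS are PVM-style tagged sends and blocking receives between cooperating tasks. PVM itself is a C/FORTRAN library; the sketch below only mimics the shape of those primitives in Python with multiprocessing queues, as an illustration of the message-passing pattern rather than the actual PVM interface:

        import multiprocessing as mp

        def expert_system(inbox, outbox):
            """Stand-in for a remote CLIPS task: receive a tagged message,
            act on it, and reply."""
            tag, payload = inbox.get()           # blocking receive (pvm_recv-like)
            if tag == "assert-fact":
                outbox.put(("fact-asserted", payload.upper()))

        if __name__ == "__main__":
            to_task, from_task = mp.Queue(), mp.Queue()
            task = mp.Process(target=expert_system, args=(to_task, from_task))
            task.start()
            to_task.put(("assert-fact", "sensor reading"))  # tagged send (pvm_send-like)
            print(from_task.get())               # ('fact-asserted', 'SENSOR READING')
            task.join()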

  18. Toward tailoring Majorana bound states in artificially constructed magnetic atom chains on elemental superconductors

    PubMed Central

    Thorwart, Michael

    2018-01-01

    Realizing Majorana bound states (MBS) in condensed matter systems is a key challenge on the way toward topological quantum computing. As a promising platform, one-dimensional magnetic chains on conventional superconductors were theoretically predicted to host MBS at the chain ends. We demonstrate a novel approach to the design of model-type atomic-scale systems for studying MBS using single-atom manipulation techniques. Our artificially constructed atomic Fe chains on a Re surface exhibit spin spiral states and a remarkable enhancement of the local density of states at zero energy that is strongly localized at the chain ends. Moreover, the zero-energy modes at the chain ends are shown to emerge and become stabilized with increasing chain length. Tight-binding model calculations based on parameters obtained from ab initio calculations corroborate that the system resides in the topological phase. Our work opens new pathways to the design of MBS in atomic-scale hybrid structures as a basis for fault-tolerant topological quantum computing. PMID:29756034
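
    The hallmark described here, zero-energy modes pinned to the chain ends in the topological phase, can be reproduced with the simplest relevant tight-binding model, the Kitaev chain. The sketch below is a generic illustration of that physics, not the authors' ab-initio-parameterized model:

        import numpy as np

        def kitaev_bdg(n, mu=0.0, t=1.0, delta=1.0):
            """BdG Hamiltonian of an n-site Kitaev chain (particle block
            first, hole block second)."""
            H = np.zeros((2 * n, 2 * n))
            for i in range(n):
                H[i, i] = -mu
                H[n + i, n + i] = mu
            for i in range(n - 1):
                H[i, i + 1] = H[i + 1, i] = -t              # hopping
                H[n + i, n + i + 1] = H[n + i + 1, n + i] = t
                H[i, n + i + 1] = H[n + i + 1, i] = delta   # p-wave pairing
                H[i + 1, n + i] = H[n + i, i + 1] = -delta
            return H

        energies = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(60))))
        print(energies[:2])   # two near-zero end modes for |mu| < 2t (topological)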

  19. Toward tailoring Majorana bound states in artificially constructed magnetic atom chains on elemental superconductors.

    PubMed

    Kim, Howon; Palacio-Morales, Alexandra; Posske, Thore; Rózsa, Levente; Palotás, Krisztián; Szunyogh, László; Thorwart, Michael; Wiesendanger, Roland

    2018-05-01

    Realizing Majorana bound states (MBS) in condensed matter systems is a key challenge on the way toward topological quantum computing. As a promising platform, one-dimensional magnetic chains on conventional superconductors were theoretically predicted to host MBS at the chain ends. We demonstrate a novel approach to the design of model-type atomic-scale systems for studying MBS using single-atom manipulation techniques. Our artificially constructed atomic Fe chains on a Re surface exhibit spin spiral states and a remarkable enhancement of the local density of states at zero energy that is strongly localized at the chain ends. Moreover, the zero-energy modes at the chain ends are shown to emerge and become stabilized with increasing chain length. Tight-binding model calculations based on parameters obtained from ab initio calculations corroborate that the system resides in the topological phase. Our work opens new pathways to the design of MBS in atomic-scale hybrid structures as a basis for fault-tolerant topological quantum computing.

  20. FPGA-based real-time swept-source OCT systems for B-scan live-streaming or volumetric imaging

    NASA Astrophysics Data System (ADS)

    Bandi, Vinzenz; Goette, Josef; Jacomet, Marcel; von Niederhäusern, Tim; Bachmann, Adrian H.; Duelk, Marcus

    2013-03-01

    We have developed a Swept-Source Optical Coherence Tomography (Ss-OCT) system with high-speed, real-time signal processing on a commercially available Data-Acquisition (DAQ) board with a Field-Programmable Gate Array (FPGA). The Ss-OCT system simultaneously acquires OCT and k-clock reference signals at 500 MS/s. From the k-clock signal of each A-scan we extract a remap vector for the k-space linearization of the OCT signal. The linear but oversampled interpolation is followed by a 2048-point FFT, additional auxiliary computations, and a data transfer to a host computer for real-time live-streaming of B-scan or volumetric C-scan OCT visualization. We achieve a 100 kHz A-scan rate by parallelization of our hardware algorithms, which run on standard and affordable, commercially available DAQ boards. Our main development tool for signal analysis as well as for hardware synthesis is MATLAB® with add-on toolboxes and 3rd-party tools.
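
    The processing chain described (k-space linearization from a k-clock-derived remap vector, followed by a 2048-point FFT to form an A-scan) can be prototyped offline in a few lines. A sketch with hypothetical inputs; the FPGA pipeline itself performs these steps in hardware arithmetic rather than NumPy:

        import numpy as np

        def a_scan(fringe, k_samples, n_fft=2048):
            """Resample a swept-source fringe onto a uniform wavenumber grid
            and FFT it into a depth profile."""
            k_uniform = np.linspace(k_samples[0], k_samples[-1], fringe.size)
            linearized = np.interp(k_uniform, k_samples, fringe)
            windowed = linearized * np.hanning(linearized.size)
            return np.abs(np.fft.rfft(windowed, n_fft))

        # Hypothetical sweep: slightly nonlinear k-axis, single reflector
        k = np.linspace(1.0, 1.1, 1024) + 1e-4 * np.sin(np.linspace(0, 3, 1024))
        fringe = np.cos(2 * np.pi * 800 * k)
        depth_profile = a_scan(fringe, k)
        print(depth_profile.argmax())   # depth bin of the reflector peak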

  1. Image authentication by means of fragile CGH watermarking

    NASA Astrophysics Data System (ADS)

    Schirripa Spagnolo, Giuseppe; Simonetti, Carla; Cozzella, Lorenzo

    2005-09-01

    In this paper we propose a fragile marking system based on Computer Generated Hologram coding techniques, which is able to detect malicious tampering while tolerating some incidental distortions. A fragile watermark is a mark that is readily altered or destroyed when the host image is modified through a linear or nonlinear transformation. A fragile watermark monitors the integrity of the content of the image, but not its numerical representation; the watermark is therefore designed so that integrity is proven if the content of the image has not been tampered with. Since digital images can be altered or manipulated with ease, the ability to detect changes to digital images is very important for many applications, such as news reporting, medical archiving, or legal usage. The proposed technique can be applied to color images as well as to gray-scale ones. Using Computer Generated Hologram watermarking, the embedded mark can easily be recovered by means of a Fourier transform; as a consequence, a host image could be tampered with and re-watermarked with the same holographic pattern. To avoid this possibility, we have introduced an encryption method using asymmetric cryptography. The proposed schema is based on the knowledge of original mark from the Authentication
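
    A toy numerical illustration of the recovery property: a hologram-like carrier is synthesized by placing a mark off-axis in the Fourier plane, added weakly to a host image, and recovered from the magnitude spectrum. This is only a sketch of the principle; the authors' scheme additionally protects the mark with asymmetric encryption:

        import numpy as np

        N = 256
        rng = np.random.default_rng(0)
        host = rng.uniform(0.0, 1.0, (N, N))        # stand-in for the host image

        # Mark placed off-axis in the Fourier plane
        spectrum = np.zeros((N, N), dtype=complex)
        spectrum[16:48, 16:48] = 1.0                # 32x32 square "mark"

        # CGH-like real carrier, normalized to unit RMS, weakly embedded
        hologram = np.fft.ifft2(spectrum).real
        hologram /= hologram.std()
        marked = host + 0.1 * hologram

        # Recovery: the mark (plus its mirror image) reappears off-axis
        recovered = np.abs(np.fft.fft2(marked - marked.mean()))
        print(recovered[16:48, 16:48].mean() / recovered.mean())  # > 1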

  2. Strategies, Challenges and Prospects for Active Learning in the Computer-Based Classroom

    ERIC Educational Resources Information Center

    Holbert, K. E.; Karady, G. G.

    2009-01-01

    The introduction of computer-equipped classrooms into engineering education has brought with it a host of opportunities and issues. Herein, some of the challenges and successes for creating an environment for active learning within computer-based classrooms are described. The particular teaching approach developed for undergraduate electrical…

  3. KSC-2012-6402

    NASA Image and Video Library

    2012-11-16

    CAPE CANAVERAL, Fla. – Firing Room 1, also known as the Young-Crippen Firing Room, has been outfitted with computer, communications and networking systems to host rockets and spacecraft that are currently under development. The firing room is where launches of rockets and spacecraft are controlled at NASA's Kennedy Space Center in Florida. Flight controllers also monitor processing and preparations of launch vehicles from the firing room. There are four firing rooms inside the Launch Control Center at Kennedy. Photo credit: NASA/Dmitri Gerondidakis

  4. KSC-2012-6401

    NASA Image and Video Library

    2012-11-16

    CAPE CANAVERAL, Fla. – Firing Room 1, also known as the Young-Crippen Firing Room, has been outfitted with computer, communications and networking systems to host rockets and spacecraft that are currently under development. The firing room is where launches of rockets and spacecraft are controlled at NASA's Kennedy Space Center in Florida. Flight controllers also monitor processing and preparations of launch vehicles from the firing room. There are four firing rooms inside the Launch Control Center at Kennedy. Photo credit: NASA/Dmitri Gerondidakis

  5. An Internet of Things Approach to Electrical Power Monitoring and Outage Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Daniel B

    The so-called Internet of Things concept has captured much attention recently as ordinary devices are connected to the Internet for monitoring and control purposes. One enabling technology is the proliferation of low-cost, single board computers with built-in network interfaces. Some of these are capable of hosting full-fledged operating systems that provide rich programming environments. Taken together, these features enable inexpensive solutions for even traditional tasks such as the one presented here for electrical power monitoring and outage reporting.

  6. Catalog of Resources for Education in Ada (Trade Name) and Software Engineering (CREASE). Version 4.0.

    DTIC Science & Technology

    1986-05-01

    ...offering the course is a company. Name and address of offeror: Tachyon Corporation, 2725 Congress Street, Suite 2H, San Diego, CA 92110. Offeror's background: Tachyon Corporation specializes in Ada software quality assurance, computer-hosted instruction and information retrieval systems, and authoring tools... easy to use (on-line help) and can look up or search for terms. Course offerings: 2.2 Lecture/Seminar Courses; 2.2.1 Company...

  7. Ada Compiler Validation Summary Report: Certificate Number: 901129W1. 11096 Verdix Corporation, VADS Sequent Balance DYNIX 3.0, VAda-110-2323, Version 6.0, Sequent Balance 8000, DYNIX Version 3.0 (Host & Target).

    DTIC Science & Technology

    1991-08-01

    Ada Compiler Validation Summary Report. Certificate Number: 901129W1.11096. Verdix Corporation, VADS Sequent Balance DYNIX 3.0, VAda-110-2323, Version 6.0. Host computer system: Sequent Balance 8000, DYNIX Version 3.0. Target computer system: Sequent Balance 8000, DYNIX Version 3.0. Customer agreement number: 90-09-25-VRX. See Section 3.1 for any...

  8. Program For A Pushbutton Display

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Luck, William S., Jr.

    1989-01-01

    The Programmable Display Pushbutton (PDP) is a pushbutton device available from Micro Switch with a programmable 16x35 matrix of light-emitting diodes on the pushbutton surface. Any desired legend can be displayed on the PDPs, producing user-friendly applications and reducing the need for dedicated manual controls. The PDP interacts with the operator, calling for the correct response before transmitting the next message. It serves as both a simple manual control and a sophisticated programmable link between the operator and the host system. The Programmable Display Pushbutton Legend Editor (PDPE) computer program is used to create light-emitting-diode (LED) displays for the pushbuttons. It is written in FORTRAN.

  9. HAL/S - The programming language for Shuttle

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1974-01-01

    HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.

  10. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities: (1) removal of spurious vector data; (2) filtering, smoothing, and interpolation of PIV data; and (3) calculation of out-of-plane vorticity, ensemble statistics, and turbulence statistics. The software runs on an IBM PC/AT host computer under either the Microsoft Windows 3.1 or Windows 95 operating system.
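
    The two core post-processing steps, spurious-vector detection and out-of-plane vorticity, reduce to a few array operations on a regular PIV grid. A minimal sketch using a standard local-median test and finite differences; this mirrors the kind of processing described, not the package's exact algorithms:

        import numpy as np
        from scipy.ndimage import median_filter

        def spurious_mask(u, v, tol=2.0):
            """Flag vectors that deviate strongly from their 3x3 local median."""
            bad = np.zeros(u.shape, dtype=bool)
            for comp in (u, v):
                med = median_filter(comp, size=3)
                bad |= np.abs(comp - med) > tol * (np.abs(med) + 1e-9)
            return bad

        def vorticity_z(u, v, dx, dy):
            """Out-of-plane vorticity w_z = dv/dx - du/dy."""
            return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)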

  11. A data base processor semantics specification package

    NASA Technical Reports Server (NTRS)

    Fishwick, P. A.

    1983-01-01

    A Semantics Specification Package (DBPSSP) for the Intel Data Base Processor (DBP) is defined. DBPSSP serves as a collection of cross assembly tools that allow the analyst to assemble request blocks on the host computer for passage to the DBP. The assembly tools discussed in this report may be effectively used in conjunction with a DBP compatible data communications protocol to form a query processor, precompiler, or file management system for the database processor. The source modules representing the components of DBPSSP are fully commented and included.

  12. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the effect of the number of processed frames, the frames' quality, the atmospheric conditions and the detection noise on the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Junghyun; Gangwon, Jo; Jaehoon, Jung

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.

  14. Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    NASA Technical Reports Server (NTRS)

    Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan

    2010-01-01

    The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by its respective domain experts.

  15. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized, high-performance scientific CCD camera with dual-speed readout at 1 x 10^6 or 5 x 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 x 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.

  16. A microprocessor-based control system for the Vienna PDS microdensitometer

    NASA Technical Reports Server (NTRS)

    Jenkner, H.; Stoll, M.; Hron, J.

    1984-01-01

    The Motorola Exorset 30 system, based on a Motorola 6809 microprocessor, which serves as the control processor for the microdensitometer, is presented. User communication and instrument control are implemented in this system; data transmission to a host computer is provided via standard interfaces. The Vienna PDS system (VIPS) software was developed in BASIC and M6809 assembler. It provides efficient user interaction via function keys and argument input in a menu-oriented environment. All parameters can be stored on, and retrieved from, minifloppy disks, making it possible to set up large scanning tasks. Extensive user information includes continuously updated status and coordinate displays, as well as a real-time graphic display during scanning.

  17. Conceptual model analysis of interaction at a concrete-Boom Clay interface

    NASA Astrophysics Data System (ADS)

    Liu, Sanheng; Jacques, Diederik; Govaerts, Joan; Wang, Lian

    In many concepts for deep disposal of high-level radioactive waste, cementitious materials are used in the engineered barriers. For example, in Belgium the engineered barrier system is based on a considerable amount of cementitious materials as buffer and backfill in the so-called supercontainer, embedded in the hosting geological formation. A potential hosting formation is Boom Clay. Insight into the interaction between the high-pH pore water of the cementitious materials and the neutral-pH Boom Clay pore water is therefore required. Two problems are common in modeling such a system. The first is the computational cost, owing to the long timescales envisaged in assessments of the deep disposal system and the very fine grid (sub-millimeter), especially at interfaces, that must be used to accurately predict the evolution of the system. The second is whether to use equilibrium or kinetic reaction models. The objectives of this paper are twofold. First, we develop an efficient coupled reactive transport code for this diffusion-dominated system by making full use of multi-processor/multi-core computers. Second, we investigate how sensitive the system is to the choice of chemical reaction model, especially when pore clogging due to mineral precipitation is considered within the cementitious system. To do this, we selected two portlandite dissolution models: an equilibrium model (fastest) and a diffusion-controlled (shrinking-core) model with precipitation of a calcite layer around portlandite particles. The results show that with the shrinking-core model, portlandite dissolution and calcite precipitation are much slower than with the equilibrium model. Diffusion-controlled dissolution also smooths out dissolution fronts compared to the equilibrium model. However, only a slight difference with respect to the clogging time is found, even when a very small diffusion coefficient (10^-20 m^2/s) is used in the precipitated calcite layer.

  18. A report of work activities on the NASA Spacelink public electronic library

    NASA Technical Reports Server (NTRS)

    Smith, Willard A.

    1994-01-01

    NASA Spacelink is a comprehensive electronic data base of educational and informational materials from NASA and other sources. This service originates at Marshall Space Flight Center (MSFC) in Huntsville, Alabama. This is an education service of NASA Headquarters, through the MSFC Education Office, that first began in February of 1988. The new NASA Spacelink Public Electronic Library was the result of a study conducted to investigate an upgrade or redesign of the original NASA Spacelink. The UNIX Operating System was chosen to be the host operating system for the new NASA Spacelink Public Electronic Library. The UNIX system was selected for this project because of the strengths built into the embedded communication system and for its simple and direct file handling capabilities. The host hardware of the new system is a Sun Microsystems SPARCserver 1000 computer system. The configuration has four 50-MHz SuperSPARC processors with 128 megabytes of shared memory; three SB800 serial ports allowing 24 cable links for phone communications; 4.1 gigabytes of on-line disk storage; and ten (10) CD-ROM drives. Communications devices on the system are sufficient to support the expected number of users through the Internet, local dial services, long-distance dial services, the MSFC PABX, the NPSS (NASA Packet Switching System), and 1-800 access service for registered teachers.

  19. Optimizing eukaryotic cell hosts for protein production through systems biotechnology and genome-scale modeling.

    PubMed

    Gutierrez, Jahir M; Lewis, Nathan E

    2015-07-01

    Eukaryotic cell lines, including Chinese hamster ovary cells, yeast, and insect cells, are invaluable hosts for the production of many recombinant proteins. With the advent of genomic resources, one can now leverage genome-scale computational modeling of cellular pathways to rationally engineer eukaryotic host cells. Genome-scale models of metabolism include all known biochemical reactions occurring in a specific cell. By describing these mathematically and using tools such as flux balance analysis, the models can simulate cell physiology and provide targets for cell engineering that could lead to enhanced cell viability, titer, and productivity. Here we review examples in which metabolic models in eukaryotic cell cultures have been used to rationally select targets for genetic modification, improve cellular metabolic capabilities, design media supplementation, and interpret high-throughput omics data. As more comprehensive models of metabolism and other cellular processes are developed for eukaryotic cell culture, these will enable further exciting developments in cell line engineering, thus accelerating recombinant protein production and biotechnology in the years to come. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
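
    Flux balance analysis, mentioned above, reduces to a linear program: maximize a flux objective subject to the steady-state constraint S·v = 0 and flux bounds. A toy three-reaction network as a sketch (real genome-scale models have thousands of reactions and dedicated tools such as the COBRA ecosystem):

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: uptake (v0) -> conversion (v1) -> biomass (v2)
        S = np.array([[1, -1,  0],    # metabolite A balance
                      [0,  1, -1]])   # metabolite B balance
        c = np.array([0.0, 0.0, -1.0])            # maximize v2 (linprog minimizes)
        bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print(res.x)   # [10, 10, 10]: all flux routed to biomass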

  20. 7th Annual Systems Biology Symposium: Systems Biology and Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galitski, Timothy P.

    2008-04-01

    Systems biology recognizes the complex multi-scale organization of biological systems, from molecules to ecosystems. The International Symposium on Systems Biology has been hosted by the Institute for Systems Biology in Seattle, Washington, since 2002. The annual two-day event gathers the most influential researchers transforming biology into an integrative discipline investigating complex systems. Engineering and application of new technology is a central element of systems biology. Genome-scale, or very small-scale, biological questions drive the engineering of new technologies, which enable new modes of experimentation and computational analysis, leading to new biological insights and questions. Concepts and analytical methods in engineering are now finding direct applications in biology. Therefore, the 2008 Symposium, funded in partnership with the Department of Energy, featured global leaders in "Systems Biology and Engineering."

  1. TASLIMAGE System #2 Technical Equivalence Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Topper, J. D.; Stone, D. K.

    In early 2017, a second TASLIMAGE system (TASL 2) was procured from Track Analysis Systems, Ltd. The new device is intended to complement the first system (TASL 1) and to provide redundancy to the original system, which was acquired in 2009. The new system functions in essentially the same way as the earlier system, though with different X-Y stage hardware and a USB link from the camera to the host computer, both of which contribute to a reduction in CR-39 foil imaging time. The camera and image analysis software are identical between the two systems. Neutron dose calculations are performed externally and independently of the imaging system used to collect track data, relying only on the measured recoil proton track density per cm^2 for a set of known-dose CR-39 foils processed in each etch.

  2. A Scalable, Out-of-Band Diagnostics Architecture for International Space Station Systems Support

    NASA Technical Reports Server (NTRS)

    Fletcher, Daryl P.; Alena, Rick; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The computational infrastructure of the International Space Station (ISS) is a dynamic system that supports multiple vehicle subsystems such as Caution and Warning, Electrical Power Systems and Command and Data Handling (C&DH), as well as scientific payloads of varying size and complexity. The dynamic nature of the ISS configuration coupled with the increased demand for payload support places a significant burden on the inherently resource-constrained computational infrastructure of the ISS. Onboard system diagnostics applications are hosted on computers that are elements of the avionics network, while ground-based diagnostic applications receive only a subset of available telemetry, down-linked via S-band communications. In this paper we propose a scalable, out-of-band diagnostics architecture for ISS systems support that uses a read-only connection for C&DH data acquisition, which provides a lower cost of deployment and maintenance (versus a higher-criticality read/write connection). The diagnostics processing burden is off-loaded from the avionics network to elements of the on-board LAN that have a lower overall cost of operation and increased computational capacity. A superset of diagnostic data, richer in content than the configured telemetry, is made available to Advanced Diagnostic System (ADS) clients running on wireless handheld devices, affording the crew greater mobility for troubleshooting and providing improved insight into vehicle state. The superset of diagnostic data is made available to the ground in near real-time via an out-of-band downlink, providing a high level of fidelity between vehicle state and test, training and operational facilities on the ground.

  3. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  4. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  5. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher-order data products, and user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high-performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute-intensive workloads like parallel computation of hydrologic routing on high-resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community comes new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT-hosted data.

  6. Ada Compiler Validation Summary Report: Certificate Number: 940305W1. 11335 TLD Systems, Ltd. TLD Comanche VAX/i960 Ada Compiler System, Version 4.1.1 VAX Cluster under VMS 5.5 = Tronix JIAWG Execution Vehicle (i960MX) under TLD Real Time Executive, Version 4.1.1

    DTIC Science & Technology

    1994-03-14

    Comanche VAX/i960 Ada Compiler System, Version 4.1.1. Host computer system: Digital Local Area Network VAX Cluster executing on (2) MicroVAX 3100 Model 90... Macro parameters: $MAX_DIGITS 15, $MAX_INT 2147483647, $MAX_INT_PLUS_1 2147483648, $MIN_INT -2147483648, $NAME NO_SUCH_INTEGER_TYPE... Nested generics are supported and generics defined in library units are permitted; it is not possible to perform a macro instantiation for a generic...

  7. Design of cold chain logistics remote monitoring system based on ZigBee and GPS location

    NASA Astrophysics Data System (ADS)

    Zong, Xiaoping; Shao, Heling

    2017-03-01

    This paper presents the design of a remote monitoring system based on a ZigBee wireless sensor network and GPS positioning, tailored to the characteristics of cold chain logistics. The system consists of the ZigBee network, a gateway and a monitoring center. ZigBee network temperature acquisition modules and a GPS positioning acquisition module are responsible for data collection; the data are then sent to the host computer through the GPRS network and the Internet to realize remote vehicle monitoring, with functions including login permissions, temperature display, latitude and longitude display, historical data and real-time alarms. Experiments showed that the system is stable, reliable and effective for real-time remote monitoring of vehicles in the process of cold chain transport.
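
    On the monitoring-center side, the core loop is simply receiving records relayed over GPRS/Internet and raising an alarm when a reading exceeds a limit. A minimal sketch; the record format (vehicle, temperature, latitude, longitude), the port, and the threshold are hypothetical, not the paper's protocol:

        import socket

        LIMIT_C = 8.0    # hypothetical cold-chain temperature limit

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", 9000))
        srv.listen(1)
        conn, _ = srv.accept()
        buf = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            buf += chunk
            while b"\n" in buf:                  # one CSV record per line
                line, buf = buf.split(b"\n", 1)
                vehicle, temp, lat, lon = line.decode().split(",")
                print(f"{vehicle}: {temp} C at ({lat}, {lon})")
                if float(temp) > LIMIT_C:
                    print(f"ALARM: {vehicle} above {LIMIT_C} C")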

  8. VIOLIN: vaccine investigation and online information network.

    PubMed

    Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun

    2008-01-01

    Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.

  9. A De Novo-Assembly Based Data Analysis Pipeline for Plant Obligate Parasite Metatranscriptomic Studies

    PubMed Central

    Guo, Li; Allen, Kelly S.; Deiulio, Greg; Zhang, Yong; Madeiras, Angela M.; Wick, Robert L.; Ma, Li-Jun

    2016-01-01

    Current and emerging plant diseases caused by obligate parasitic microbes such as rusts, downy mildews, and powdery mildews threaten worldwide crop production and food safety. These obligate parasites are typically unculturable in the laboratory, posing technical challenges to characterize them at the genetic and genomic level. Here we have developed a data analysis pipeline integrating several bioinformatic software programs. This pipeline facilitates rapid gene discovery and expression analysis of a plant host and its obligate parasite simultaneously by next generation sequencing of mixed host and pathogen RNA (i.e., metatranscriptomics). We applied this pipeline to metatranscriptomic sequencing data of sweet basil (Ocimum basilicum) and its obligate downy mildew parasite Peronospora belbahrii, both lacking a sequenced genome. Even with a single data point, we were able to identify both candidate host defense genes and pathogen virulence genes that are highly expressed during infection. This demonstrates the power of this pipeline for identifying genes important in host–pathogen interactions without prior genomic information for either the plant host or the obligate biotrophic pathogen. The simplicity of this pipeline makes it accessible to researchers with limited computational skills and applicable to metatranscriptomic data analysis in a wide range of plant-obligate-parasite systems. PMID:27462318

  10. Active Galactic Nuclei, Host Star Formation, and the Far Infrared

    NASA Astrophysics Data System (ADS)

    Draper, Aden R.; Ballantyne, D. R.

    2011-05-01

    Telescopes like Herschel and the Atacama Large Millimeter/submillimeter Array (ALMA) are creating new opportunities to study sources in the far infrared (FIR), a wavelength region dominated by cold dust emission. Probing cold dust in active galaxies allows for study of the star formation history of active galactic nuclei (AGN) hosts. The FIR is also an important spectral region for observing AGN which are heavily enshrouded by dust, such as Compton-thick (CT) AGN. By using information from deep X-ray surveys and cosmic X-ray background synthesis models, we compute Cloudy photoionization simulations which are used to predict the spectral energy distribution (SED) of AGN in the FIR. Expected differential number counts of AGN and their host galaxies are calculated in the Herschel bands. The expected contribution of AGN and their hosts to the cosmic infrared background (CIRB) is also computed. Multiple star formation scenarios are investigated using a modified blackbody star formation SED. It is found that FIR observations at 350 and 500 µm are an excellent tool for determining the star formation history of AGN hosts. Additionally, the AGN contribution to the CIRB can be used to determine whether star formation in AGN hosts evolves differently than in normal galaxies. AGN and host differential number counts are dominated by CT AGN in the Herschel-SPIRE bands. Therefore, X-ray stacking of bright SPIRE sources is likely to disclose a large fraction of the CT AGN population.

  11. Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management

    DTIC Science & Technology

    2016-11-16

    ...in order for cloud computing infrastructures to be successfully deployed in real-world scenarios as tools for crisis and catastrophe management, where... Statement of the problem studied: as cloud computing becomes the dominant computational infrastructure [1] and cloud technologies make a transition to hosting... Objective 1: formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and...

  12. Evolutionary Telemetry and Command Processor (TCP) architecture

    NASA Technical Reports Server (NTRS)

    Schneider, John R.

    1992-01-01

    A low cost, modular, high performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and from low to high rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the 94 SDIO SPAS 3 mission. The architecture enhances rapid tailoring of functions thereby reducing costs and schedules developed for individual spacecraft missions.

  13. GRID INDEPENDENT FUEL CELL OPERATED SMART HOME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Mohammad S. Alam

    2003-12-07

    A fuel cell power plant, which utilizes a smart energy management and control (SEMaC) system to supply the power needs of a laboratory-based "home," has been purchased and installed. The "home" consists of two rooms, each approximately 250 sq. ft. Every appliance and power outlet is under the control of a host computer running the SEMaC software package. It is possible to override the computer in the event that an appliance or power outage requires it. Detailed analysis and simulation of the fuel-cell-operated smart home has been performed. Two journal papers have been accepted for publication and another journal paper is under review. Three theses have been completed and three additional theses are in progress.

  14. Ada Compiler Validation Summary Report: Harris Corporation, Computer System Division Harris Ada, Version 5.0, Harris HCX-2900 (Host & Target), 890627W1.10104

    DTIC Science & Technology

    1989-06-27

    Ada Compiler Validation Summary Report: Harris Corporation, Computer System Division, Harris Ada, Version 5.0, Harris HCX-2900 (host and target). Validation period: 27 June 1989 to 27 June 1990. Certificate Number: 890627W1.10104. Reference: ANSI/MIL-STD-1815A, Ada Joint Program Office (AJPO).

  15. Network Analyses in Plant Pathogens

    PubMed Central

    Botero, David; Alvarado, Camilo; Bernal, Adriana; Danies, Giovanna; Restrepo, Silvia

    2018-01-01

    Even in the age of big data in Biology, studying the connections between the biological processes and the molecular mechanisms behind them is a challenging task. Systems biology arose as a transversal discipline between biology, chemistry, computer science, mathematics, and physics to facilitate the elucidation of such connections. A scenario, where the application of systems biology constitutes a very powerful tool, is the study of interactions between hosts and pathogens using network approaches. Interactions between pathogenic bacteria and their hosts, both in agricultural and human health contexts are of great interest to researchers worldwide. Large amounts of data have been generated in the last few years within this area of research. However, studies have been relatively limited to simple interactions. This has left great amounts of data that remain to be utilized. Here, we review the main techniques in network analysis and their complementary experimental assays used to investigate bacterial-plant interactions. Other host-pathogen interactions are presented in those cases where few or no examples of plant pathogens exist. Furthermore, we present key results that have been obtained with these techniques and how these can help in the design of new strategies to control bacterial pathogens. The review comprises metabolic simulation, protein-protein interactions, regulatory control of gene expression, host-pathogen modeling, and genome evolution in bacteria. The aim of this review is to offer scientists working on plant-pathogen interactions basic concepts around network biology, as well as an array of techniques that will be useful for a better and more complete interpretation of their data. PMID:29441045

  16. Computer-Aided Molecular Design of Bis-phosphine Oxide Lanthanide Extractants

    DOE PAGES

    McCann, Billy W.; Silva, Nuwan De; Windus, Theresa L.; ...

    2016-02-17

    Computer-aided molecular design and high-throughput screening of viable host architectures can significantly reduce the efforts in the design of novel ligands for efficient extraction of rare earth elements. This paper presents a computational approach to the deliberate design of bis-phosphine oxide host architectures that are structurally organized for complexation of trivalent lanthanides. Molecule-building software, HostDesigner, was interfaced with molecular mechanics software, PCModel, providing a tool for generating and screening millions of potential R2(O)P-link-P(O)R2 ligand geometries. The molecular mechanics ranking of ligand structures is consistent with both the solution-phase free energies of complexation obtained with density functional theory and the performance of known bis-phosphine oxide extractants. For the case where the link is -CH2-, evaluation of the ligand geometry provides the first characterization of a steric origin for the ‘anomalous aryl strengthening’ effect. The design approach has identified a number of novel bis-phosphine oxide ligands that are better organized for lanthanide complexation than previously studied examples.

  17. Near Theoretical Gigabit Link Efficiency for Distributed Data Acquisition Systems

    PubMed Central

    Abu-Nimeh, Faisal T.; Choong, Woon-Seng

    2017-01-01

    Link efficiency, data integrity, and continuity for high-throughput and real-time systems are crucial. Most of these applications require specialized hardware and operating systems as well as extensive tuning in order to achieve high efficiency. Here, we present an implementation of gigabit Ethernet data streaming which can achieve 99.26% link efficiency while maintaining no packet losses. The design and implementation are built on OpenPET, an open-source data acquisition platform for nuclear medical imaging, where (a) a crate hosting multiple OpenPET detector boards uses a User Datagram Protocol over Internet Protocol (UDP/IP) Ethernet soft-core that is capable of understanding PAUSE frames to stream data out to a computer workstation; (b) the receiving computer uses Netmap to allow the processing software (i.e., user space), which is written in Python, to directly receive and manage the network card’s ring buffers, bypassing the operating system kernel’s networking stack; and (c) a multi-threaded application using synchronized queues is implemented in the processing software (Python) to free up the ring buffers as quickly as possible while preserving data integrity and flow continuity. PMID:28630948
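
    Item (c), the synchronized hand-off between the receive path and the processing path, has a simple shape: a FIFO queue drained by a worker thread preserves ordering while freeing the receive side to return buffers immediately. A Python stand-in for the idea; the actual implementation sits on Netmap ring buffers rather than in-memory lists:

        import queue
        import threading

        packets = queue.Queue(maxsize=4096)   # synchronized FIFO hand-off

        def rx_loop(frames):
            """Receive side: hand frames off as fast as possible so the
            ring-buffer slots can be recycled."""
            for f in frames:
                packets.put(f)
            packets.put(None)                 # end-of-stream sentinel

        def process_loop():
            """Processing side: FIFO order preserves data continuity."""
            while (f := packets.get()) is not None:
                pass                          # parse/store the frame here

        rx = threading.Thread(target=rx_loop, args=([b"evt1", b"evt2"],))
        px = threading.Thread(target=process_loop)
        rx.start(); px.start(); rx.join(); px.join()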

  18. Near Theoretical Gigabit Link Efficiency for Distributed Data Acquisition Systems.

    PubMed

    Abu-Nimeh, Faisal T; Choong, Woon-Seng

    2017-03-01

    Link efficiency, data integrity, and continuity for high-throughput and real-time systems are crucial. Most of these applications require specialized hardware and operating systems as well as extensive tuning in order to achieve high efficiency. Here, we present an implementation of gigabit Ethernet data streaming which can achieve 99.26% link efficiency while maintaining no packet losses. The design and implementation are built on OpenPET, an open-source data acquisition platform for nuclear medical imaging, where (a) a crate hosting multiple OpenPET detector boards uses a User Datagram Protocol over Internet Protocol (UDP/IP) Ethernet soft-core that is capable of understanding PAUSE frames to stream data out to a computer workstation; (b) the receiving computer uses Netmap to allow the processing software (i.e., user space), which is written in Python, to directly receive and manage the network card's ring buffers, bypassing the operating system kernel's networking stack; and (c) a multi-threaded application using synchronized queues is implemented in the processing software (Python) to free up the ring buffers as quickly as possible while preserving data integrity and flow continuity.

  19. The cyber threat landscape: Challenges and future research directions

    NASA Astrophysics Data System (ADS)

    Gil, Santiago; Kott, Alexander; Barabási, Albert-László

    2014-07-01

    While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security.
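
    The genetics-inspired association step amounts to building a contingency table per network service and testing whether running that service co-occurs with compromise. A sketch with hypothetical per-host observations, using Fisher's exact test as one reasonable choice of statistic (the paper's exact procedure may differ):

        import numpy as np
        from scipy.stats import fisher_exact

        # Hypothetical per-host observations: (runs_service, was_compromised)
        hosts = np.array([[1, 1], [1, 0], [0, 0], [1, 1],
                          [0, 1], [0, 0], [1, 1], [0, 0]], dtype=bool)
        service, infected = hosts[:, 0], hosts[:, 1]

        # 2x2 contingency table: service presence vs. compromise
        table = [[np.sum(service & infected),  np.sum(service & ~infected)],
                 [np.sum(~service & infected), np.sum(~service & ~infected)]]
        odds_ratio, p_value = fisher_exact(table)
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")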

  20. A genetic epidemiology approach to cyber-security.

    PubMed

    Gil, Santiago; Kott, Alexander; Barabási, Albert-László

    2014-07-16

    While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security.

  1. A genetic epidemiology approach to cyber-security

    PubMed Central

    Gil, Santiago; Kott, Alexander; Barabási, Albert-László

    2014-01-01

    While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security. PMID:25028059

  2. Analysis of an algorithm for distributed recognition and accountability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, C.; Frincke, D.A.; Goan, T. Jr.

    1993-08-01

    Computer and network systems are vulnerable to attacks. Abandoning the existing huge infrastructure of possibly-insecure computer and network systems is impossible, and replacing them with totally secure systems may not be feasible or cost effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms that "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe the ways in which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock slew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.
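
    A drastically simplified stand-in for the central correlation step is sketched below: if each report records that one (host, user) session spawned another, union-find suffices to group every alias of the same individual under one network-wide identity. The report format and the union-find shortcut are illustrative assumptions, not the DRA algorithm itself.

      # Toy correlation of session-spawning "reports" into network-wide users.
      parent = {}

      def find(x):
          parent.setdefault(x, x)
          while parent[x] != x:
              parent[x] = parent[parent[x]]     # path halving
              x = parent[x]
          return x

      def union(a, b):
          parent[find(a)] = find(b)

      reports = [                               # (source session, spawned session)
          (("hostA", "alice"), ("hostB", "al")),
          (("hostB", "al"), ("hostC", "root")),
          (("hostD", "bob"), ("hostE", "bob")),
      ]
      for src, dst in reports:
          union(src, dst)

      # All three sessions in the first chain map to one network-wide user.
      print(find(("hostA", "alice")) == find(("hostC", "root")))   # True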

  3. Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories

    ERIC Educational Resources Information Center

    Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher

    2009-01-01

    Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…

  4. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems and apply it to calibrate various epidemiological models against empirical time series, namely for influenza and dengue fever. For more complex models, such as the multi-strain dynamics that describe the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and the coexistence of multiple attractors.
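
    The authors' estimation relies on likelihood-based iterated filtering; as a far simpler illustration of calibrating an epidemic model's deterministic skeleton against a time series, the sketch below performs least-squares estimation of SIR parameters on synthetic data (all values invented).

      # Least-squares fit of SIR parameters (beta, gamma) to a noisy,
      # synthetic infected-fraction time series.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def sir(t, y, beta, gamma):
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      t_obs = np.linspace(0.0, 30.0, 31)

      def infected(params):
          beta, gamma = params
          sol = solve_ivp(sir, (0.0, 30.0), [0.99, 0.01, 0.0],
                          t_eval=t_obs, args=(beta, gamma))
          return sol.y[1]

      rng = np.random.default_rng(0)
      i_obs = infected((0.6, 0.2)) + rng.normal(0.0, 0.002, t_obs.size)

      fit = least_squares(lambda p: infected(p) - i_obs,
                          x0=[0.4, 0.1], bounds=([0.0, 0.0], [5.0, 5.0]))
      print("estimated beta, gamma:", fit.x)   # should recover ~0.6, ~0.2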

  5. Investigation of type-I interferon dysregulation by arenaviruses: a multidisciplinary approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozina, Carol L.; Moorman, Matthew Wallace; Branda, Catherine

    2011-09-01

    This report provides a detailed overview of the work performed for project number 130781, 'A Systems Biology Approach to Understanding Viral Hemorrhagic Fever Pathogenesis.' We report progress in five key areas: single cell isolation devices and control systems, fluorescent cytokine and transcription factor reporters, on-chip viral infection assays, molecular virology analysis of Arenavirus nucleoprotein structure-function, and development of computational tools to predict virus-host protein interactions. Although a great deal of work remains beyond what was begun here, we have developed several novel single cell analysis tools and knowledge of Arenavirus biology that will facilitate and inform future publications and funding proposals.

  6. Portable Microcomputer Utilization for On-Line Pulmonary Testing

    PubMed Central

    Pugh, R.; Fourre, J.; Karetzky, M.

    1981-01-01

    A host-remote pulmonary function testing system is described that is flexible, non-dedicated, inexpensive, and readily upgradable. It is applicable for laboratories considering computerization as well as for those which have converted to one of the already available but restricted systems. The remote unit has an 8 slot bus for memory, input-output boards, and an A-D converter. It has its own terminal for manual input and display of computed and measured data which is transmitted via an acoustic modem to a larger microcomputer. The program modules are written in Pascal-Z and/or the supplied Z-80 macro assembler as external procedures.

  7. Cloud-Based Computational Tools for Earth Science Applications

    NASA Astrophysics Data System (ADS)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.
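
    As an illustration of what access through such a REST interface can look like, a query might resemble the sketch below; the endpoint URL, parameters, and response layout are assumptions, not the actual Ice2Ocean API.

      # Hypothetical REST query sketch; endpoint and fields are placeholders.
      import requests

      resp = requests.get(
          "https://example.org/api/v1/runoff",      # placeholder endpoint
          params={"basin": "gulf_of_alaska", "start": "2015-01-01",
                  "end": "2015-12-31", "format": "json"},
          timeout=30,
      )
      resp.raise_for_status()
      for record in resp.json()["records"]:         # assumed response layout
          print(record["date"], record["runoff_m3s"])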

  8. PREFACE: Strongly Coupled Coulomb Systems Strongly Coupled Coulomb Systems

    NASA Astrophysics Data System (ADS)

    Neilson, David; Senatore, Gaetano

    2009-05-01

    This special issue contains papers presented at the International Conference on Strongly Coupled Coulomb Systems (SCCS), held from 29 July-2 August 2008 at the University of Camerino. Camerino is an ancient hill-top town located in the Apennine mountains of Italy, 200 kilometres northeast of Rome, with a university dating back to 1336. The Camerino conference was the 11th in a series which started in 1977: 1977: Orleans-la-Source, France, as a NATO Advanced Study Institute on Strongly Coupled Plasmas (hosted by Marc Feix and Gabor J Kalman) 1982: Les Houches, France (hosted by Marc Baus and Jean-Pierre Hansen) 1986: Santa Cruz, California, USA (hosted by Forrest J Rogers and Hugh E DeWitt) 1989: Tokyo, Japan (hosted by Setsuo Ichimaru) 1992: Rochester, New York, USA (hosted by Hugh M Van Horn and Setsuo Ichimaru) 1995: Binz, Germany (hosted by Wolf Dietrich Kraeft and Manfred Schlanges) 1997: Boston, Massachusetts, USA (hosted by Gabor J Kalman) 1999: St Malo, France (hosted by Claude Deutsch and Bernard Jancovici) 2002: Santa Fe, New Mexico, USA (hosted by John F Benage and Michael S Murillo) 2005: Moscow, Russia (hosted by Vladimir E Fortov and Vladimir Vorob'ev). The name of the series was changed in 1996 from Strongly Coupled Plasmas to Strongly Coupled Coulomb Systems to reflect a wider range of topics. 'Strongly Coupled Coulomb Systems' encompasses diverse many-body systems and physical conditions. The purpose of the conferences is to provide a regular international forum for the presentation and discussion of research achievements and ideas relating to a variety of plasma, liquid and condensed matter systems that are dominated by strong Coulomb interactions between their constituents. Each meeting has seen an evolution of topics and emphases that have followed new discoveries and new techniques. The field has continued to see new experimental tools and access to new strongly coupled conditions, most recently in the areas of warm matter, dusty plasmas, condensed matter and ultra-cold plasmas. One hundred and thirty participants came from twenty countries and four continents to participate in the conference. Those giving presentations were asked to contribute to this special issue to make a representative record of an interesting conference. We thank the International Advisory Board and the Programme Committee for their support and suggestions. We thank the Local Organizing Committee (Stefania De Palo, Vittorio Pellegrini, Andrea Perali and Pierbiagio Pieri) for all their efforts. We highlight for special mention the dedication displayed by Andrea Perali, by Rocco di Marco for computer support, and by our tireless conference secretary Fiorella Paino. The knowledgeable guided tour of the historic centre of Camerino given by Fiorella Paino was appreciated by many participants. It is no exaggeration to say that without the extraordinary efforts put in by these three, the conference could not have been the success that it was. For their sustained interest and support we thank Fulvio Esposito, Rector of the University of Camerino, Fabio Beltram, Director of NEST, Scuola Normale Superiore, Pisa, and Daniel Cox, Co-Director of ICAM, University of California at Davis. We thank the Institute of Complex and Adaptive Matter ICAM-I2CAM, USA for providing a video record of the conference on the web (found at http://sccs2008.df.unicam.it/). 
Finally, we thank the conference sponsors for their very generous support: the University of Camerino, the Institute of Complex and Adaptive Matter ICAM-I2CAM, USA, the International Centre for Theoretical Physics ICTP Trieste, and the CNR-INFM DEMOCRITOS Modeling Center for Research in Atomistic Simulation, Trieste. [Photograph caption: Participants at the International Conference on Strongly Coupled Coulomb Systems (SCCS), University of Camerino, Italy, 29 July-2 August 2008.]

  9. UNIX security in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1989-01-01

    The author critiques some security mechanisms in most versions of the Unix operating system and suggests more effective tools that either have working prototypes or have been implemented, for example in secure Unix systems. Although no computer (not even a secure one) is impenetrable, breaking into systems with these alternate mechanisms will cost more, require more skill, and be more easily detected than penetrations of systems without these mechanisms. The mechanisms described fall into four classes (with considerable overlap). User authentication at the local host affirms the identity of the person using the computer. The principle of least privilege dictates that properly authenticated users should have rights precisely sufficient to perform their tasks, and system administration functions should be compartmentalized; to this end, access control lists or capabilities should either replace or augment the default Unix protection system, and mandatory access controls implementing multilevel security models and integrity mechanisms should be available. Since most users access supercomputing environments using networks, the third class of mechanisms augments authentication (where feasible). As no security is perfect, the fourth class of mechanisms logs events that may indicate possible security violations; this will allow the reconstruction of a successful penetration (if discovered), or possibly the detection of an attempted penetration.

  10. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
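
    One piece of the experimental setup described, injecting an artificial AGN point source into a galaxy image, can be sketched as follows; the Gaussian PSF and the contrast scaling are stand-ins for the real survey PSF and flux calibration.

      # Add a scaled PSF at the centre of a galaxy image to mimic an AGN.
      import numpy as np

      def gaussian_psf(size=25, fwhm=3.0):
          sigma = fwhm / 2.355
          y, x = np.mgrid[:size, :size] - size // 2
          psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
          return psf / psf.sum()

      def add_point_source(galaxy, contrast=2.0):
          """Inject a PSF at the image centre with flux = contrast * galaxy flux."""
          psf = gaussian_psf()
          img = galaxy.copy()
          cy, cx = np.array(img.shape) // 2
          h = psf.shape[0] // 2
          img[cy - h:cy + h + 1, cx - h:cx + h + 1] += contrast * galaxy.sum() * psf
          return img

      galaxy = np.random.default_rng(3).random((128, 128)) * 1e-3  # stand-in image
      with_agn = add_point_source(galaxy, contrast=2.0)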

  11. The targeting of plant cellular systems by injected type III effector proteins.

    PubMed

    Lewis, Jennifer D; Guttman, David S; Desveaux, Darrell

    2009-12-01

    The battle between phytopathogenic bacteria and their plant hosts has revealed a diverse suite of strategies and mechanisms employed by the pathogen or the host to gain the higher ground. Pathogens continually evolve tactics to acquire host resources and dampen host defences. Hosts must evolve surveillance and defence systems that are sensitive enough to rapidly respond to a diverse range of pathogens, while reducing costly and damaging inappropriate misexpression. The primary virulence mechanism employed by many bacteria is the type III secretion system, which secretes and translocates effector proteins directly into the cells of their plant hosts. Effectors have diverse enzymatic functions and can target specific components of plant systems. While these effectors should favour bacterial fitness, the host may be able to thwart infection by recognizing the activity or presence of these foreign molecules and initiating retaliatory immune measures. We review the diverse host cellular systems exploited by bacterial effectors, with particular focus on plant proteins directly targeted by effectors. Effector-host interactions reveal different stages of the battle between pathogen and host, as well as the diverse molecular strategies employed by bacterial pathogens to hijack eukaryotic cellular systems.

  12. Enhancing data utilization through adoption of cloud-based data architectures (Invited Paper 211869)

    NASA Astrophysics Data System (ADS)

    Kearns, E. J.

    2017-12-01

    A traditional approach to data distribution and utilization of open government data involves continuously moving those data from a central government location to each potential user, who would then utilize them on their local computer systems. An alternate approach would be to bring those users to the open government data, where users would also have access to computing and analytics capabilities that would support data utilization. NOAA's Big Data Project is exploring such an alternate approach through an experimental collaboration with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium. As part of this ongoing experiment, NOAA is providing open data of interest which are freely hosted by the Big Data Project Collaborators, who provide a variety of cloud-based services and capabilities to enable utilization by data users. By the terms of the agreement, the Collaborators may charge for those value-added services and processing capacities to recover their costs to freely host the data and to generate profits if so desired. Initial results have shown sustained increases in data utilization from 2 to over 100 times previously-observed access patterns from traditional approaches. Significantly increased utilization speed as compared to the traditional approach has also been observed by NOAA data users who have volunteered their experiences on these cloud-based systems. The potential for implementing and sustaining the alternate cloud-based approach as part of a change in operational data utilization strategies will be discussed.

  13. The Chern-Simons Current in Systems of DNA-RNA Transcriptions

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Pincak, Richard; Kanjamapornkul, Kabin; Saridakis, Emmanuel N.

    2018-04-01

    A Chern-Simons current, coming from ghost and anti-ghost fields of supersymmetry theory, can be used to define a spectrum of gene expression in new time series data, where a spinor field, as an alternative representation of a gene, is adopted instead of the standard alphabet sequence of bases $A, T, C, G, U$. After a general discussion of the use of supersymmetry in biological systems, we give examples of its use for living organisms, discuss the codon and anti-codon ghost fields, and develop an algebraic construction for trash DNA, the DNA regions that do not appear active in biological systems. As a general result, all hidden states of a codon can be computed from Chern-Simons 3-forms. Finally, we plot a time series of genetic variations of a viral glycoprotein gene and a host T-cell receptor gene using a gene tensor correlation network related to the Chern-Simons current. An empirical analysis of genetic shift in host cell receptor genes, with separated gene clusters, and of genetic drift in the viral gene is obtained using a tensor correlation plot over time series data derived as the empirical mode decomposition of the Chern-Simons current.

  14. PAM: Particle automata model in simulation of Fusarium graminearum pathogen expansion.

    PubMed

    Wcisło, Rafał; Miller, S Shea; Dzwinel, Witold

    2016-01-21

    The multi-scale nature and inherent complexity of biological systems are a great challenge for computer modeling and classical modeling paradigms. We present a novel particle automata modeling metaphor in the context of developing a 3D model of Fusarium graminearum infection in wheat. The system consisting of the host plant and Fusarium pathogen cells can be represented by an ensemble of discrete particles defined by a set of attributes. The cells-particles can interact with each other mimicking mechanical resistance of the cell walls and cell coalescence. The particles can move, while some of their attributes can be changed according to prescribed rules. The rules can represent cellular scales of a complex system, while the integrated particle automata model (PAM) simulates its overall multi-scale behavior. We show that due to the ability of mimicking mechanical interactions of Fusarium tip cells with the host tissue, the model is able to simulate realistic penetration properties of the colonization process reproducing both vertical and lateral Fusarium invasion scenarios. The comparison of simulation results with micrographs from laboratory experiments shows encouraging qualitative agreement between the two.
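
    A toy version of one PAM update step is sketched below; the attributes, interaction radius, and contact-infection rule are invented for illustration and stand in for the paper's mechanical-interaction rules.

      # Toy particle-automata step: particles carry attributes, interact with
      # near neighbours, and change state by simple rules. All constants are
      # invented for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      pos = rng.uniform(0, 1, size=(200, 3))          # particle positions
      state = np.zeros(200, dtype=int)                # 0 = host cell, 1 = pathogen
      state[:5] = 1                                   # seed a few pathogen cells

      def step(pos, state, radius=0.05, p_infect=0.3):
          new_state = state.copy()
          for i in np.flatnonzero(state == 1):        # each pathogen particle
              d = np.linalg.norm(pos - pos[i], axis=1)
              neighbours = np.flatnonzero((d < radius) & (state == 0))
              infect = neighbours[rng.random(neighbours.size) < p_infect]
              new_state[infect] = 1                   # rule: contact infection
          pos += rng.normal(0, 0.005, pos.shape)      # rule: random motion
          return pos.clip(0, 1), new_state

      for _ in range(10):
          pos, state = step(pos, state)
      print("infected particles:", int(state.sum()))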

  15. Robust integer and fractional helical modes in the quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Ronen, Yuval; Cohen, Yonatan; Banitt, Daniel; Heiblum, Moty; Umansky, Vladimir

    2018-04-01

    Electronic systems harboring one-dimensional helical modes, where spin and momentum are locked, have lately become an important field of their own. When coupled to a conventional superconductor, such systems are expected to manifest topological superconductivity, a unique phase hosting exotic Majorana zero modes. Even more interesting are fractional helical modes, yet to be observed, which open the route to realizing generalized parafermions. Possessing non-Abelian exchange statistics, these quasiparticles may serve as building blocks in topological quantum computing. Here, we present a new approach to forming protected one-dimensional helical edge modes in the quantum Hall regime. The novel platform is based on a carefully designed double-quantum-well structure in a GaAs-based system hosting two electronic sub-bands, each tuned to the quantum Hall effect regime. By electrostatic gating of different areas of the structure, counter-propagating integer, as well as fractional, edge modes with opposite spins are formed. We demonstrate that, due to spin protection, these helical modes remain ballistic over large distances. In addition to the formation of helical modes, this platform can serve as a rich playground for artificial induction of compounded fractional edge modes, and for construction of edge-mode-based interferometers.

  16. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  17. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  18. Evolutionary implications of the adaptation to different immune systems in a parasite with a complex life cycle

    PubMed Central

    Hammerschmidt, Katrin; Kurtz, Joachim

    2005-01-01

    Many diseases are caused by parasites with complex life cycles that involve several hosts. If parasites cope better with only one of the different types of immune systems of their host species, we might expect a trade-off in parasite performance in the different hosts, that likely influences the evolution of virulence. We tested this hypothesis in a naturally co-evolving host–parasite system consisting of the tapeworm Schistocephalus solidus and its intermediate hosts, a copepod, Macrocyclops albidus, and the three-spined stickleback Gasterosteus aculeatus. We did not find a trade-off between infection success in the two hosts. Rather, tapeworms seem to trade-off adaptation towards different parts of their hosts' immune systems. Worm sibships that performed better in the invertebrate host also seem to be able to evade detection by the fish innate defence systems, i.e. induce lower levels of activation of innate immune components. These worm variants were less harmful for the fish host likely due to reduced costs of an activated innate immune system. These findings substantiate the impact of both hosts' immune systems on parasite performance and virulence. PMID:16271977

  20. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs), along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or workstation with an NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
    Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
    Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
    RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
    Classification: 4.13, 6.5.
    Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generation library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
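
    GASPRNG's C/CUDA interface is not reproduced here; the sketch below illustrates the core idea it shares with SPRNG, independent and reproducible parallel random streams, using NumPy's SeedSequence spawning as an analogy.

      # Independent, reproducible parallel random streams via SeedSequence.
      import numpy as np

      root = np.random.SeedSequence(20130401)
      children = root.spawn(4)                      # one stream per worker
      streams = [np.random.default_rng(s) for s in children]

      # Each worker draws from its own stream; results are independent and
      # reproducible regardless of scheduling order.
      for k, rng in enumerate(streams):
          print(f"stream {k}: {rng.random(3)}")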

  1. Morphology of methane hydrate host sediments

    USGS Publications Warehouse

    Jones, K.W.; Feng, H.; Tomov, S.; Winters, W.J.; Eaton, M.; Mahajan, D.

    2005-01-01

    The morphological features, including porosity and grains, of methane hydrate host sediments were investigated using the synchrotron computed microtomography (CMT) technique. The sediment sample was obtained during Ocean Drilling Program Leg 164 on the Blake Ridge at a water depth of 2278.5 m. The CMT experiment was performed at the Brookhaven National Synchrotron Light Source facility. The analysis yielded the sample porosity, specific surface area, mean particle size, and tortuosity. The method was found to be highly effective for the study of methane hydrate host sediments.

  2. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    NASA Astrophysics Data System (ADS)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  3. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    DOE PAGES

    Bartoldus, R.; Claus, R.; Garelli, N.; ...

    2016-01-25

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all of these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  4. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions for a wide range of reliable fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  5. Multiplicity of Mathematical Modeling Strategies to Search for Molecular and Cellular Insights into Bacteria Lung Infection

    PubMed Central

    Cantone, Martina; Santos, Guido; Wentker, Pia; Lai, Xin; Vera, Julio

    2017-01-01

    Even today, two bacterial lung infections, namely pneumonia and tuberculosis, are among the 10 most frequent causes of death worldwide. These infections still lack effective treatments in many developing countries and in immunocompromised populations such as infants, elderly people and transplanted patients. The interaction between bacteria and the host is a complex system of interlinked intercellular and intracellular processes, enriched in regulatory structures like positive and negative feedback loops. Severe pathological conditions can emerge when the immune system of the host fails to neutralize the infection. This failure can result in systemic spreading of pathogens or an overwhelming immune response followed by a systemic inflammatory response. Mathematical modeling is a promising tool to dissect the complexity underlying the pathogenesis of bacterial lung infection at the molecular, cellular and tissue levels, and also at the interfaces among levels. In this article, we introduce mathematical and computational modeling frameworks that can be used for investigating molecular and cellular mechanisms underlying bacterial lung infection. Then, we compile and discuss published results on the modeling of regulatory pathways and cell populations relevant for lung infection and inflammation. Finally, we discuss how to make use of this multiplicity of modeling approaches to open new avenues in the search for the molecular and cellular mechanisms underlying bacterial infection in the lung. PMID:28912729
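
    As a minimal example of the kind of model the review surveys, the sketch below integrates an invented two-variable system, bacterial load versus immune effectors, containing the sort of feedback loop described; all parameter values are illustrative.

      # Toy host-pathogen ODE: bacterial load B grows logistically and is
      # cleared by immune effectors E, which are recruited in response to
      # infection. Parameters are invented for illustration.
      import numpy as np
      from scipy.integrate import solve_ivp

      def infection(t, y, r=1.0, K=1e9, k=1e-6, a=0.1, d=0.05):
          B, E = y
          dB = r * B * (1 - B / K) - k * B * E     # growth minus immune killing
          dE = a * B / (B + 1e6) - d * E           # saturating recruitment, decay
          return [dB, dE]

      sol = solve_ivp(infection, (0, 100), [1e3, 10.0], dense_output=True)
      t = np.linspace(0, 100, 5)
      print(np.round(sol.sol(t)[0], 1))            # bacterial load over time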

  6. Multiplicity of Mathematical Modeling Strategies to Search for Molecular and Cellular Insights into Bacteria Lung Infection.

    PubMed

    Cantone, Martina; Santos, Guido; Wentker, Pia; Lai, Xin; Vera, Julio

    2017-01-01

    Even today, two bacterial lung infections, namely pneumonia and tuberculosis, are among the 10 most frequent causes of death worldwide. These infections still lack effective treatments in many developing countries and in immunocompromised populations such as infants, elderly people and transplanted patients. The interaction between bacteria and the host is a complex system of interlinked intercellular and intracellular processes, enriched in regulatory structures like positive and negative feedback loops. Severe pathological conditions can emerge when the immune system of the host fails to neutralize the infection. This failure can result in systemic spreading of pathogens or an overwhelming immune response followed by a systemic inflammatory response. Mathematical modeling is a promising tool to dissect the complexity underlying the pathogenesis of bacterial lung infection at the molecular, cellular and tissue levels, and also at the interfaces among levels. In this article, we introduce mathematical and computational modeling frameworks that can be used for investigating molecular and cellular mechanisms underlying bacterial lung infection. Then, we compile and discuss published results on the modeling of regulatory pathways and cell populations relevant for lung infection and inflammation. Finally, we discuss how to make use of this multiplicity of modeling approaches to open new avenues in the search for the molecular and cellular mechanisms underlying bacterial infection in the lung.

  7. A design and implementation methodology for diagnostic systems

    NASA Technical Reports Server (NTRS)

    Williams, Linda J. F.

    1988-01-01

    A methodology for design and implementation of diagnostic systems is presented. Also discussed are the advantages of embedding a diagnostic system in a host system environment. The methodology utilizes an architecture for diagnostic system development that is hierarchical and makes use of object-oriented representation techniques. Additionally, qualitative models are used to describe the host system components and their behavior. The methodology architecture includes a diagnostic engine that utilizes a combination of heuristic knowledge to control the sequence of diagnostic reasoning. The methodology provides an integrated approach to development of diagnostic system requirements that is more rigorous than standard systems engineering techniques. The advantages of using this methodology during various life cycle phases of the host systems (e.g., National Aerospace Plane (NASP)) include: the capability to analyze diagnostic instrumentation requirements during the host system design phase, a ready software architecture for implementation of diagnostics in the host system, and the opportunity to analyze instrumentation for failure coverage in safety critical host system operations.

  8. A systematic review of technology-based interventions for unintentional injury prevention education and behaviour change.

    PubMed

    Omaki, Elise; Rizzutti, Nicholas; Shields, Wendy; Zhu, Jeffrey; McDonald, Eileen; Stevens, Martha W; Gielen, Andrea

    2017-04-01

    The aims of this literature review are to (1) summarise how computer and mobile technology-based health behaviour change applications have been evaluated in unintentional injury prevention, (2) describe how these successes can be applied to injury-prevention programmes in the future and (3) identify research gaps. Studies included in this systematic review were education and behaviour change intervention trials and programme evaluations in which the intervention was delivered by either a computer or mobile technology and addressed an unintentional injury prevention topic. Articles were limited to those published in English and after 1990. Among the 44 technology-based injury-prevention studies included in this review, 16 studies evaluated locally hosted software programmes, 4 studies offered kiosk-based programmes, 11 evaluated remotely hosted internet programmes, 2 studies used mobile technology or portable devices and 11 studies evaluated virtual-reality interventions. Locally hosted software programmes and remotely hosted internet programmes consistently increased knowledge and behaviours. Kiosk programmes showed evidence of modest knowledge and behaviour gains. Both programmes using mobile technology improved behaviours. Virtual-reality programmes consistently improved behaviours, but produced little gain in knowledge. No studies evaluated text-messaging programmes dedicated to injury prevention. There is much potential for computer-based programmes to be used for injury-prevention behaviour change. The reviewed studies provide evidence that computer-based communication is effective in conveying information and influencing how participants think about an injury topic and adopt safety behaviours.

  9. The Development of New Atmospheric Models for K and M Dwarf Stars with Exoplanets

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey L.

    2018-01-01

    The ultraviolet and X-ray emissions of host stars play critical roles in the survival and chemical composition of the atmospheres of their exoplanets. The need to measure and understand this radiative output, in particular for K and M dwarfs, is the main rationale for computing a new generation of stellar models that includes magnetically heated chromospheres and coronae in addition to their photospheres. We describe our method for computing semi-empirical models that includes solutions of the statistical equilibrium equations for 52 atoms and ions and of the non-LTE radiative transfer equations for all important spectral lines. The code is an offspring of the Solar Radiation Physical Modelling system (SRPM) developed by Fontenla et al. (2007-2015) to compute one-dimensional models in hydrostatic equilibrium to fit high-resolution stellar X-ray to IR spectra. Also included are 20 diatomic molecules and their more than 2 million spectral lines. Our proof-of-concept model is for the M1.5 V star GJ 832 (Fontenla et al. ApJ 830, 154 (2016)). We will fit the line fluxes and profiles of X-ray lines and continua observed by Chandra and XMM-Newton, UV lines observed by the COS and STIS instruments on HST (N V, C IV, Si IV, Si III, Mg II, C II, and O I), optical lines (including H$\alpha$, Ca II, Na I), and continua. These models will allow us to compute extreme-UV spectra, which are unobservable but required to predict the hydrodynamic mass-loss rate from exoplanet atmospheres, and to predict panchromatic spectra of new exoplanet host stars discovered after the end of the HST mission. This work is supported by grant HST-GO-15038 from the Space Telescope Science Institute to the Univ. of Colorado.

  10. From Archi Torture to Architecture: Undergraduate Students Design and Implement Computers Using the Multimedia Logic Emulator

    ERIC Educational Resources Information Center

    Stanley, Timothy D.; Wong, Lap Kei; Prigmore, Daniel; Benson, Justin; Fishler, Nathan; Fife, Leslie; Colton, Don

    2007-01-01

    Students learn better when they both hear and do. In computer architecture courses "doing" can be difficult in small schools without hardware laboratories hosted by computer engineering, electrical engineering, or similar departments. Software solutions exist. Our success with George Mills' Multimedia Logic (MML) is the focus of this paper. MML…

  11. Flexible Animation Computer Program

    NASA Technical Reports Server (NTRS)

    Stallcup, Scott S.

    1990-01-01

    FLEXAN (Flexible Animation), computer program animating structural dynamics on Evans and Sutherland PS300-series graphics workstation with VAX/VMS host computer. Typical application is animation of spacecraft undergoing structural stresses caused by thermal and vibrational effects. Displays distortions in shape of spacecraft. Program displays single natural mode of vibration, mode history, or any general deformation of flexible structure. Written in FORTRAN 77.

  12. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing a velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in the processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
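
    The regularize-and-solve step of this scheme can be sketched as follows, with a random stand-in for the tomographic matrix; lsqr's damp parameter supplies standard Tikhonov regularization.

      # Solve the damped least-squares problem min ||G m - d||^2 + damp^2 ||m||^2
      # for slowness adjustments m, given a tomographic matrix G (ray path
      # lengths per cell) and travel-time residuals d. G and d are tiny
      # random stand-ins for real kernel/data.
      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(0)
      G = sparse_random(500, 200, density=0.05, random_state=0)   # rays x cells
      d = rng.normal(size=500)                                    # residuals (s)

      m = lsqr(G, d, damp=0.5)[0]
      print("model update norm:", np.linalg.norm(m))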

  13. Electronic collection system for spacelab mission timeline requirements

    NASA Technical Reports Server (NTRS)

    Lindberg, James P.; Piner, John R.; Huang, Allen K. H.

    1995-01-01

    This paper describes the Functional Objective Requirements Collection System (FORCS) software tool that has been developed for use by Principal Investigators (PI's) and Payload Element Developers (PED's) on their own personal computers to develop on-orbit timelining requirements for their payloads. The FORCS tool can be used either in a totally stand-alone mode, storing the information in a local file on the user's personal computer hard disk or in a remote mode where the user's computer is linked to a host computer containing the integrated database of the timeline requirements for all of the payloads on a mission. There are a number of features incorporated in the FORCS software to assist the user. The user may move freely back and forth between the various forms for inputting the data. Several methods are used to input the information, depending on the type of the information. These methods range from filling in text boxes, using check boxes and radio buttons, to inputting information into a spreadsheet format. There are automated features provided to assist in developing the proper format for the data, ranging from limit checking on some of the parameters to automatic conversion of different formats of time data inputs to the one standard format used for the timeline scheduling software.
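
    The kind of time-format normalization described can be sketched as follows; the accepted input formats and the standard output format are assumptions made for illustration.

      # Normalize several assumed time-input formats to one standard form.
      from datetime import datetime

      FORMATS = ["%j/%H:%M:%S", "%Y-%m-%dT%H:%M:%S", "%H:%M:%S"]

      def to_standard(text, standard="%j/%H:%M:%S"):
          for fmt in FORMATS:
              try:
                  return datetime.strptime(text, fmt).strftime(standard)
              except ValueError:
                  continue
          raise ValueError(f"unrecognized time format: {text!r}")

      print(to_standard("2024-03-05T12:30:00"))   # -> day-of-year/HH:MM:SS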

  14. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
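
    The core computation named here, the 2-dimensional spatial autocorrelation of a film subregion, can be sketched via the FFT (Wiener-Khinchin) route; the random tile below stands in for a digitized film subregion.

      # 2-D autocorrelation of an image subregion via the power spectrum.
      import numpy as np

      def autocorrelate_2d(window):
          """Zero-mean, FFT-based circular autocorrelation of a subregion."""
          w = window - window.mean()
          spectrum = np.fft.rfft2(w)
          acf = np.fft.irfft2(spectrum * np.conj(spectrum), s=w.shape)
          return np.fft.fftshift(acf)              # put zero lag at the centre

      subregion = np.random.default_rng(2).random((64, 64))  # stand-in tile
      acf = autocorrelate_2d(subregion)
      peak = np.unravel_index(acf.argmax(), acf.shape)
      print("zero-lag peak at:", peak)             # centre for pure autocorrelation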

  15. NREL Analysis: Reimagining What's Possible for Clean Energy, Continuum Magazine, Summer 2015 / Issue 8; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This issue of Continuum Magazine covers the depth and breadth of NREL's ever-expanding analytical capabilities. For example, in one project we are leading national efforts to create a computer model of one of the most complex systems ever built. This system, the eastern part of the North American power grid, will likely host an increasing percentage of renewable energy in years to come. Understanding how this system will work is important to its success - and NREL analysis is playing a major role. We are also identifying the connections among energy, the environment and the economy through analysis that will point us toward a 'water smart' future.

  16. Personalized Learning Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Analysis and Simulation Inc. products, IEPLANNER and TPLAN, make use of C Language Integrated Production System (CLIPS), a NASA-developed expert system shell which originated at Johnson Space Center. Both products are interactive computer-based systems. They can be run independently or together as one complete system. Utilized as an Individual Education Plan tool, a user of IEPLANNER and TPLAN can define a goals list, while identifying a host of student demands in motor skills, social skills, life skills, even legal and leisure needs in the user's area. This computerized, expert tutor and advisor allows assessment of the status of the student and the degree to which his/her needs are being met. NASA Small Business Innovation Research contracts have also supported the company's Human Memory Extension technology and the creation of a World Wide Web 3D browser.

  17. The Interplanetary Meteoroid Environment for eXploration

    NASA Astrophysics Data System (ADS)

    Soja, R.; Sommer, M.; Srama, R.; Strub, P.; Grün, E.; Rodmann, J.; Vaubaillon, J.; Hornig, A.; Bausch, L.

    2014-07-01

    The Interplanetary Meteoroid Environment for eXploration (IMEX) project, funded by the European Space Agency (ESA), aims to characterize dust trails and streams produced by comets in the inner solar system. The goal is to predict meteor showers at any position or time in the solar system, such as at specific spacecraft or planets. This model will allow for the assessment of the dust impact hazard to spacecraft, which is important because hypervelocity impacts of micrometeoroids can damage or destroy spacecraft or their subsystems through physical damage or electromagnetic effects. Such considerations are particularly important in the context of human exploration of the solar system. Additionally, such a model will allow for scientific study of specific trails and their connections to observed dust phenomena, such as cometary trails and new meteor showers at Earth. We have recently expanded the model to include explicit integrations of large numbers of particles from each comet, utilizing the Constellation platform to perform the calculations. This is a distributed computing system, where currently 10,000 users are donating their idle computing time at home and thus generating a virtual supercomputer of 40,000 host PCs connected via the Internet (aerospaceresearch.net). This form of citizen science provides the computing performance required for simulating the millions of particles ejected by each of the ~400 comets, while developing the relationship between scientists and the general public. The result will be a unique set of saved orbital information for a large number of cometary streams, allowing efficient computation of their locations at any point in space and time. Here we will present the results from several test streams and discuss the progress towards obtaining the full set of integrated particles for each of the selected ~400 short-period comets. We thank the individual Constellation users for their computing time.

  18. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes the Parallel Virtual Machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.

  19. Effects of damping-off caused by Rhizoctonia solani anastomosis group 2-1 on roots of wheat and oil seed rape quantified using X-ray Computed Tomography and real-time PCR.

    PubMed

    Sturrock, Craig J; Woodhall, James; Brown, Matthew; Walker, Catherine; Mooney, Sacha J; Ray, Rumiana V

    2015-01-01

    Rhizoctonia solani is a plant pathogenic fungus that causes significant establishment and yield losses to several important food crops globally. This is the first application of high resolution X-ray micro Computed Tomography (X-ray μCT) and real-time PCR to study host-pathogen interactions in situ and elucidate, over a 6-day period, the mechanism of damping-off disease caused by R. solani anastomosis group (AG) 2-1 in wheat (Triticum aestivum cv. Gallant) and oil seed rape (OSR, Brassica napus cv. Marinka). Temporal, non-destructive analysis of root system architectures was performed using RooTrak and validated by the destructive method of root washing. Disease was assessed visually and related to pathogen DNA quantification in soil using real-time PCR. R. solani AG2-1 at similar initial DNA concentrations in soil was capable of causing significant damage to the developing root systems of both wheat and OSR. Disease reduced primary root number, root volume, root surface area, and convex hull, with these traits affected less in the monocotyledonous host. Wheat was more tolerant to the pathogen, exhibiting fewer symptoms and developing more complex root systems. In contrast, R. solani caused earlier damage and maceration of the taproot of the dicot, OSR. Disease severity was related to pathogen DNA accumulation in soil only for OSR; however, reductions in root traits were significantly associated with both disease severity and pathogen DNA. The method offers a first step toward advancing current understanding of soil-borne pathogen behavior in situ at the pore scale, which may lead to the development of mitigation measures to combat disease in the field.
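
    For readers unfamiliar with the quantification step: real-time PCR estimates DNA concentration by fitting a standard curve, Ct = slope × log10(concentration) + intercept, to serial dilutions of known template, then inverting it for unknown samples. The sketch below shows that arithmetic with invented calibration values, not the curve from this study.

        import numpy as np

        # Standard curve from serial dilutions of known pathogen DNA
        # (concentrations and Ct values here are illustrative only).
        known_conc = np.array([1e4, 1e3, 1e2, 1e1, 1e0])   # pg DNA per g soil
        known_ct = np.array([18.1, 21.5, 24.9, 28.2, 31.6])

        slope, intercept = np.polyfit(np.log10(known_conc), known_ct, 1)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0  # ideal slope of -3.32 gives ~100%

        def quantify(ct):
            # Invert the standard curve to estimate concentration from a Ct value.
            return 10.0 ** ((ct - intercept) / slope)

        print("amplification efficiency: %.0f%%" % (100 * efficiency))
        print("sample at Ct 26.0 -> %.1f pg DNA per g soil" % quantify(26.0))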

  20. NGAP: Compliance as a Service

    NASA Astrophysics Data System (ADS)

    McLaughlin, B. D.; Pawloski, A. W.

    2015-12-01

    Modern development practices require the ability to quickly and easily host an application. Small projects cannot afford to maintain a large staff for infrastructure maintenance, and rapid prototyping fosters innovation. However, maintaining the integrity of data and systems demands care, particularly in a government context. The extensive data holdings that make up much of the value of NASA's EOSDIS (Earth Observing System Data and Information System) are stored in a number of locations, across a wide variety of applications, ranging from small prototypes to large computationally intensive operational processes. At the same time, it is increasingly difficult for an application team to implement the required security controls, perform the required registrations and inventory entries, ensure logging, monitoring, and patching, and then ensure that all these activities continue for the life of that application, let alone for five, ten, or fifty applications. This process often takes weeks or months to complete and requires expertise in a variety of domains, such as security, systems administration, and development. NGAP, the Next Generation Application Platform, tackles this problem by investigating, automating, and resolving many of the repeatable policy hurdles that a typical application must overcome. The platform provides a relatively simple and straightforward process by which applications can commit source code to a repository and then deploy that source code to a cloud-based infrastructure, all while meeting NASA's policies for security, governance, inventory, reliability, and availability. While some work still falls to the application owner for any hosted application, NGAP handles a significant portion of it. This talk will discuss areas where we have made significant progress, areas that are complex or must remain human-intensive, and areas where we are still striving to improve this application deployment and hosting pipeline.
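
    As a rough sketch of "compliance as a service," consider a deploy-time gate that refuses to ship an application until automated policy checks pass. The manifest fields, check names, and inventory ID below are hypothetical illustrations of the kinds of controls described above, not NGAP's actual interface.

        from dataclasses import dataclass

        @dataclass
        class AppManifest:
            """Hypothetical per-application record consulted at deploy time."""
            name: str
            inventory_id: str = ""            # registered in the system inventory?
            logging_enabled: bool = False     # centralized logging configured?
            monitoring_enabled: bool = False  # health and metrics monitoring on?
            days_since_patch: int = 999       # age of the base image's last patch

        # Each repeatable policy hurdle becomes an automated check.
        CHECKS = [
            ("inventory registration", lambda m: bool(m.inventory_id)),
            ("logging enabled",        lambda m: m.logging_enabled),
            ("monitoring enabled",     lambda m: m.monitoring_enabled),
            ("patch currency (<30 d)", lambda m: m.days_since_patch < 30),
        ]

        def compliance_gate(manifest):
            # Run every policy check; deployment proceeds only if all pass.
            failures = [name for name, check in CHECKS if not check(manifest)]
            return not failures, failures

        app = AppManifest(name="prototype-app", inventory_id="EOSDIS-1234",
                          logging_enabled=True, monitoring_enabled=True,
                          days_since_patch=12)
        ok, failures = compliance_gate(app)
        print("deploy" if ok else "blocked: " + ", ".join(failures))

    The point of such a platform is that these checks run continuously for the life of the application, so fifty applications cost little more to keep compliant than one.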
