Sample records for implementing standard interfaces

  1. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1 (Version 8.1) ...

  2. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1 (Version 8.1) ...

  3. 75 FR 38026 - Medicare Program; Identification of Backward Compatible Version of Adopted Standard for E...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... Programs (NCPDP) Prescriber/Pharmacist Interface SCRIPT standard, Implementation Guide, Version 10 ... Prescriber/Pharmacist Interface SCRIPT standard, Version 8, Release 1 and its equivalent NCPDP Prescriber/Pharmacist Interface SCRIPT Implementation Guide, Version 8, Release 1 (hereinafter referred to as the ...

  4. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1 (Version 8.1) ...

  5. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1 (Version 8.1) ...

  6. IEEE 1451.2 based Smart sensor system using ADuc847

    NASA Astrophysics Data System (ADS)

    Sreejithlal, A.; Ajith, Jose

    The IEEE 1451 standard defines a standard interface for connecting transducers to microprocessor-based data acquisition systems, instrumentation systems, control systems and field networks. A smart transducer interface module (STIM) provides signal conditioning, digitization and data packet generation functions for the transducers connected to it. This paper describes the implementation of a microcontroller-based smart transducer interface module conforming to the IEEE 1451.2 standard. The module, implemented using the ADuC847 microcontroller, has two transducer channels and is programmed in Embedded C. The sensor system consists of a Network Capable Application Processor (NCAP) module which controls the STIM over an IEEE 1451.2 RS-232 bus. The NCAP module is implemented as a software module in C#. The hardware details, control principles involved and the software implementation of the STIM are described in detail.
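
    The STIM/NCAP split described above can be sketched as a toy model: the STIM buffers channel samples and frames them into packets, and the NCAP parses them back. The frame layout (channel byte, sample count, big-endian samples) and all class names below are illustrative assumptions, not the standard's actual wire format.

```python
import struct

class Stim:
    """Two-channel smart transducer interface module (toy model)."""
    def __init__(self):
        self.channels = {1: [], 2: []}   # buffered samples per channel

    def acquire(self, channel, value):
        self.channels[channel].append(value)

    def build_packet(self, channel):
        """Frame buffered samples: channel byte, count byte, int16 samples."""
        samples = self.channels[channel]
        header = struct.pack(">BB", channel, len(samples))
        body = b"".join(struct.pack(">h", s) for s in samples)
        return header + body

class Ncap:
    """Network Capable Application Processor side: parses STIM packets."""
    @staticmethod
    def parse_packet(packet):
        channel, count = struct.unpack_from(">BB", packet, 0)
        samples = [struct.unpack_from(">h", packet, 2 + 2 * i)[0]
                   for i in range(count)]
        return channel, samples

stim = Stim()
stim.acquire(1, 512)
stim.acquire(1, -37)
channel, samples = Ncap.parse_packet(stim.build_packet(1))
print(channel, samples)  # 1 [512, -37]
```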

  7. Is There a Chance for a Standardised User Interface?

    ERIC Educational Resources Information Center

    Fletcher, Liz

    1993-01-01

    Issues concerning the implementation of standard user interfaces for CD-ROMs are discussed, including differing perceptions of the ideal interface, graphical user interfaces, user needs, and the standard protocols. It is suggested users should be able to select from a variety of user interfaces on each CD-ROM. (EA)

  8. Survey on the implementation and reliability of CubeSat electrical bus interfaces

    NASA Astrophysics Data System (ADS)

    Bouwmeester, Jasper; Langer, Martin; Gill, Eberhard

    2017-06-01

    This paper provides results and conclusions from a survey on the implementation and reliability aspects of CubeSat bus interfaces, with an emphasis on the data bus and power distribution, and offers recommendations for a future CubeSat bus standard. The survey is based on a literature study and a questionnaire covering 60 launched and 44 yet-to-be-launched CubeSats. It is found that the bus interfaces are not the main driver of mission failures. However, it is concluded that the Inter-Integrated Circuit (I2C) data bus, as implemented in the great majority of CubeSats, has caused some catastrophic satellite failures and a large number of bus lockups. The power distribution may lead to catastrophic failures if the power lines are not protected against overcurrent. A connector and wiring standard widely implemented in CubeSats is based on the PC/104 standard; most participants find this standard's 104-pin connector too large. For a future CubeSat bus interface standard, it is recommended to implement a reliable data bus, power distribution with overcurrent protection, and a wiring harness with smaller connectors than PC/104.
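
    The two mitigations the survey recommends can be illustrated with a small sketch: a latching overcurrent switch on each power line, and a watchdog that resets the I2C controller when no transaction completes within a deadline. Thresholds, timings, and names here are assumptions for illustration only.

```python
class PowerLine:
    """Power distribution line with a latching overcurrent switch."""
    def __init__(self, limit_ma):
        self.limit_ma = limit_ma
        self.enabled = True

    def draw(self, current_ma):
        """Trip the line open if the load exceeds the limit."""
        if current_ma > self.limit_ma:
            self.enabled = False     # latch off instead of propagating the fault
        return self.enabled

class BusWatchdog:
    """Reset the I2C controller if no transaction completes within a deadline."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_ok = 0.0
        self.resets = 0

    def transaction_ok(self, now):
        self.last_ok = now

    def poll(self, now):
        if now - self.last_ok > self.timeout_s:
            self.resets += 1         # stand-in for re-initializing the bus
            self.last_ok = now

line = PowerLine(limit_ma=500)
print(line.draw(350))   # True: within limit
print(line.draw(900))   # False: latched off

wd = BusWatchdog(timeout_s=2.0)
wd.transaction_ok(now=0.0)
wd.poll(now=1.0)        # within deadline, no reset
wd.poll(now=3.5)        # deadline missed -> one reset
print(wd.resets)        # 1
```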

  9. 45 CFR 170.299 - Incorporation by reference.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release ... (1) National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard ...

  10. The development of a highly constrained health level 7 implementation guide to facilitate electronic laboratory reporting to ambulatory electronic health record systems.

    PubMed

    Sujansky, Walter V; Overhage, J Marc; Chang, Sophia; Frohlich, Jonah; Faus, Samuel A

    2009-01-01

    Electronic laboratory interfaces can significantly increase the value of ambulatory electronic health record (EHR) systems by providing laboratory result data automatically and in a computable form. However, many ambulatory EHRs cannot implement electronic laboratory interfaces despite the existence of messaging standards, such as Health Level 7, version 2 (HL7). Among several barriers to implementing laboratory interfaces is the extensive optionality within the HL7 message standard. This paper describes the rationale for and development of an HL7 implementation guide that seeks to eliminate most of the optionality inherent in HL7, but retain the information content required for reporting outpatient laboratory results. A work group of heterogeneous stakeholders developed the implementation guide based on a set of design principles that emphasized parsimony, practical requirements, and near-term adoption. The resulting implementation guide contains 93% fewer optional data elements than HL7. This guide was successfully implemented by 15 organizations during an initial testing phase and has been approved by the HL7 standards body as an implementation guide for outpatient laboratory reporting. Further testing is required to determine whether widespread adoption of the implementation guide by laboratories and EHR systems can facilitate the implementation of electronic laboratory interfaces.
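
    As a rough illustration of the kind of constrained lab-result message the guide targets, the sketch below builds and re-parses a minimal HL7 v2-style ORU^R01 with pipe-delimited segments. The segment and field choices are a simplification for illustration, not the actual implementation guide.

```python
def build_oru(patient_id, test_code, test_name, value, units):
    """Assemble a minimal ORU^R01-style result message (illustrative only)."""
    segments = [
        "MSH|^~\\&|LAB|ACME|EHR|CLINIC|20240101120000||ORU^R01|MSG0001|P|2.5.1",
        f"PID|1||{patient_id}",
        f"OBR|1|||{test_code}^{test_name}",
        f"OBX|1|NM|{test_code}^{test_name}||{value}|{units}|||||F",
    ]
    return "\r".join(segments)   # HL7 v2 uses CR as the segment separator

def parse_obx(message):
    """Pull the observation value and units back out of the OBX segment."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX":
            return fields[5], fields[6]
    return None

msg = build_oru("12345", "2345-7", "Glucose", "95", "mg/dL")
print(parse_obx(msg))   # ('95', 'mg/dL')
```

    Eliminating optionality, as the guide does, amounts to fixing which of these fields are required and in what form.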

  11. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  12. An implementation and evaluation of the MPI 3.0 one-sided communication interface

    DOE PAGES

    Dinan, James S.; Balaji, Pavan; Buntinas, Darius T.; ...

    2016-01-09

    The Message Passing Interface (MPI) 3.0 standard includes a significant revision to MPI's remote memory access (RMA) interface, which provides support for one-sided communication. MPI-3 RMA is expected to greatly enhance the usability and performance of MPI RMA. We present the first complete implementation of MPI-3 RMA and document implementation techniques and performance optimization opportunities enabled by the new interface. Our implementation targets messaging-based networks and is publicly available in the latest release of the MPICH MPI implementation. Using this implementation, we explore the performance impact of new MPI-3 functionality and semantics. Results indicate that the MPI-3 RMA interface provides significant advantages over the MPI-2 interface by enabling increased communication concurrency through relaxed semantics in the interface and additional routines that provide new window types, synchronization modes, and atomic operations.
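
    The one-sided semantics described above can be mimicked in a few lines of plain Python: a window object accepts puts from any "rank" without the target's participation, and a fence closes the access epoch, making the puts visible. This models the interface shape only; a real program would call MPI_Win_allocate, MPI_Put, and MPI_Win_fence (or their mpi4py equivalents).

```python
class Window:
    """Toy model of an MPI RMA window with fence synchronization."""
    def __init__(self, size):
        self.buf = [0] * size
        self._pending = []           # operations queued during the epoch

    def put(self, origin_data, target_offset):
        """One-sided write: queue data for the target's window."""
        self._pending.append((target_offset, list(origin_data)))

    def get(self, target_offset, count):
        """One-sided read of the target's window."""
        return self.buf[target_offset:target_offset + count]

    def fence(self):
        """Close the access epoch: all queued puts become visible."""
        for offset, data in self._pending:
            self.buf[offset:offset + len(data)] = data
        self._pending.clear()

win = Window(size=8)
win.put([7, 8, 9], target_offset=2)
print(win.get(2, 3))   # [0, 0, 0]: epoch not yet closed
win.fence()
print(win.get(2, 3))   # [7, 8, 9]: visible after the fence
```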

  13. An implementation and evaluation of the MPI 3.0 one-sided communication interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinan, James S.; Balaji, Pavan; Buntinas, Darius T.

    The Message Passing Interface (MPI) 3.0 standard includes a significant revision to MPI's remote memory access (RMA) interface, which provides support for one-sided communication. MPI-3 RMA is expected to greatly enhance the usability and performance of MPI RMA. We present the first complete implementation of MPI-3 RMA and document implementation techniques and performance optimization opportunities enabled by the new interface. Our implementation targets messaging-based networks and is publicly available in the latest release of the MPICH MPI implementation. Using this implementation, we explore the performance impact of new MPI-3 functionality and semantics. Results indicate that the MPI-3 RMA interface provides significant advantages over the MPI-2 interface by enabling increased communication concurrency through relaxed semantics in the interface and additional routines that provide new window types, synchronization modes, and atomic operations.

  14. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized.

  15. An XML-based system for the flexible classification and retrieval of clinical practice guidelines.

    PubMed Central

    Ganslandt, T.; Mueller, M. L.; Krieglstein, C. F.; Senninger, N.; Prokosch, H. U.

    2002-01-01

    Beneficial effects of clinical practice guidelines (CPGs) have not yet reached expectations due to limited routine adoption. Electronic distribution and reminder systems have the potential to overcome implementation barriers. Existing electronic CPG repositories such as the National Guideline Clearinghouse (NGC) provide individual access but lack the standardized computer-readable interfaces necessary for automated guideline retrieval. The aim of this paper was to facilitate automated context-based selection and presentation of CPGs. Using attributes from the NGC classification scheme, an XML-based metadata repository was successfully implemented, providing document storage, classification and retrieval functionality. Semi-automated extraction of attributes was implemented for the import of XML guideline documents using XPath. As an example, a hospital information system interface was implemented for diagnosis-based guideline invocation. Limitations of the implemented system are discussed and possible future work is outlined. Integration of standardized computer-readable search interfaces into existing CPG repositories is proposed. PMID:12463831
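
    The semi-automated attribute extraction step can be sketched with the standard library's limited XPath support. The element names below are invented for illustration; the paper's NGC-derived schema will differ.

```python
import xml.etree.ElementTree as ET

# A hypothetical guideline document; the real schema is NGC-derived.
doc = """
<guideline>
  <title>Example CPG</title>
  <classification>
    <disease code="ICD-10:E11">Type 2 diabetes</disease>
    <category>treatment</category>
  </classification>
</guideline>
"""

root = ET.fromstring(doc)
# Extract classification attributes with ElementTree's XPath subset.
record = {
    "title": root.findtext("title"),
    "disease_code": root.find("./classification/disease").get("code"),
    "category": root.findtext("./classification/category"),
}
print(record)
```

    Records like this one are what the repository would index for diagnosis-based retrieval.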

  16. Design and implementation of a prototype with a standardized interface for transducers in ambient assisted living.

    PubMed

    Dorronzoro, Enrique; Gómez, Isabel; Medina, Ana Verónica; Gómez, José Antonio

    2015-01-29

    Solutions in the field of Ambient Assisted Living (AAL) generally do not use standards to implement the communication interface between sensors and actuators. This leaves them as isolated solutions that are difficult to integrate into new or existing systems. The objective of this research was to design and implement a prototype with a standardized interface for sensors and actuators, to facilitate the integration of different solutions in the field of AAL. Our work is based on the roadmap defined by AALIANCE, using TelosB motes running TinyOS, 6LoWPAN, sensors, and the IEEE 21451 standard protocol. This prototype allows sensors to be upgraded to smart status for easy integration with both new and existing applications. The prototype has been evaluated for autonomy and performance. As a use case, the prototype has been tested in a serious game previously designed for people with mobility problems, and its advantages and disadvantages have been analysed.

  17. Design and Implementation of a Prototype with a Standardized Interface for Transducers in Ambient Assisted Living

    PubMed Central

    Dorronzoro, Enrique; Gómez, Isabel; Medina, Ana Verónica; Gómez, José Antonio

    2015-01-01

    Solutions in the field of Ambient Assisted Living (AAL) generally do not use standards to implement the communication interface between sensors and actuators. This leaves them as isolated solutions that are difficult to integrate into new or existing systems. The objective of this research was to design and implement a prototype with a standardized interface for sensors and actuators, to facilitate the integration of different solutions in the field of AAL. Our work is based on the roadmap defined by AALIANCE, using TelosB motes running TinyOS, 6LoWPAN, sensors, and the IEEE 21451 standard protocol. This prototype allows sensors to be upgraded to smart status for easy integration with both new and existing applications. The prototype has been evaluated for autonomy and performance. As a use case, the prototype has been tested in a serious game previously designed for people with mobility problems, and its advantages and disadvantages have been analysed. PMID:25643057

  18. Implementing a DICOM-HL7 interface application

    NASA Astrophysics Data System (ADS)

    Fritz, Steven L.; Munjal, Sunita; Connors, James; Csipo, Deszu

    1995-05-01

    The DICOM standard, in addition to resolving certain problems with the ACR-NEMA 2.0 standard regarding network support and the clinical data dictionary, added new capabilities in the form of study content notification and patient, study and results management services, intended to assist in interfacing between PACS and HIS or RIS systems. We have defined and implemented a mechanism that allows a DICOM application entity (AE) to interrogate an HL7-based RIS using DICOM services. The implementation involved development of a DICOM-HL7 gateway which converts between DICOM and HL7 messages to achieve the desired retrieval capability. This mechanism, based on the DICOM query/retrieve service, was used to interface a DeJarnette Research film digitizer to an IDXrad RIS at the University of Maryland Medical Systems hospital in Baltimore, Maryland. A C++ class library was developed for both DICOM and HL7 messaging, with several constructors used to convert between the two standards.
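
    A gateway of this kind ultimately rests on a field mapping between the two standards. The sketch below translates a DICOM patient-level query into sparse HL7-style segment/field assignments; the tag names and index choices are simplified assumptions, not the authors' actual mapping.

```python
# Hypothetical mapping: DICOM attribute -> (HL7 segment, field index).
DICOM_TO_HL7 = {
    "PatientName": ("PID", 5),
    "PatientID": ("PID", 3),
    "AccessionNumber": ("OBR", 18),
}

def dicom_query_to_hl7(query):
    """Build a sparse {segment: {field_index: value}} HL7-style request."""
    out = {}
    for key, value in query.items():
        segment, index = DICOM_TO_HL7[key]
        out.setdefault(segment, {})[index] = value
    return out

req = dicom_query_to_hl7({"PatientID": "000123", "AccessionNumber": "A-77"})
print(req)   # {'PID': {3: '000123'}, 'OBR': {18: 'A-77'}}
```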

  19. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  20. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  1. JPL Space Telecommunications Radio System Operating Environment

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Lang, Minh; Peters, Kenneth J.; Taylor, Gregory H.; Duncan, Courtney B.; Orozco, David S.; Stern, Ryan A.; Ahten, Earl R.; Girard, Mike

    2013-01-01

    A flight-qualified implementation of a Software Defined Radio (SDR) Operating Environment for the JPL-SDR built for the CoNNeCT Project has been developed. It is compliant with the NASA Space Telecommunications Radio System (STRS) Architecture Standard, and provides the software infrastructure for STRS-compliant waveform applications. This software provides a standards-compliant abstracted view of the JPL-SDR hardware platform. It uses industry-standard POSIX interfaces for most functions, as well as exposing the STRS API (Application Programming Interface) required by the standard. The software includes a standardized interface for IP components instantiated within a Xilinx FPGA (Field Programmable Gate Array), and provides a standardized abstracted interface to platform resources such as data converters and the file system, which can be used by STRS-conformant waveform applications. It provides a generic SDR operating environment with a much smaller resource footprint than similar products such as SCA (Software Communications Architecture) compliant implementations or the DoD Joint Tactical Radio System (JTRS).

  2. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1994-01-01

    This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, a processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. Its purpose is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces, and establishes the requirement for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  3. ooi: OpenStack OCCI interface

    NASA Astrophysics Data System (ADS)

    López García, Álvaro; Fernández del Castillo, Enol; Orviz Fernández, Pablo

    In this document we present an implementation of the Open Grid Forum's Open Cloud Computing Interface (OCCI) for OpenStack, namely ooi (Openstack occi interface, 2015) [1]. OCCI is an open standard for management tasks over cloud resources, focused on interoperability, portability and integration. ooi aims to implement this open interface for the OpenStack cloud middleware, promoting interoperability with other OCCI-enabled cloud management frameworks and infrastructures. ooi focuses on being non-invasive with a vanilla OpenStack installation, not tied to a particular OpenStack release version.

  4. SERENITY Aware Development of Security and Dependability Solutions

    NASA Astrophysics Data System (ADS)

    Serrano, Daniel; Maña, Antonio; Llarena, Rafael; Crespo, Beatriz Gallego-Nicasio; Li, Keqin

    This chapter presents an infrastructure supporting the implementation of Executable Components (ECs). ECs represent S&D solutions at the implementation level, that is, by means of pieces of executable code. ECs are instantiated by the Serenity Runtime Framework (SRF) as a result of requests coming from applications. The development of ECs requires programmers to have specific technical knowledge about SERENITY, since they need to implement certain interfaces of the ECs according to SERENITY standards. Every EC has to implement the interface between the SRF and the EC itself, and the interface that the EC offers to applications.

  5. A generic archive protocol and an implementation

    NASA Technical Reports Server (NTRS)

    Jordan, J. M.; Jennings, D. G.; Mcglynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.

    1992-01-01

    Archiving vast amounts of data has become a major part of every scientific space mission today. The Generic Archive/Retrieval Services Protocol (GRASP) addresses the question of how to archive the data collected in an environment where the underlying hardware archives may be rapidly changing. GRASP is a device independent specification defining a set of functions for storing and retrieving data from an archive, as well as other support functions. GRASP is divided into two levels: the Transfer Interface and the Action Interface. The Transfer Interface is computer/archive independent code while the Action Interface contains code which is dedicated to each archive/computer addressed. Implementations of the GRASP specification are currently available for DECstations running Ultrix, Sparcstations running SunOS, and microVAX/VAXstation 3100's. The underlying archive is assumed to function as a standard Unix or VMS file system. The code, written in C, is a single suite of files. Preprocessing commands define the machine unique code sections in the device interface. The implementation was written, to the greatest extent possible, using only ANSI standard C functions.
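
    The Transfer/Action split described above is essentially an abstract-interface pattern: device-independent code calls through an abstract Action layer, with one concrete Action class per archive/computer pair. A minimal sketch follows; the method names are assumptions, and an in-memory dict stands in for the archive's file system.

```python
from abc import ABC, abstractmethod

class ActionInterface(ABC):
    """Archive/computer-specific layer (one concrete subclass per target)."""
    @abstractmethod
    def store(self, name, data): ...
    @abstractmethod
    def retrieve(self, name): ...

class UnixFileAction(ActionInterface):
    """Action layer with an in-memory stand-in for a Unix file system."""
    def __init__(self):
        self.files = {}
    def store(self, name, data):
        self.files[name] = data
    def retrieve(self, name):
        return self.files[name]

class TransferInterface:
    """Device-independent layer: identical code regardless of the archive."""
    def __init__(self, action):
        self.action = action
    def archive(self, name, data):
        self.action.store(name, data)
    def fetch(self, name):
        return self.action.retrieve(name)

grasp = TransferInterface(UnixFileAction())
grasp.archive("obs001.dat", b"\x00\x01")
print(grasp.fetch("obs001.dat"))   # b'\x00\x01'
```

    Swapping archives then means writing a new Action subclass, which mirrors GRASP's goal of surviving rapid changes in the underlying hardware.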

  6. LSST communications middleware implementation

    NASA Astrophysics Data System (ADS)

    Mills, Dave; Schumacher, German; Lotz, Paul

    2016-07-01

    The LSST communications middleware is based on a set of software abstractions which provide standard interfaces for common communications services. The observatory requires communication between diverse subsystems, implemented by different contractors, and comprehensive archiving of subsystem status data. The Service Abstraction Layer (SAL) is implemented using open source packages that implement the open standards DDS (Data Distribution Service) for data communication and SQL (Structured Query Language) for database access. For every subsystem, abstractions for each of the telemetry data streams, along with Command/Response and Events, have been agreed with the appropriate component vendor (such as Dome, TMA, Hexapod) and captured in ICDs (Interface Control Documents). The OpenSplice (PrismTech) Community Edition of DDS provides an LGPL-licensed distribution which may be freely redistributed. The availability of the full source code provides assurance that the project will be able to maintain it over the full 10-year survey, independent of the fortunes of the original providers.

  7. A Standardized Interface for Obtaining Digital Planetary and Heliophysics Time Series Data

    NASA Astrophysics Data System (ADS)

    Vandegriff, Jon; Weigel, Robert; Faden, Jeremy; King, Todd; Candey, Robert

    2016-10-01

    We describe a low-level interface for accessing digital Planetary and Heliophysics data, focusing primarily on time-series data from in-situ instruments. As the volume and variety of planetary data have increased, it has become harder to merge diverse datasets into a common analysis environment. We are therefore building low-level computer-to-computer infrastructure to enable data from different missions or archives to interoperate. The key to enabling interoperability is a simple access interface that standardizes the common capabilities available from any data server: 1. identify the data resources that can be accessed; 2. describe each resource; and 3. get the data from a resource. We have created a standardized way for data servers to perform each of these three activities. We are also developing a standard streaming data format for the actual data content to be returned (i.e., the result of item 3). Our proposed standard access interface is simple enough that it could be implemented on top of or beside existing data services, or it could even be fully implemented by a small data provider as a way to ensure that the provider's holdings can participate in larger data systems or joint analysis with other datasets. We present details of the interface and of the streaming format, including a sample server designed to illustrate the data request and streaming capabilities.
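
    The three operations can be sketched as a tiny request handler: one endpoint lists resources, one describes a resource, and one streams its data. The endpoint names and catalog contents below are invented for illustration; they resemble, but are not claimed to be, the authors' actual interface.

```python
import json

# Hypothetical catalog of time-series resources held by a small data provider.
CATALOG = {
    "mag_field": {
        "description": "Vector magnetometer, 1-s cadence",
        "parameters": ["time", "bx", "by", "bz"],
        "data": [["2016-10-01T00:00:00", 1.2, -0.4, 3.1]],
    },
}

def handle(request):
    """Dispatch the three standard operations: list, describe, get."""
    if request == "/catalog":                      # 1. identify resources
        return json.dumps(sorted(CATALOG))
    resource, _, op = request.lstrip("/").partition("/")
    meta = CATALOG[resource]
    if op == "info":                               # 2. describe one resource
        return json.dumps({"description": meta["description"],
                           "parameters": meta["parameters"]})
    if op == "data":                               # 3. stream the data as CSV
        return "\n".join(",".join(map(str, row)) for row in meta["data"])
    raise ValueError(request)

print(handle("/catalog"))          # ["mag_field"]
print(handle("/mag_field/data"))   # 2016-10-01T00:00:00,1.2,-0.4,3.1
```

    Because the surface area is only three operations, such a handler can sit in front of an existing archive without modifying it.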

  8. GI-conf: A configuration tool for the GI-cat distributed catalog

    NASA Astrophysics Data System (ADS)

    Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.

    2009-04-01

    In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering and mediation functionalities. GI-cat applies a distributed approach, being able to distribute queries to the remote service providers of interest in an asynchronous style, and notifies the status of the queries to the caller, implementing an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. Two other interfaces are under testing: the CIM and EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface with a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. These include international standards such as the OGC Web Services (OGC CSW, WCS, WFS and WMS) as well as interoperability arrangements (i.e. community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf is a user-friendly configuration tool for GI-cat: a GUI application that employs a visual and very simple approach to configuring both the GI-cat publishing and distribution capabilities in a dynamic way. The tool allows one or more GI-cat configurations to be set. Each configuration consists of: a) the catalog standard interfaces published by GI-cat; b) the resources (i.e. services/servers) to be accessed and mediated, i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach.
    The main GI-conf functionalities are:
    • Interfaces and federated resources management: the user can set which interfaces must be published; in addition, new resources can be added, and already federated resources updated or removed.
    • Multiple configuration management: multiple GI-cat configurations can be defined; each configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported.
    • HTML report creation: an HTML report can be created, showing the currently active GI-cat configuration, including the resources being federated and the published interface endpoints.
    The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.

  9. An Architecture for Standardized Terminology Services by Wrapping and Integration of Existing Applications

    PubMed Central

    Cornet, Ronald; Prins, Antoon K.

    2003-01-01

    Research on terminology services has resulted in development of applications and definition of standards, but has not yet led to widespread use of (standardized) terminology services in practice. Current terminology services offer functionality both for concept representation and lexical knowledge representation, hampering the possibility of combining the strengths of dedicated (concept and lexical) services. We therefore propose an extensible architecture in which concept-related and lexicon-related components are integrated and made available through a uniform interface. This interface can be extended in order to conform to existing standards, making it possible to use dedicated (third-party) components in a standardized way. As a proof of concept and a reference implementation, a SOAP-based Java implementation of the terminology service is being developed, providing wrappers for Protégé and UMLS Knowledge Source Server. Other systems, such as the Description Logic-based reasoner RACER can be easily integrated by implementation of an appropriate wrapper. PMID:14728158
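
    The wrapping approach reduces to the classic adapter pattern: heterogeneous backends sit behind one uniform lookup interface, so a dedicated third-party component needs only a small adapter. The backend names and APIs below are invented stand-ins, not the actual Protégé or UMLS interfaces.

```python
class TerminologyService:
    """Uniform interface that clients code against."""
    def lookup(self, code):
        raise NotImplementedError

class LegacyLexiconWrapper(TerminologyService):
    """Adapter over a backend whose native call shape differs."""
    def __init__(self, backend):
        self.backend = backend
    def lookup(self, code):
        hits = self.backend.find_terms(query=code)   # backend's native API
        return hits[0] if hits else None

class FakeLexicon:
    """Stand-in backend with one hard-coded concept."""
    def find_terms(self, query):
        terms = {"C0011849": ["diabetes mellitus"]}
        return terms.get(query, [])

service = LegacyLexiconWrapper(FakeLexicon())
print(service.lookup("C0011849"))   # diabetes mellitus
print(service.lookup("XYZ"))        # None
```

    Exposing such adapters over SOAP, as the reference implementation does, then standardizes access without rewriting the wrapped components.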

  10. Study of the Alsys implementation of the Catalogue of Interface Features and Options for the Ada language for 80386 Unix

    NASA Technical Reports Server (NTRS)

    Gibson, James S.; Barnes, Michael J.; Ostermiller, Daniel L.

    1993-01-01

    A set of programs was written to test the functionality and performance of the Alsys Ada implementation of the Catalogue of Interface Features and Options (CIFO), a set of optional Ada packages for real-time applications. No problems were found with the task id, preemption control, or shared-data packages. Minor problems were found with the dispatching control, dynamic priority, events, non-waiting entry call, semaphore, and scheduling packages. The Alsys implementation is derived mostly from Release 2 of the CIFO standard, but includes some of the features of Release 3 and some modifications unique to Alsys. Performance measurements show that the semaphore and shared-data features are an order of magnitude faster than the same mechanisms using an Ada rendezvous. The non-waiting entry call is slightly faster than a standard rendezvous. The existence of errors in the implementation and the incompleteness of the documentation relative to the published standard impair the usefulness of this implementation. Despite these shortcomings, the Alsys CIFO implementation might be of value in the development of real-time applications.

  11. Development of a Multi-Agent m-Health Application Based on Various Protocols for Chronic Disease Self-Management.

    PubMed

    Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun

    2016-01-01

    The purpose of this study was to develop and evaluate a mobile health application (Self-Management mobile Personal Health Record: "SmPHR") to ensure the interoperability of various personal health devices (PHDs) and electronic medical record systems (EMRs) for continuous self-management by chronic disease patients. The SmPHR was developed for Android 4.0.3 and implemented according to the optimized standard protocol for each interface of healthcare services adopted by the Continua Health Alliance (CHA). That is, the Personal Area Network (PAN) interface between the application and PHDs implements ISO/IEEE 11073-20601, -10404, -10407, -10415, -10417, and the Bluetooth Health Device Profile (HDP); the wide area network (WAN) interface to EMRs implements HL7 V2.6; and the Health Record Network (HRN) interface implements the Continuity of Care Document (CCD) and Continuity of Care Record (CCR). Also, for SmPHR, we evaluated the transmission error rate across the interfaces using four PHDs and personal health record systems (PHRs) from previous research, with adult and elderly users, after receiving institutional review board (IRB) approval. In the evaluation, the PAN interface showed 15 (2.4 %) errors, and the WAN and HRN interfaces showed 13 (2.1 %) errors, in a total of 611 transmission attempts. We also received opinions regarding SmPHR from 15 healthcare professionals who took part in the clinical trial. Thus, SmPHR can be provided as an interconnected PHR mobile health service to patients, allowing 'plug and play' of PHDs and EMRs through various standard protocols.
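    The reported percentages follow directly from the error counts over the 611 transmission attempts; a quick check (the abstract appears to truncate rather than round the PAN figure, since 15/611 is about 2.45 %):

```python
def error_rate_percent(errors: int, attempts: int) -> float:
    """Transmission error rate, as a percentage of attempts."""
    return 100.0 * errors / attempts

# Counts reported in the abstract (611 transmission attempts in total).
pan_rate = error_rate_percent(15, 611)      # PAN interface errors
wan_hrn_rate = error_rate_percent(13, 611)  # WAN and HRN interface errors
```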

  12. Software handlers for process interfaces

    NASA Technical Reports Server (NTRS)

    Bercaw, R. W.

    1976-01-01

    The principles involved in the development of software handlers for custom interfacing problems are discussed. Handlers for the CAMAC standard are examined in detail. The types of transactions that must be supported have been established by standards groups, eliminating conflicting requirements arising out of different design philosophies and applications. Implementation of the standard handlers has been facilitated by the standardization of hardware. The necessary local processing can be placed in the handler when it is written or at run time by means of input/output directives, or it can be built into a high-performance input/output processor. The full benefits of these process interfaces will only be realized when software requirements are incorporated uniformly into the hardware.

  13. Interface Technology for Geometrically Nonlinear Analysis of Multiple Connected Subdomains

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    1997-01-01

    Interface technology for geometrically nonlinear analysis is presented and demonstrated. This technology is based on an interface element which makes use of a hybrid variational formulation to provide for compatibility between independently modeled connected subdomains. The interface element developed herein extends previous work to include geometric nonlinearity and to use standard linear and nonlinear solution procedures. Several benchmark nonlinear applications of the interface technology are presented and aspects of the implementation are discussed.

  14. International interface design for Space Station Freedom - Challenges and solutions

    NASA Technical Reports Server (NTRS)

    Mayo, Richard E.; Bolton, Gordon R.; Laurini, Daniele

    1988-01-01

    The definition of interfaces for the International Space Station is discussed, with a focus on negotiations between NASA and ESA. The program organization and division of responsibilities for the Space Station are outlined; the basic features of physical and functional interfaces are described; and particular attention is given to the interface management and documentation procedures, architectural control elements, interface implementation and verification, and examples of Columbus interface solutions (including mechanical, ECLSS, thermal-control, electrical, data-management, standardized user, and software interfaces). Diagrams, drawings, graphs, and tables listing interface types are provided.

  15. Development of the FITS tools package for multiple software environments

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; Blackburn, J. K.

    1992-01-01

    The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.
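    The key idea of the shared parameter interface is that a task's source code asks for parameters by name, and the library satisfies each request either from the command line or by prompting the user. A minimal Python sketch of that behaviour (the function names and the `name=value` argument convention are illustrative, not the actual FTOOLS API):

```python
def get_parameter(name, cli_args, prompt=input):
    """Return a task parameter: use the command line if supplied,
    otherwise prompt the user (IRAF-style query behaviour)."""
    if name in cli_args:
        return cli_args[name]
    return prompt(f"{name} = ")

def run_task(argv, prompt=input):
    """Toy task body: argv is a list like ["infile=test.fits", "rows=10"]."""
    cli = dict(a.split("=", 1) for a in argv)
    infile = get_parameter("infile", cli, prompt)
    rows = int(get_parameter("rows", cli, prompt))
    return infile, rows
```

    Supplying all parameters on the command line runs the task non-interactively; omitting one triggers a prompt, which mirrors how the same task body can serve both IRAF and stand-alone use.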

  16. Object Management Group object transaction service based on an X/Open and International Organization for Standardization open systems interconnection transaction processing kernel

    NASA Astrophysics Data System (ADS)

    Liang, J.; Sédillot, S.; Traverson, B.

    1997-09-01

    This paper addresses the federation of a transactional object standard - the Object Management Group (OMG) object transaction service (OTS) - with the X/Open distributed transaction processing (DTP) model and the International Organization for Standardization (ISO) open systems interconnection (OSI) transaction processing (TP) communication protocol. The two-phase commit propagation rules within a distributed transaction tree are similar in the X/Open, ISO and OMG models. Building an OTS on an OSI TP protocol machine is possible because the two specifications are somewhat complementary: OTS defines a set of external interfaces without a specific internal protocol machine, while OSI TP specifies an internal protocol machine without any application programming interface. Given these observations, and having already implemented an X/Open two-phase commit transaction toolkit based on an OSI TP protocol machine, we analyse the feasibility of using this implementation as a transaction service provider for OMG interfaces. Based on the favourable result of this feasibility study, we are implementing an OTS-compliant system which, by inheriting the extensibility and openness strengths of OSI TP, is able to provide interoperability between the X/Open DTP and OMG OTS models.
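    The two-phase commit propagation rule shared by the X/Open, ISO and OMG models reduces to a simple coordinator loop: commit only if every participant votes yes in the prepare phase, otherwise roll everyone back. A toy Python sketch (the class and method names are illustrative, not OTS or OSI TP interfaces):

```python
class Participant:
    """A resource manager that votes in phase 1 and obeys the coordinator in phase 2."""

    def __init__(self, name, vote_commit=True):
        self.name, self.vote_commit, self.state = name, vote_commit, "active"

    def prepare(self):
        # Phase 1: vote, and become durable ("prepared") on a yes vote.
        self.state = "prepared" if self.vote_commit else "aborted"
        return self.vote_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Coordinator: commit only if every participant votes yes."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"
```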

  17. NASA Docking System (NDS) Interface Definitions Document (IDD). Revision F, Dec. 15, 2011

    NASA Technical Reports Server (NTRS)

    Lewis, James

    2011-01-01

    The NASA Docking System (NDS) mating system supports low approach velocity docking and provides a modular and reconfigurable standard interface, supporting crewed and autonomous vehicles during mating and assembly operations. The NDS is NASA's implementation of the International Docking System Standard (IDSS) using low impact docking technology. All NDS configurations can mate with the configuration specified in the IDSS Interface Definition Document (IDD), Revision A, released May 13, 2011. The NDS evolved from the Low Impact Docking System (LIDS). The term international Low Impact Docking System (iLIDS), and its associated acronym, is also used to describe this system; NDS and iLIDS may be used interchangeably. Some of the heritage documentation and implementations (e.g., software command names) used on the NDS will continue to use the LIDS acronym.

  18. Serial Interface through Stream Protocol on EPICS Platform for Distributed Control and Monitoring

    NASA Astrophysics Data System (ADS)

    Das Gupta, Arnab; Srivastava, Amit K.; Sunil, S.; Khan, Ziauddin

    2017-04-01

    Remote operation of equipment and devices is implemented in distributed systems for the control and proper monitoring of process values. For such remote operations, the Experimental Physics and Industrial Control System (EPICS) is used as an important software tool for the control and monitoring of a wide range of scientific parameters. A hardware interface is developed for the implementation of EPICS software so that different equipment such as data converters, power supplies, pump controllers, etc. can be remotely operated through the stream protocol. EPICS base was set up on Windows as well as Linux operating systems for control and monitoring, while the EPICS modules asyn and Stream Device were used to interface the equipment over the standard RS-232/RS-485 protocols. The Stream Device protocol communicates over the serial line through an interface to asyn drivers. The graphical user interface and alarm handling were implemented with the Motif Editor and Display Manager (MEDM) and the Alarm Handler (ALH) command-line channel access utility tools. This paper describes the developed application, which was tested with different equipment and devices serially interfaced to PCs on a distributed network.
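    A Stream Device protocol entry essentially pairs an output command with an input format used to parse the reply. The Python sketch below imitates that command/reply pattern with a stand-in device; the `PRES?` query, reply format, and class names are hypothetical, not part of EPICS.

```python
class SerialDevice:
    """Stand-in for a serial instrument that answers simple text queries."""

    def __init__(self, values):
        self._values = values

    def ask(self, command: str) -> str:
        # e.g. "PRES?" -> "PRES 1.013\r\n"
        key = command.rstrip("?")
        return f"{key} {self._values[key]}\r\n"

def read_process_value(device, command, out_terminator="\r\n"):
    """Send a query and parse the numeric reply, the way a stream-device
    protocol entry like: out "PRES?"; in "PRES %f"; would."""
    reply = device.ask(command).rstrip(out_terminator)
    _, value = reply.split()
    return float(value)
```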

  19. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.

  20. Use of a Microprocessor to Implement an ADCCP Protocol (Federal Standard 1003).

    DTIC Science & Technology

    1980-07-01

    results of other studies, to evaluate the operational and economic impact of incorporating various options in Federal Standard 1003. The effort...the LSI interface and the microprocessor; the LSI chip deposits bytes in its buffer as the producer, and the MPU reads this data as the consumer...on the interface between the MPU and the LSI protocol chip. This requires two main processes to be running at the same time--transmit and receive. The

  1. A SAS Interface for Bayesian Analysis with WinBUGS

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; McArdle, John J.; Wang, Lijuan; Hamagami, Fumiaki

    2008-01-01

    Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model…

  2. Design and implementation of the NPOI database and website

    NASA Astrophysics Data System (ADS)

    Newman, K.; Jorgensen, A. M.; Landavazo, M.; Sun, B.; Hutter, D. J.; Armstrong, J. T.; Mozurkewich, David; Elias, N.; van Belle, G. T.; Schmitt, H. R.; Baines, E. K.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) has been recording astronomical observations for nearly two decades, at this point with hundreds of thousands of individual observations recorded to date for a total data volume of many terabytes. To make maximum use of the NPOI data it is necessary to organize them in an easily searchable manner and to be able to extract essential diagnostic information from the data to allow users to quickly gauge data quality and suitability for a specific science investigation. This sets the motivation for creating a comprehensive database of observation metadata as well as, at least, reduced data products. The NPOI database is implemented in MySQL using standard database tools and interfaces. The use of standard database tools allows us to focus on top-level database and interface implementation and take advantage of standard features such as backup, remote access, mirroring, and complex queries which would otherwise be time-consuming to implement. A website was created in order to give scientists a user-friendly interface for searching the database. It allows the user to select various metadata to search for and also to decide how and what results are displayed. This streamlines the searches, making it easier and quicker for scientists to find the information they are looking for. The website supports multiple browsers and devices. In this paper we present the design of the NPOI database and website, and give examples of its use.
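    The paper's database is MySQL, but the shape of such a metadata search can be sketched with Python's built-in sqlite3 as a stand-in; the `observations` schema and sample rows below are hypothetical, not the actual NPOI tables.

```python
import sqlite3

# In-memory stand-in for the observation-metadata database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
    obs_id   INTEGER PRIMARY KEY,
    target   TEXT,
    obs_date TEXT,
    n_scans  INTEGER)""")
conn.executemany(
    "INSERT INTO observations (target, obs_date, n_scans) VALUES (?, ?, ?)",
    [("Vega", "2013-06-01", 42),
     ("Altair", "2013-06-02", 17),
     ("Vega", "2013-07-15", 8)])

# The kind of query the website would issue for a target search.
rows = conn.execute(
    "SELECT obs_date, n_scans FROM observations "
    "WHERE target = ? ORDER BY obs_date", ("Vega",)).fetchall()
```

    More complex searches (date ranges, joins against quality diagnostics) follow the same pattern, which is the advantage of standard database tooling noted above.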

  3. Implementation of an Intelligent Control System

    DTIC Science & Technology

    1992-05-01

    therefore implemented in a portable equipment rack. The controls computer consists of a microcomputer running a real-time operating system, interface...circuit boards are mounted in an industry standard Multibus I chassis. The microcomputer runs the iRMX real-time operating system. This operating system

  4. OpenMI: the essential concepts and their implications for legacy software

    NASA Astrophysics Data System (ADS)

    Gregersen, J. B.; Gijsbers, P. J. A.; Westen, S. J. P.; Blind, M.

    2005-08-01

    Information & Communication Technology (ICT) tools such as computational models are very helpful in designing river basin management plans (RBMPs). However, in the scientific world there is consensus that a single integrated modelling system to support, e.g., the implementation of the Water Framework Directive cannot be developed, and that integrated systems need to be very much tailored to the local situation. As a consequence, there is an urgent need to increase the flexibility of modelling systems, such that dedicated model systems can be developed from available building blocks. The HarmonIT project aims at precisely that. Its objective is to develop and implement a standard interface for modelling components and other relevant tools: the Open Modelling Interface (OpenMI) standard. The OpenMI standard has been completed and documented. It relies entirely on the "pull" principle, where data are pulled by one model from the previous model in the chain. This paper gives an overview of the OpenMI standard and explains the foremost concepts and the rationale behind it.
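    The "pull" principle can be illustrated with a toy chain of linked components: asking the last model for values at some time recursively pulls values from the model upstream of it. This Python sketch only mirrors the idea; it is not the actual OpenMI `ILinkableComponent` API, and the rainfall/runoff/river chain is invented for illustration.

```python
class Component:
    """Minimal linkable component in the spirit of OpenMI's pull principle:
    a request for values triggers a request on the upstream component."""

    def __init__(self, name, compute, upstream=None):
        self.name, self.compute, self.upstream = name, compute, upstream

    def get_values(self, time):
        # Pull input from the upstream model first, then transform it.
        inflow = self.upstream.get_values(time) if self.upstream else 0.0
        return self.compute(time, inflow)

# A made-up chain: rainfall drives runoff, which drives river flow.
rainfall = Component("rainfall", lambda t, _: 2.0 * t)
runoff = Component("runoff", lambda t, x: 0.5 * x, rainfall)
river = Component("river", lambda t, x: x + 1.0, runoff)
```

    A single call such as `river.get_values(3.0)` propagates backwards through the whole chain, which is why no central scheduler is needed.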

  5. Building energy simulation coupled with CFD for indoor environment: A critical review and recent applications

    DOE PAGES

    Tian, Wei; Han, Xu; Zuo, Wangda; ...

    2018-01-31

    This paper presents a comprehensive review of the open literature on motivations, methods and applications of linking stratified airflow simulation to building energy simulation (BES). First, we reviewed the motivations for coupling prediction models for building energy and indoor environment. This review classified various exchanged data in different applications as interface data and state data, and found that choosing different data sets may lead to varying performance of stability, convergence, and speed for the co-simulation. Second, our review shows that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme. The external coupling is shown to be generally faster in computational speed, as well as easier to implement, maintain and expand than the internal coupling. Third, the external coupling can be carried out in different data synchronization schemes, including static coupling and dynamic coupling. In comparison, the static coupling that performs data exchange only once is computationally faster and more stable than the dynamic coupling. However, concerning accuracy, the dynamic coupling that requires multiple times of data exchange is more accurate than the static coupling. Furthermore, the review identified that the implementation of the external coupling can be achieved through customized interfaces, middleware, and standard interfaces. The customized interface is straightforward but may be limited to a specific coupling application. The middleware is versatile and user-friendly but usually limited in data synchronization schemes. The standard interface is versatile and promising, but may be difficult to implement. Current applications of the co-simulation are mainly energy performance evaluation and control studies. Finally, we discussed the limitations of the current research and provided an overview for future research.
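    The static/dynamic distinction can be made concrete with two toy models that depend on each other's output. The coefficients below are made up purely to illustrate the trade-off: a one-shot exchange is cheap but leaves the two models mutually inconsistent, while iterating the exchange within the time step converges to a consistent state at higher cost.

```python
def bes_step(t_air):
    """Toy building-energy model: heat flux depends on the CFD air temperature."""
    return 20.0 - 0.5 * t_air          # hypothetical coefficients

def cfd_step(heat_flux):
    """Toy airflow model: air temperature depends on the BES heat flux."""
    return 18.0 + 0.2 * heat_flux      # hypothetical coefficients

def static_coupling(t_air_guess=25.0):
    """Static coupling: a single exchange (fast and stable, less accurate)."""
    return cfd_step(bes_step(t_air_guess)), 1

def dynamic_coupling(t_air_guess=25.0, tol=1e-6, max_iter=50):
    """Dynamic coupling: repeat the exchange until the models agree."""
    t_air = t_air_guess
    for n in range(1, max_iter + 1):
        t_new = cfd_step(bes_step(t_air))
        if abs(t_new - t_air) < tol:
            return t_new, n
        t_air = t_new
    return t_air, max_iter
```

    For these coefficients the consistent solution is 20.0; the static scheme stops at 19.5 after one exchange, while the dynamic scheme iterates to the fixed point.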

  6. Building energy simulation coupled with CFD for indoor environment: A critical review and recent applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Wei; Han, Xu; Zuo, Wangda

    This paper presents a comprehensive review of the open literature on motivations, methods and applications of linking stratified airflow simulation to building energy simulation (BES). First, we reviewed the motivations for coupling prediction models for building energy and indoor environment. This review classified various exchanged data in different applications as interface data and state data, and found that choosing different data sets may lead to varying performance of stability, convergence, and speed for the co-simulation. Second, our review shows that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme. The external coupling is shown to be generally faster in computational speed, as well as easier to implement, maintain and expand than the internal coupling. Third, the external coupling can be carried out in different data synchronization schemes, including static coupling and dynamic coupling. In comparison, the static coupling that performs data exchange only once is computationally faster and more stable than the dynamic coupling. However, concerning accuracy, the dynamic coupling that requires multiple times of data exchange is more accurate than the static coupling. Furthermore, the review identified that the implementation of the external coupling can be achieved through customized interfaces, middleware, and standard interfaces. The customized interface is straightforward but may be limited to a specific coupling application. The middleware is versatile and user-friendly but usually limited in data synchronization schemes. The standard interface is versatile and promising, but may be difficult to implement. Current applications of the co-simulation are mainly energy performance evaluation and control studies. Finally, we discussed the limitations of the current research and provided an overview for future research.

  7. 3D hierarchical interface-enriched finite element method: Implementation and applications

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Ahmadian, Hossein

    2015-10-01

    A hierarchical interface-enriched finite element method (HIFEM) is proposed for the mesh-independent treatment of 3D problems with intricate morphologies. The HIFEM implements a recursive algorithm for creating enrichment functions that capture gradient discontinuities in nonconforming finite elements cut by arbitrary number and configuration of materials interfaces. The method enables the mesh-independent simulation of multiphase problems with materials interfaces that are in close proximity or contact while providing a straightforward general approach for evaluating the enrichments. In this manuscript, we present a detailed discussion on the implementation issues and required computational geometry considerations associated with the HIFEM approximation of thermal and mechanical responses of 3D problems. A convergence study is provided to investigate the accuracy and convergence rate of the HIFEM and compare them with standard FEM benchmark solutions. We will also demonstrate the application of this mesh-independent method for simulating the thermal and mechanical responses of two composite materials systems with complex microstructures.

  8. NASA Docking System (NDS) Interface Definitions Document (IDD). Revision C, Nov. 2010

    NASA Technical Reports Server (NTRS)

    2010-01-01

    The NASA Docking System (NDS) mating system supports low approach velocity docking and provides a modular and reconfigurable standard interface, supporting crewed and autonomous vehicles during mating and assembly operations. The NDS is NASA's implementation of the emerging International Docking System Standard (IDSS) using low impact docking technology. All NDS configurations can mate with the configuration specified in the IDSS Interface Definition Document (IDD) released September 21, 2010. The NDS evolved from the Low Impact Docking System (LIDS). The acronym international Low Impact Docking System (iLIDS) is also used to describe this system; NDS and iLIDS may be used interchangeably. Some of the heritage documentation and implementations (e.g., software command names) used on NDS will continue to use the LIDS acronym. The NDS IDD defines the interface characteristics and performance capability of the NDS, including uses ranging from crewed to autonomous space vehicles and from low earth orbit to deep space exploration. The responsibility for developing space vehicles and for making them technically and operationally compatible with the NDS rests with the vehicle providers. Host vehicle examples include crewed/uncrewed spacecraft, space station modules, elements, etc. Within this document, any docking space vehicle will be referred to as the host vehicle. This document defines the NDS-to-NDS interfaces, as well as the NDS-to-host vehicle interfaces and performance capability.

  9. Extending the Solvation-Layer Interface Condition Continuum Electrostatic Model to a Linearized Poisson-Boltzmann Solvent.

    PubMed

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P

    2017-06-13

    We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.

  10. Aerospace Ground Equipment for model 4080 sequence programmer. A standard computer terminal is adapted to provide convenient operator to device interface

    NASA Technical Reports Server (NTRS)

    Nissley, L. E.

    1979-01-01

    The Aerospace Ground Equipment (AGE) provides an interface between a human operator and a complete spaceborne sequence timing device with a memory storage program. The AGE provides a means for composing, editing, syntax checking, and storing timing device programs. The AGE is implemented with a standard Hewlett-Packard 2649A terminal system and a minimum of special hardware. The terminal's dual tape interface is used to store timing device programs and to read in special AGE operating system software. To compose a new program for the timing device the keyboard is used to fill in a form displayed on the screen.

  11. Implementation of the fugitive emissions system program: The OxyChem experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deshmukh, A.

    An overview is provided of the Fugitive Emissions System (FES) that has been implemented at Occidental Chemical in conjunction with the computer-based maintenance system called PassPort® developed by Indus Corporation. The goal of the PassPort® FES program has been to interface with facilities data, equipment information, work standards, and work orders. Along the way, several implementation hurdles had to be overcome before a monitoring and regulatory system could be standardized for the appropriate maintenance, process, and environmental groups. This presentation includes a step-by-step account of several case studies that developed during the implementation of the FES system.

  12. Towards a Taxonomy of Metaphorical Graphical User Interfaces: Demands and Implementations.

    ERIC Educational Resources Information Center

    Cates, Ward Mitchell

    The graphical user interface (GUI) has become something of a standard for instructional programs in recent years. One type of GUI is the metaphorical type. For example, the Macintosh GUI is based on the "desktop" metaphor, where objects one manipulates within the GUI are implied to be objects one might find on a real office desktop.…

  13. Implementation of data acquisition interface using on-board field-programmable gate array (FPGA) universal serial bus (USB) link

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Ibrahim, M. M.; Lombigit, L.; Rahman, N. A. A.; Zin, M. R. M.

    2014-02-01

    Typically, a system consists of controller hardware and software installed on a personal computer (PC). In effective nuclear detection, the hardware involves the detection setup and the electronics used, with the software consisting of analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However, the implementation of USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC by exploiting the USB link of the FPGA board. The USB link is based on an FTDI chip which allows direct access of input and output to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD) with a 24 MHz clock input to the USB link. The implementation and results of using the USB link of the FPGA board for data interfacing are discussed.

  14. Implementation of data acquisition interface using on-board field-programmable gate array (FPGA) universal serial bus (USB) link

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yussup, N.; Ibrahim, M. M.; Lombigit, L.

    Typically a system consists of hardware as the controller and software which is installed in the personal computer (PC). In the effective nuclear detection, the hardware involves the detection setup and the electronics used, with the software consisting of analysis tools and graphical display on PC. A data acquisition interface is necessary to enable the communication between the controller hardware and PC. Nowadays, Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However the implementation of USB is complex. This paper describes the implementation of data acquisition interface between a field-programmable gate array (FPGA) board and a PC by exploiting the USB link of the FPGA board. The USB link is based on an FTDI chip which allows direct access of input and output to the Joint Test Action Group (JTAG) signals from a USB host and a complex programmable logic device (CPLD) with a 24 MHz clock input to the USB link. The implementation and results of using the USB link of FPGA board as the data interfacing are discussed.

  15. Implementation of High Speed Distributed Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Raju, Anju P.; Sekhar, Ambika

    2012-09-01

This paper introduces a high speed distributed data acquisition system based on a field programmable gate array (FPGA). The aim is to develop a "distributed" data acquisition interface. The development of instruments such as personal computers and engineering workstations based on "standard" platforms is the motivation behind this effort. Using standard platforms as the controlling unit allows independence in hardware from a particular vendor and hardware platform. The distributed approach also has advantages from a functional point of view: acquisition resources become available to multiple instruments, and the acquisition front-end can be physically remote from the rest of the instrument. The high speed data acquisition system transmits data to a remote computer system through an Ethernet interface. The data is acquired through 16 analog input channels. The inputs are multiplexed and digitized, and the data is then stored in a 1K buffer for each input channel. The main control unit in this design is a 16 bit processor implemented in the FPGA. This processor is used to set up and initialize the data source and the Ethernet controller, as well as to control the flow of data from the memory element to the NIC. Using this processor, the different configuration registers in the Ethernet controller can be initialized and controlled easily. The data packets are then sent to the remote PC through the Ethernet interface. The main advantages of using an FPGA as the standard platform are its flexibility, low power consumption, short design duration, fast time to market, programmability and high density. The main advantages of using the AX88796 Ethernet controller over others are its non-PCI interface, the presence of embedded SRAM in which the transmit and receive buffers are located, and its high-performance SRAM-like interface. The paper presents the implementation of the distributed data acquisition system on an FPGA using VHDL. The main advantages of this system are high accuracy, high speed and real-time monitoring.
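
    The per-channel buffering and Ethernet transfer described above can be sketched as a simple framing scheme. The layout below (a channel id, a sample count, then big-endian 16-bit samples) is purely illustrative; the paper does not specify its packet format:

    ```python
    import struct

    def pack_frame(channel: int, samples: list[int]) -> bytes:
        """Pack one channel's 16-bit samples into a frame:
        2-byte channel id, 2-byte sample count, big-endian samples
        (hypothetical layout, not the paper's)."""
        return struct.pack(f">HH{len(samples)}H", channel, len(samples), *samples)

    def unpack_frame(frame: bytes) -> tuple[int, list[int]]:
        """Inverse of pack_frame: recover the channel id and sample list."""
        channel, count = struct.unpack_from(">HH", frame, 0)
        samples = list(struct.unpack_from(f">{count}H", frame, 4))
        return channel, samples

    # One buffered burst from analog input channel 3.
    frame = pack_frame(3, [100, 200, 65535])
    ```

    On the receiving PC, `unpack_frame` restores the channel id and samples regardless of which node on the network produced the frame.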

  16. Lessons Learned during the Development and Operation of Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Ohishi, M.; Shirasaki, Y.; Komiya, Y.; Mizumoto, Y.; Yasuda, N.; Tanaka, M.

    2010-12-01

In the last few years, several Virtual Observatory (VO) projects have moved from the research and development phase into the operations phase. The VO projects include AstroGrid (UK), the Virtual Astronomical Observatory (formerly the National Virtual Observatory, USA), EURO-VO (EU), the Japanese Virtual Observatory (Japan), and so on. This successful transition from development to operations owes primarily to the concerted effort, conducted within the International Virtual Observatory Alliance (IVOA), to develop standard interfaces among the VO projects of the world. The registry interface has been one of the most important keys to sharing observed data and catalog data among the VO projects and data centers (data providers). Data access protocols and languages (SIAP, SSAP, ADQL) and the common data format (VOTable) are other keys. Consequently, scientific papers based on VO data have already been published. However, we encountered several issues during the implementation process, as follows:

  - At the initial stage of the registry implementation, some of the registry metadata were set incorrectly or were missing. IVOA members found that validation tools would be needed to check compliance before an interface is made public;
  - Some data centers and/or data providers found it difficult to implement the various standardized interfaces (protocols) required to publish their data through the VO. A VO interface toolkit of some kind would make it much easier for data centers to implement the VO interfaces;
  - The current VO standardization effort has not discussed in depth the quality assurance of published data, or how indexes of data quality could be provided. Such measures would be quite helpful for users judging data quality. This issue needs to be discussed not only within the IVOA but also with observatories and data providers;
  - Past and current development in the VO projects has been driven from the technology side. However, since the ultimate purpose of the VOs is to accelerate the extraction of astronomical insight from, e.g., huge amounts of data or multi-wavelength data, science-driven outreach (including schools to train astronomers) is needed;
  - Some data centers and data providers stated that they need to be credited. In the data-centric science era it is crucial to explicitly credit the observatories, data centers and data providers.

  Some suggestions to address these issues are described.
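
    As a flavour of the common data format mentioned above, the sketch below extracts rows from a hand-written, namespace-free VOTable fragment; real VOTable documents carry an XML namespace and much richer FIELD metadata:

    ```python
    import xml.etree.ElementTree as ET

    # A minimal VOTable-like document, hand-written here for illustration.
    VOTABLE = """<VOTABLE><RESOURCE><TABLE>
      <FIELD name="ra"/><FIELD name="dec"/>
      <DATA><TABLEDATA>
        <TR><TD>83.63</TD><TD>22.01</TD></TR>
        <TR><TD>83.82</TD><TD>21.90</TD></TR>
      </TABLEDATA></DATA>
    </TABLE></RESOURCE></VOTABLE>"""

    def votable_rows(xml_text: str) -> list[dict[str, float]]:
        """Pair each TR/TD cell with its FIELD name (ignores namespaces
        and non-numeric columns for brevity)."""
        root = ET.fromstring(xml_text)
        names = [f.get("name") for f in root.iter("FIELD")]
        rows = []
        for tr in root.iter("TR"):
            cells = [float(td.text) for td in tr.iter("TD")]
            rows.append(dict(zip(names, cells)))
        return rows

    rows = votable_rows(VOTABLE)  # two rows keyed by 'ra' and 'dec'
    ```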

  22. [Interface interconnection and data integration in implementing of digital operating room].

    PubMed

    Feng, Jingyi; Chen, Hua; Liu, Jiquan

    2011-10-01

The digital operating room, with highly integrated clinical information, is very important for saving patients' lives and improving the quality of operations. Since the equipment in domestic operating rooms has diverse interfaces and nonstandard communication protocols, designing and implementing an integrated data-sharing scheme for different kinds of diagnostic, monitoring and treatment equipment is a key point in the construction of a digital operating room. This paper addresses interface interconnection and data integration for commonly used clinical equipment from the aspects of hardware interface, interface connection and communication protocol, and offers a solution for the interconnection and integration of clinical equipment in a heterogeneous environment. Based on this solution, a case of an optimal digital operating room is presented. Compared with international solutions for the digital operating room, the solution proposed in this paper is more economical and effective. Finally, this paper provides a proposal for the platform construction of the digital operating room as well as a viewpoint on the standardization of domestic clinical equipment.
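
    The interconnection problem described above is essentially an adapter problem: each vendor-specific protocol is translated into one vendor-neutral record. The sketch below is a minimal illustration with two invented device formats, not the paper's actual solution:

    ```python
    from dataclasses import dataclass

    @dataclass
    class VitalRecord:
        """Vendor-neutral record shared by all integrated devices (illustrative)."""
        device_id: str
        parameter: str  # e.g. "heart_rate"
        value: float
        unit: str

    class MonitorAdapterA:
        """Adapter for a hypothetical vendor whose monitor emits 'HR=72bpm'."""
        def parse(self, device_id: str, raw: str) -> VitalRecord:
            _, rest = raw.split("=")
            return VitalRecord(device_id, "heart_rate", float(rest.rstrip("bpm")), "bpm")

    class MonitorAdapterB:
        """Adapter for a hypothetical vendor that emits CSV 'hr,72'."""
        def parse(self, device_id: str, raw: str) -> VitalRecord:
            _, value = raw.split(",")
            return VitalRecord(device_id, "heart_rate", float(value), "bpm")

    # Two heterogeneous devices, one common record downstream.
    rec_a = MonitorAdapterA().parse("or1-monitorA", "HR=72bpm")
    rec_b = MonitorAdapterB().parse("or1-monitorB", "hr,72")
    ```

    Downstream consumers (displays, documentation systems) only ever see `VitalRecord`, so adding a new device means adding one adapter, not touching every consumer.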

  23. New tracking implementation in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Berner, Jeff B.; Bryant, Scott H.

    2001-01-01

    As part of the Network Simplification Project, the tracking system of the Deep Space Network is being upgraded. This upgrade replaces the discrete logic sequential ranging system with a system that is based on commercial Digital Signal Processor boards. The new implementation allows both sequential and pseudo-noise types of ranging. The other major change is a modernization of the data formatting. Previously, there were several types of interfaces, delivering both intermediate data and processed data (called 'observables'). All of these interfaces were bit-packed blocks, which do not allow for easy expansion, and many of these interfaces required knowledge of the specific hardware implementations. The new interface supports four classes of data: raw (direct from the measuring equipment), derived (the observable data), interferometric (multiple antenna measurements), and filtered (data whose values depend on multiple measurements). All of the measurements are reported at the sky frequency or phase level, so that no knowledge of the actual hardware is required. The data is formatted into Standard Formatted Data Units, as defined by the Consultative Committee for Space Data Systems, so that expansion and cross-center usage is greatly enhanced.

  24. Design guide for low cost standardized payloads, volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

Concept point designs of low cost and refurbishable spacecraft, subsystems, and modules revealed payload program savings of up to 50 percent. The general relationship of payload approaches to program costs; cost reductions from low cost standardized payloads; cost effective application of payload reliability, MMD, repair, and refurbishment; and implementation of standardization for future spacecraft are discussed. Shuttle interfaces and support equipment for future payloads are also considered.

  25. An IEEE 1451.1 Architecture for ISHM Applications

    NASA Technical Reports Server (NTRS)

    Morris, Jon A.; Turowski, Mark; Schmalzel, John L.; Figueroa, Jorge F.

    2007-01-01

    The IEEE 1451.1 Standard for a Smart Transducer Interface defines a common network information model for connecting and managing smart elements in control and data acquisition networks using network-capable application processors (NCAPs). The Standard is a network-neutral design model that is easily ported across operating systems and physical networks for implementing complex acquisition and control applications by simply plugging in the appropriate network level drivers. To simplify configuration and tracking of transducer and actuator details, the family of 1451 standards defines a Transducer Electronic Data Sheet (TEDS) that is associated with each physical element. The TEDS contains all of the pertinent information about the physical operations of a transducer (such as operating regions, calibration tables, and manufacturer information), which the NCAP uses to configure the system to support a specific transducer. The Integrated Systems Health Management (ISHM) group at NASA's John C. Stennis Space Center (SSC) has been developing an ISHM architecture that utilizes IEEE 1451.1 as the primary configuration and data acquisition mechanism for managing and collecting information from a network of distributed intelligent sensing elements. This work has involved collaboration with other NASA centers, universities and aerospace industries to develop IEEE 1451.1 compliant sensors and interfaces tailored to support health assessment of complex systems. This paper and presentation describe the development and implementation of an interface for the configuration, management and communication of data, information and knowledge generated by a distributed system of IEEE 1451.1 intelligent elements monitoring a rocket engine test system. In this context, an intelligent element is defined as one incorporating support for the IEEE 1451.x standards and additional ISHM functions. 
Our implementation supports real-time collection of both measurement data (raw ADC counts and converted engineering units) and health statistics produced by each intelligent element. The handling of configuration, calibration and health information is automated by using the TEDS in combination with other electronic data sheet extensions to convey health parameters. By integrating the IEEE 1451.1 Standard for a Smart Transducer Interface with ISHM technologies, each element within a complex system becomes a highly flexible computation engine capable of self-validation and of assessing the quality of the information it produces.
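
    The role of the TEDS in converting raw ADC counts to engineering units can be sketched as follows. The fields and the linear calibration are illustrative; an actual IEEE 1451 TEDS is a structured binary template with far more detail:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Teds:
        """Minimal TEDS-like record (illustrative fields, not the
        IEEE 1451 binary layout)."""
        manufacturer: str
        units: str
        adc_bits: int
        cal_offset: float  # engineering units at raw count 0
        cal_gain: float    # engineering units per count

    def to_engineering_units(teds: Teds, raw_count: int) -> float:
        """Convert a raw ADC count to engineering units using the
        TEDS linear calibration."""
        full_scale = (1 << teds.adc_bits) - 1
        if not 0 <= raw_count <= full_scale:
            raise ValueError(f"count {raw_count} outside 0..{full_scale}")
        return teds.cal_offset + teds.cal_gain * raw_count

    # A hypothetical 12-bit pressure transducer: 0.25 psi per count.
    pressure_teds = Teds("Acme", "psi", 12, 0.0, 0.25)
    reading = to_engineering_units(pressure_teds, 2000)
    ```

    Because the calibration travels with the transducer in its TEDS, the NCAP can configure itself for a newly plugged-in sensor without manual setup.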

  1. The Planetary Science Archive (PSA): Exploration and discovery of scientific datasets from ESA's planetary missions

    NASA Astrophysics Data System (ADS)

    Vallat, C.; Besse, S.; Barbarisi, I.; Arviset, C.; De Marchi, G.; Barthelemy, M.; Coia, D.; Costa, M.; Docasal, R.; Fraga, D.; Heather, D. J.; Lim, T.; Macfarlane, A.; Martinez, S.; Rios, C.; Vallejo, F.; Said, J.

    2017-09-01

    The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces at http://psa.esa.int. All datasets are scientifically peer-reviewed by independent scientists, and are compliant with the Planetary Data System (PDS) standards. The PSA has started to implement a number of significant improvements, mostly driven by the evolution of the PDS standards, and the growing need for better interfaces and advanced applications to support science exploitation.

  2. FAILSAFE Health Management for Embedded Systems

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Wagner, David A.; Wen, Hui Ying; Barry, Matthew

    2010-01-01

The FAILSAFE project is developing concepts and prototype implementations for software health management in mission-critical, real-time embedded systems. The project unites features of the industry-standard ARINC 653 Avionics Application Software Standard Interface and JPL's Mission Data System (MDS) technology (see figure). The ARINC 653 standard establishes requirements for the services provided by partitioned, real-time operating systems. The MDS technology provides a state analysis method, canonical architecture, and software framework that facilitates the design and implementation of software-intensive complex systems. The MDS technology has been used to provide the health management function for an ARINC 653 application implementation. In particular, the focus is on showing how this combination enables reasoning about, and recovering from, application software problems.
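
    One core ingredient of such health management, detecting a partition that has stopped reporting, can be sketched as a deadline monitor. This is an illustrative analogue only, not the ARINC 653 health-monitor API or the MDS framework:

    ```python
    class PartitionHealthMonitor:
        """Flags partitions that miss their reporting deadline so a
        recovery action can be triggered (illustrative, hypothetical API)."""

        def __init__(self, deadline_s: float):
            self.deadline_s = deadline_s
            self.last_beat: dict[str, float] = {}

        def heartbeat(self, partition: str, now: float) -> None:
            """Record that a partition reported in at time `now` (seconds)."""
            self.last_beat[partition] = now

        def failed_partitions(self, now: float) -> list[str]:
            """Return partitions whose last heartbeat is older than the deadline."""
            return [p for p, t in self.last_beat.items()
                    if now - t > self.deadline_s]

    mon = PartitionHealthMonitor(deadline_s=0.5)
    mon.heartbeat("nav", now=0.0)
    mon.heartbeat("comm", now=0.0)
    mon.heartbeat("nav", now=0.4)
    stale = mon.failed_partitions(now=0.8)  # "comm" missed its deadline
    ```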

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros, James H.; Grant, Ryan; Levenhagen, Michael J.

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
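
    The portability argument can be illustrated with a small abstract interface: code written against it works unchanged whichever hardware-specific driver sits behind it. The names below are hypothetical, not those of the proposed Power API:

    ```python
    from abc import ABC, abstractmethod

    class PowerInterface(ABC):
        """Hypothetical portable power API: the same calls work whether
        the object wraps a node, a board, or a whole rack."""

        @abstractmethod
        def read_power_watts(self) -> float: ...

        @abstractmethod
        def set_power_cap_watts(self, cap: float) -> None: ...

    class FakeNode(PowerInterface):
        """Stand-in for a hardware-specific driver, for demonstration only."""

        def __init__(self):
            self.cap = float("inf")
            self._draw = 180.0  # pretend instantaneous draw

        def read_power_watts(self) -> float:
            return min(self._draw, self.cap)

        def set_power_cap_watts(self, cap: float) -> None:
            self.cap = cap

    # A scheduler or facility manager talks only to PowerInterface.
    node: PowerInterface = FakeNode()
    node.set_power_cap_watts(150.0)
    capped = node.read_power_watts()
    ```

    Each layer of the software stack, from facility manager down to runtime, could then program against the interface rather than against vendor counters.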

  4. Designing for scale: optimising the health information system architecture for mobile maternal health messaging in South Africa (MomConnect)

    PubMed Central

    Seebregts, Christopher; Dane, Pierre; Parsons, Annie Neo; Fogwill, Thomas; Rogers, Debbie; Bekker, Marcha; Shaw, Vincent; Barron, Peter

    2018-01-01

    MomConnect is a national initiative coordinated by the South African National Department of Health that sends text-based mobile phone messages free of charge to pregnant women who voluntarily register at any public healthcare facility in South Africa. We describe the system design and architecture of the MomConnect technical platform, planned as a nationally scalable and extensible initiative. It uses a health information exchange that can connect any standards-compliant electronic front-end application to any standards-compliant electronic back-end database. The implementation of the MomConnect technical platform, in turn, is a national reference application for electronic interoperability in line with the South African National Health Normative Standards Framework. The use of open content and messaging standards enables the architecture to include any application adhering to the selected standards. Its national implementation at scale demonstrates both the use of this technology and a key objective of global health information systems, which is to achieve implementation scale. The system’s limited clinical information, initially, allowed the architecture to focus on the base standards and profiles for interoperability in a resource-constrained environment with limited connectivity and infrastructural capacity. Maintenance of the system requires mobilisation of national resources. Future work aims to use the standard interfaces to include data from additional applications as well as to extend and interface the framework with other public health information systems in South Africa. The development of this platform has also shown the benefits of interoperability at both an organisational and technical level in South Africa. PMID:29713506

  5. Designing for scale: optimising the health information system architecture for mobile maternal health messaging in South Africa (MomConnect).

    PubMed

    Seebregts, Christopher; Dane, Pierre; Parsons, Annie Neo; Fogwill, Thomas; Rogers, Debbie; Bekker, Marcha; Shaw, Vincent; Barron, Peter

    2018-01-01

    MomConnect is a national initiative coordinated by the South African National Department of Health that sends text-based mobile phone messages free of charge to pregnant women who voluntarily register at any public healthcare facility in South Africa. We describe the system design and architecture of the MomConnect technical platform, planned as a nationally scalable and extensible initiative. It uses a health information exchange that can connect any standards-compliant electronic front-end application to any standards-compliant electronic back-end database. The implementation of the MomConnect technical platform, in turn, is a national reference application for electronic interoperability in line with the South African National Health Normative Standards Framework. The use of open content and messaging standards enables the architecture to include any application adhering to the selected standards. Its national implementation at scale demonstrates both the use of this technology and a key objective of global health information systems, which is to achieve implementation scale. The system's limited clinical information, initially, allowed the architecture to focus on the base standards and profiles for interoperability in a resource-constrained environment with limited connectivity and infrastructural capacity. Maintenance of the system requires mobilisation of national resources. Future work aims to use the standard interfaces to include data from additional applications as well as to extend and interface the framework with other public health information systems in South Africa. The development of this platform has also shown the benefits of interoperability at both an organisational and technical level in South Africa.

  6. TACS Central Control Facility.

    DTIC Science & Technology

    1981-02-12

[Fig. 2-1, MAC hardware: PULSE; RTC (REAL TIME CLOCK); SIGNAL INVERSION; UASC (UNIVERSAL ASYNCHRONOUS SERIAL CONTROLLER) SPECIAL INTERFACE] ... "Universal Asynchronous Serial Controller" (UASC) cards. The cards implement an RS-232 standard interface. All controllers are set to operate at a data... Bridwell and I. Richer, "A Preliminary Design of a TDMA System for FLEETSAT," Technical Note 1975-5, Lincoln Laboratory, M.I.T. (12 March 1975), DDC

  7. Prototype development and implementation of picture archiving and communications systems based on ISO-OSI standard

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Nam, Jiseung

    1992-07-01

Picture Archiving and Communication Systems (PACS) is an integration of digital image formation in a hospital, which encompasses various imaging equipment, image viewing workstations, image databases, and a high speed network. The integration requires a standardization of communication protocols to connect devices from different vendors. The American College of Radiology and the National Electrical Manufacturers Association (ACR-NEMA) standard Version 2.0 provides a point-to-point hardware interface, a set of software commands, and a consistent set of data formats for PACS. But it is inadequate for PACS networking environments, because of its point-to-point nature and its inflexibility to accommodate other services and protocols in the future. Based on previous experience of PACS development at The University of Arizona, a new communication protocol for PACS networks and an implementation approach were proposed to ACR-NEMA Working Group VI. The defined PACS protocol is intended to facilitate the development of PACSs capable of interfacing with other hospital information systems. Also, it is intended to allow the creation of diagnostic information databases which can be interrogated by a variety of distributed devices. A particularly important goal is to support communications in a multivendor environment. The new protocol specifications are defined primarily as a combination of the International Organization for Standardization/Open Systems Interconnection (ISO/OSI) protocols, TCP/IP protocols, and the data format portion of the ACR-NEMA standard. This paper addresses the specification and implementation of the ISO-based protocol in a PACS prototype. The protocol specification, which covers the Presentation, Session, Transport, and Network layers, is summarized briefly. The protocol implementation is discussed based on our implementation efforts in the UNIX operating system environment. In addition, results of a performance comparison between the ISO and TCP/IP implementations are presented to demonstrate the implementation of the defined protocol. The performance testing was done by prototyping PACS on the available platforms: MicroVAX II, DECstation, and Sun workstations.

  8. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

Large image processing systems use multiple frame buffers with differing architectures and vendor supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-dependent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
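
    The idea of a generic frame buffer behind which device-specific drivers hide can be sketched as follows (a Python analogue of the paper's FORTRAN-level interface; the method names are illustrative):

    ```python
    from abc import ABC, abstractmethod

    class VirtualFrameBuffer(ABC):
        """Generic frame-buffer interface; each physical device gets a
        subclass, and applications never see the hardware differences."""

        @abstractmethod
        def write_pixel(self, x: int, y: int, value: int) -> None: ...

        @abstractmethod
        def read_pixel(self, x: int, y: int) -> int: ...

    class InMemoryFrameBuffer(VirtualFrameBuffer):
        """Software 'device' backing the generic interface with a flat list."""

        def __init__(self, width: int, height: int):
            self.width, self.height = width, height
            self._pixels = [0] * (width * height)

        def write_pixel(self, x: int, y: int, value: int) -> None:
            self._pixels[y * self.width + x] = value

        def read_pixel(self, x: int, y: int) -> int:
            return self._pixels[y * self.width + x]

    fb: VirtualFrameBuffer = InMemoryFrameBuffer(512, 512)
    fb.write_pixel(10, 20, 255)
    ```

    Porting an application to a new display then means writing one new subclass rather than rewriting every program that draws pixels.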

  9. New implementation of OGC Web Processing Service in Python programming language. PyWPS-4 and issues we are facing with processing of large raster data using OGC WPS

    NASA Astrophysics Data System (ADS)

    Čepický, Jáchym; Moreira de Sousa, Luís

    2016-06-01

The OGC® Web Processing Service (WPS) Interface Standard provides rules for standardizing inputs and outputs (requests and responses) for geospatial processing services, such as polygon overlay. The standard also defines how a client can request the execution of a process, and how the output from the process is handled. It defines an interface that facilitates the publishing of geospatial processes, client discovery of processes, and binding to those processes in workflows. Data required by a WPS can be delivered across a network or be available at the server. PyWPS was one of the first server-side implementations of OGC WPS. It is written in the Python programming language and tries to connect to all existing tools for geospatial data analysis available on the Python platform. During the last two years, the PyWPS development team has written a new version (called PyWPS-4) completely from scratch. The analysis of large raster datasets poses several technical issues in implementing the WPS standard. The data format has to be defined and validated on the server side, and binary data have to be encoded using some numeric representation. Pulling raster data from remote servers introduces security risks; in addition, running several processes in parallel has to be possible, so that system resources are used efficiently while preserving security. Here we discuss these topics and illustrate some of the solutions adopted within the PyWPS implementation.
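
    A WPS Execute request in its key-value-pair form can be sketched as below; the endpoint and process name are placeholders, and real deployments may also use the XML/POST encoding:

    ```python
    from urllib.parse import urlencode

    def wps_execute_url(base: str, process: str, inputs: dict[str, str]) -> str:
        """Build a WPS 1.0.0 Execute request in key-value-pair form;
        DataInputs are 'name=value' pairs joined by semicolons."""
        datainputs = ";".join(f"{k}={v}" for k, v in inputs.items())
        query = urlencode({
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": process,
            "datainputs": datainputs,
        })
        return f"{base}?{query}"

    # Hypothetical endpoint and process, for illustration only.
    url = wps_execute_url("http://example.org/wps", "buffer",
                          {"geometry": "point(0 0)", "distance": "10"})
    ```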

  10. Conformance testing strategies for DICOM protocols in a heterogenous communications system

    NASA Astrophysics Data System (ADS)

    Meyer, Ralph; Hewett, Andrew J.; Cordonnier, Emmanuel; Piqueras, Joachim; Jensch, Peter F.

    1995-05-01

The goal of the DICOM standard is to define a standard network interface and data model for imaging devices from various vendors. It shall facilitate the development and integration of information systems and picture archiving and communication systems (PACS) in a networked environment. Current activities in Oldenburg, Germany include projects to establish cooperative work applications for radiological purposes, comprising (joint) text, data, signal and image communications, based on narrowband ISDN and ATM communication for regional and pan-European applications. In such a growing and constantly changing environment it is vital to have a solid and implementable plan for bringing standards into operation. A communication standard alone cannot ensure interoperability between different vendor implementations. Even DICOM does not specify implementation-specific requirements, nor does it specify a testing procedure to assess an implementation's conformance to the standard. The conformance statements defined in the DICOM standard only allow a user to determine which optional components are supported by the implementation. The goal of our work is to build a conformance test suite for DICOM. Conformance testing can help simplify and solve problems with multivendor systems. It will check a vendor's implementation against the DICOM standard and report the subset of functionality found. The test suite will be built with respect to the ISO 9646 standard (OSI Conformance Testing Methodology and Framework), which is devoted to the subject of conformance testing of implementations of Open Systems Interconnection (OSI) standards. For our heterogeneous communication environments we must also consider ISO 9000-9004 (quality management and quality assurance) to give users confidence in the evolving applications.
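
    A full ISO 9646-style test suite exercises protocol behaviour between implementations, but even a trivial static check gives the flavour: a DICOM Part 10 file must begin with a 128-byte preamble followed by the 4-byte magic "DICM":

    ```python
    def looks_like_dicom_part10(data: bytes) -> bool:
        """Elementary conformance check on a DICOM Part 10 file:
        128-byte preamble, then the magic bytes 'DICM'."""
        return len(data) >= 132 and data[128:132] == b"DICM"

    # Synthetic examples: a minimal conforming prefix and a broken one.
    good = b"\x00" * 128 + b"DICM" + b"\x02\x00"
    bad = b"\x00" * 132
    ```

    A real test suite would go on to check data-element encoding, transfer syntaxes, and the network association behaviour declared in the vendor's conformance statement.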

  11. Integrating medical devices in the operating room using service-oriented architectures.

    PubMed

    Ibach, Bastian; Benzko, Julia; Schlichting, Stefan; Zimolong, Andreas; Radermacher, Klaus

    2012-08-01

With the increasing documentation requirements and communication capabilities of medical devices in the operating room, the integration and modular networking of these devices have become more and more important. Commercial integrated operating room systems are mainly proprietary developments that usually use proprietary communication standards and interfaces, which reduces the possibility of integrating devices from different vendors. To overcome these limitations, there is a need for an open standardized architecture based on standard protocols and interfaces, enabling the integration of devices from different vendors based on heterogeneous software and hardware components. Starting with an analysis of the requirements for device integration in the operating room and the techniques used for integrating devices in other industrial domains, a new concept for an integration architecture for the operating room, based on the paradigm of a service-oriented architecture, is developed. Standardized communication protocols and interface descriptions are used. As risk management is an important factor in the field of medical engineering, a risk analysis of the developed concept has been carried out and the first prototypes have been implemented.

  12. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. 
Reductions in signal conversion processing steps, major improvement in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology/definitions, and comparing/contrasting CMS architecture modifications using digital video interfaces; this paper provides a technical explanation on how a systems engineering process approach to video interface standardization can result in extendible and affordable cockpit management systems.

  13. Software design and implementation concepts for an interoperable medical communication framework.

    PubMed

    Besting, Andreas; Bürger, Sebastian; Kasparick, Martin; Strathen, Benjamin; Portheine, Frank

    2018-02-23

The new IEEE 11073 service-oriented device connectivity (SDC) standard proposals for networked point-of-care and surgical devices constitute the basis for improved interoperability due to their vendor independence. To accelerate the distribution of the standard, a reference implementation is indispensable. However, the implementation of such a framework has to overcome several non-trivial challenges. First, the high level of complexity of the underlying standard must be reflected in the software design. An efficient implementation has to consider the limited resources of the underlying hardware. Moreover, the framework's purpose of realizing a distributed system demands a high degree of reliability of the framework itself and its internal mechanisms. Additionally, a framework must provide an easy-to-use and fail-safe application programming interface (API). In this work, we address these challenges by discussing suitable software engineering principles and practical coding guidelines. A descriptive model is developed that identifies key strategies. General feasibility is shown by outlining environments in which our implementation has been utilized.

  14. DDS as middleware of the Southern African Large Telescope control system

    NASA Astrophysics Data System (ADS)

    Maartens, Deneys S.; Brink, Janus D.

    2016-07-01

The Southern African Large Telescope (SALT) software control system is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While this method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overhead which avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data.
In the paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
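The data-centric publish-subscribe model described above can be illustrated with a minimal in-process sketch. The `Bus` class, topic names, and sample contents are illustrative inventions, not the RTI or OMG DDS API; a real DDS implementation additionally handles discovery, typed topics, quality-of-service policies, and UDP-multicast transport.

```python
# Minimal in-process sketch of DDS-style data-centric publish-subscribe.
# All names here are hypothetical; real DDS (e.g. RTI Connext) adds
# discovery, QoS, and network transport over UDP multicast.
from collections import defaultdict

class Bus:
    """Routes published samples to every subscriber of a topic."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        # many-to-many: every subscriber of the topic sees the sample
        for cb in self._subs[topic]:
            cb(sample)

bus = Bus()
received = []
bus.subscribe("tcs/state", received.append)       # one subscriber records samples
bus.subscribe("tcs/state", lambda s: None)        # a second, independent subscriber
bus.publish("tcs/state", {"az": 120.5, "el": 55.2})
```

The key property mimicked here is that the publisher never addresses receivers directly; it publishes to a topic, and any number of subscribers receive the data.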

  15. The OGC Sensor Web Enablement framework

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Botts, M.

    2006-12-01

    Sensor observations are at the core of natural sciences. Improvements in data-sharing technologies offer the promise of much greater utilisation of observational data. A key to this is interoperable data standards. The Open Geospatial Consortium's (OGC) Sensor Web Enablement initiative (SWE) is developing open standards for web interfaces for the discovery, exchange and processing of sensor observations, and tasking of sensor systems. The goal is to support the construction of complex sensor applications through real-time composition of service chains from standard components. The framework is based around a suite of standard interfaces, and standard encodings for the messages transferred between services. The SWE interfaces include: Sensor Observation Service (SOS)-parameterized observation requests (by observation time, feature of interest, property, sensor); Sensor Planning Service (SPS)-tasking a sensor-system to undertake future observations; Sensor Alert Service (SAS)-subscription to an alert, usually triggered by a sensor result exceeding some value. The interface design generally follows the pattern established in the OGC Web Map Service (WMS) and Web Feature Service (WFS) interfaces, where the interaction between a client and service follows a standard sequence of requests and responses. The first obtains a general description of the service capabilities, followed by obtaining detail required to formulate a data request, and finally a request for a data instance or stream. These may be implemented in a stateless "REST" idiom, or using conventional "web-services" (SOAP) messaging. In a deployed system, the SWE interfaces are supplemented by Catalogue, data (WFS) and portrayal (WMS) services, as well as authentication and rights management.
The standard SWE data formats are Observations and Measurements (O&M) which encodes observation metadata and results, Sensor Model Language (SensorML) which describes sensor-systems, Transducer Model Language (TML) which covers low-level data streams, and domain-specific GML Application Schemas for definitions of the target feature types. The SWE framework has been demonstrated in several interoperability testbeds. These were based around emergency management, security, contamination and environmental monitoring scenarios.
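The capabilities-then-data request sequence described above can be sketched by composing key-value-pair request URLs. The endpoint, offering, and observed-property names below are hypothetical; a real SOS deployment publishes its valid values in its capabilities document.

```python
# Hedged sketch of the SOS request sequence: GetCapabilities first,
# then a parameterized GetObservation. Endpoint and parameter values
# are invented for illustration.
from urllib.parse import urlencode

endpoint = "http://example.org/sos"   # hypothetical service endpoint

def sos_request(operation, **params):
    query = {"service": "SOS", "request": operation, **params}
    return endpoint + "?" + urlencode(query)

caps_url = sos_request("GetCapabilities")
obs_url = sos_request(
    "GetObservation",
    version="1.0.0",
    offering="TEMPERATURE",                        # hypothetical offering
    observedProperty="urn:ogc:def:property:temp",  # hypothetical property URN
    eventTime="2006-12-01T00:00:00Z/2006-12-02T00:00:00Z",
)
```

The same pattern (capabilities, then a parameterized data request) underlies WMS and WFS, which is why SWE clients can reuse existing OGC client logic.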

  16. The Formal Specification of a Visual Display Device: Design and Implementation.

    DTIC Science & Technology

    1985-06-01

    The use of these data structures with their defined operations gives the programmer a very powerful instruction set. Like the DPU code generator in...which any AM hosted machine could faithfully display. In general, most applications have no need to create images from a data structure representing...formation of standard functional interfaces to these resources. OSs generally do not provide a functional interface to either the processor or the display.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Profile Interface Generator (PIG) is a tool for loosely coupling applications and performance tools. It enables applications to write code that looks like standard C and Fortran function calls, without requiring that applications link to specific implementations of those function calls. Performance tools can register with PIG in order to listen to only the calls that give information they care about. This interface reduces the build and configuration burden on application developers and allows semantic instrumentation to live in production codes without interfering with production runs.
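The loose-coupling idea can be illustrated with a small registry sketch. This is not the actual PIG API (PIG works at the C/Fortran link level): the point shown is only that the application emits plain-looking calls, tools opt in to the events they care about, and unclaimed events cost essentially nothing.

```python
# Illustrative sketch (not the real PIG interface) of instrumentation
# calls that are no-ops unless a tool has registered a listener.
listeners = {}

def register(event, callback):
    """A performance tool subscribes to one named event."""
    listeners.setdefault(event, []).append(callback)

def profile_event(event, *args):
    """The application's 'semantic instrumentation' call site."""
    for cb in listeners.get(event, ()):   # no listener -> effectively a no-op
        cb(*args)

samples = []
register("timestep_done", lambda t: samples.append(t))
profile_event("timestep_done", 0.125)    # a tool is listening: recorded
profile_event("checkpoint_written", 42)  # nobody registered: silently ignored
```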

  18. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  19. Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaru, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.
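The core of a time-triggered system like those targeted above is a static dispatch table computed offline. The toy sketch below builds such a table from task periods; the task names and periods are invented, and Lopht's real scheduling algorithms (which also handle network frames and precedence constraints) are far more sophisticated.

```python
# Toy sketch of offline time-triggered scheduling: every task gets fixed
# release instants inside a repeating major cycle, so runtime behaviour
# is fully deterministic. Task names and periods (ms) are hypothetical.
from math import gcd
from functools import reduce

tasks = {"nav": 10, "guidance": 20, "telemetry": 40}

def lcm(a, b):
    return a * b // gcd(a, b)

# The major cycle (hyperperiod) is the LCM of all task periods.
major_cycle = reduce(lcm, tasks.values())

# Static dispatch table: the instants at which each task is released.
schedule = {name: list(range(0, major_cycle, period))
            for name, period in tasks.items()}
```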

  20. CCSDS SOIS Subnetwork Services: A First Reference Implementation

    NASA Astrophysics Data System (ADS)

    Gunes-Lasnet, S.; Notebaert, O.; Farges, P.-Y.; Fowell, S.

    2008-08-01

    The CCSDS SOIS working groups are developing a range of standards for spacecraft onboard interfaces with the intention of promoting reuse of hardware and software designs across a range of missions while enabling interoperability of onboard systems from diverse sources. The CCSDS SOIS working groups released their red books for both Subnetwork and application support layers in June 2007. In order to allow the verification of these recommended standards and to pave the way for future implementation onboard spacecraft, it is essential for these standards to be prototyped on a representative spacecraft platform, to provide valuable feedback to the SOIS working group. A first reference implementation of both Subnetwork and Application Support SOIS services over SpaceWire and the Mil-Std-1553 bus is thus being realised by SciSys Ltd and Astrium under an ESA contract.

  1. The Johnson Space Center management information systems: User's guide to JSCMIS

    NASA Technical Reports Server (NTRS)

    Bishop, Peter C.; Erickson, Lloyd

    1990-01-01

    The Johnson Space Center Management Information System (JSCMIS) is an interface to computer data bases at the NASA Johnson Space Center which allows an authorized user to browse and retrieve information from a variety of sources with minimum effort. The User's Guide to JSCMIS is the supplement to the JSCMIS Research Report which details the objectives, the architecture, and implementation of the interface. It is a tutorial on how to use the interface and a reference for details about it. The guide is structured like an extended JSCMIS session, describing all of the interface features and how to use them. It also contains an appendix with each of the standard FORMATs currently included in the interface. Users may review them to decide which FORMAT most suits their needs.

  2. 2008 Year in Review

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge Fernando

    2008-01-01

    In February 2008, NASA Stennis Space Center (SSC), NASA Kennedy Space Center (KSC), and the Applied Research Laboratory at Penn State University demonstrated a pilot implementation of an Integrated System Health Management (ISHM) capability at Launch Complex 20 of KSC. The following significant accomplishments are associated with this development: (1) implementation of an architecture for ground operations ISHM, based on networked intelligent elements; (2) use of standards for management of data, information, and knowledge (DIaK), leading to a modular ISHM implementation with interoperable elements communicating according to standards (three standards were used: the IEEE 1451 family of standards for smart sensors and actuators, the Open Systems Architecture for Condition Based Maintenance (OSA-CBM) standard for communicating DIaK describing the condition of elements of a system, and the OPC standard for communicating data); (3) ISHM implementation using interoperable modules addressing health management of subsystems; and (4) use of a physical intelligent sensor node (smart network element, or SNE, capable of providing data and health) along with classic sensors originally installed in the facility. An operational demonstration included detection of anomalies (sensor failures, leaks, etc.), determination of causes and effects, communication among health nodes, and user interfaces.

  3. Building energy simulation in real time through an open standard interface

    DOE PAGES

    Pang, Xiufeng; Nouidui, Thierry S.; Wetter, Michael; ...

    2015-10-20

    Building energy models (BEMs) are typically used for design and code compliance for new buildings and in the renovation of existing buildings to predict energy use. The increasing adoption of BEM as standard practice in the building industry presents an opportunity to extend the use of BEMs into construction, commissioning and operation. In 2009, the authors developed a real-time simulation framework to execute an EnergyPlus model in real time to improve building operation. This paper reports an enhancement of that real-time energy simulation framework. The previous version only works with software tools that implement the custom co-simulation interface of the Building Controls Virtual Test Bed (BCVTB), such as EnergyPlus, Dymola and TRNSYS. The new version uses an open standard interface, the Functional Mockup Interface (FMI), to provide a generic interface to any application that supports the FMI protocol. In addition, the new version utilizes the Simple Measurement and Actuation Profile (sMAP) tool as the data acquisition system to acquire, store and present data. Lastly, this paper introduces the updated architecture of the real-time simulation framework using FMI and presents proof-of-concept demonstration results which validate the new framework.
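The FMI co-simulation pattern the new framework relies on can be sketched with a toy master loop: the master sets inputs, advances the slave one communication step at a time, and reads outputs. The `FirstOrderRoom` class is a stand-in for an FMU, not a real one; an actual deployment would load a packaged FMU through an FMI-capable library.

```python
# Toy sketch of the FMI co-simulation master/slave loop. The model is a
# hypothetical stand-in, not a real FMU; method names mirror the FMI
# set-input / doStep / get-output pattern only loosely.
class FirstOrderRoom:
    """Stand-in 'FMU': room temperature relaxing toward a setpoint."""
    def __init__(self, temp=20.0, tau=600.0):
        self.temp, self.tau, self.setpoint = temp, tau, temp

    def set_input(self, setpoint):
        self.setpoint = setpoint

    def do_step(self, dt):
        # advance the slave's internal state by one communication step
        self.temp += (self.setpoint - self.temp) * dt / self.tau

    def get_output(self):
        return self.temp

fmu = FirstOrderRoom()
fmu.set_input(22.0)
for _ in range(100):        # master loop: 100 steps of 60 s
    fmu.do_step(60.0)
final_temp = fmu.get_output()
```

Because the master only sees the set/step/get interface, the same loop works for any tool that speaks the protocol, which is exactly the genericity the FMI standard provides.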

  4. Improving accessibility and discovery of ESA planetary data through the new planetary science archive

    NASA Astrophysics Data System (ADS)

    Macfarlane, A. J.; Docasal, R.; Rios, C.; Barbarisi, I.; Saiz, J.; Vallejo, F.; Besse, S.; Arviset, C.; Barthelemy, M.; De Marchi, G.; Fraga, D.; Grotheer, E.; Heather, D.; Lim, T.; Martinez, S.; Vallat, C.

    2018-01-01

    The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific data sets through various interfaces at http://psa.esa.int. Driven largely by the evolution of the PDS standards, which all new ESA planetary missions shall follow, and by the need to update the interfaces to the archive, the PSA has undergone a major re-engineering. In order to maximise the scientific exploitation of ESA's planetary data holdings, significant improvements have been made by utilising the latest technologies and implementing widely recognised open standards. To facilitate users in handling and visualising the many products stored in the archive that have associated spatial data, the new PSA supports Geographical Information Systems (GIS) by implementing the standards approved by the Open Geospatial Consortium (OGC). The modernised PSA also attempts to increase interoperability with the international community by implementing recognised planetary science specific protocols such as the PDAP (Planetary Data Access Protocol) and EPN-TAP (EuroPlanet-Table Access Protocol). In this paper we describe some of the methods by which the archive may be accessed and present the challenges that are being faced in consolidating data sets of the older PDS3 version of the standards with the new PDS4 deliveries into a single data model mapping, to ensure transparent access to the data for users and services whilst maintaining high performance.
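An EPN-TAP query follows the generic TAP convention of posting ADQL to a service's synchronous endpoint. The sketch below only composes the request URL; the endpoint is hypothetical, and the `epn_core` table and column names follow the EPN-TAP convention but should be checked against the service's own metadata.

```python
# Hedged sketch of a TAP /sync request URL carrying an ADQL query,
# in the style used by EPN-TAP services. The endpoint is hypothetical.
from urllib.parse import urlencode

tap_endpoint = "https://example.esa.int/tap/sync"   # hypothetical endpoint

adql = ("SELECT granule_uid, time_min, time_max "
        "FROM epn_core WHERE target_name = 'Mars'")

query_url = tap_endpoint + "?" + urlencode({
    "REQUEST": "doQuery",
    "LANG": "ADQL",
    "FORMAT": "votable",
    "QUERY": adql,
})
```

Because the protocol is standardised, the same client code can query any EPN-TAP service by swapping the endpoint, which is the interoperability benefit the record describes.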

  5. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov

    2010-08-01

    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]; Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  6. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if not designed for it. One example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise, and it may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. A general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model, and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development requires only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced.
In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These models vary in complexity and software structure. We also describe how these complexities were overcome using this approach, and present results of model benchmarks within LIS.
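The general model interface can be illustrated with a Python analogue (the real wrappers are FORTRAN 90 subroutines): the framework hands every wrapped model the same forcings/parameters/states bundle and receives updated states back, so the coupling logic is identical for all models. All names and the toy "bucket" model are invented for illustration.

```python
# Illustrative Python analogue of a uniform model-wrapper interface:
# the framework only ever calls run(forcings, params) and receives
# updated state variables back. Names are hypothetical.
class ModelWrapper:
    def __init__(self, model_step, states):
        self.model_step = model_step   # the wrapped model's own physics
        self.states = states

    def run(self, forcings, params):
        self.states = self.model_step(forcings, params, self.states)
        return self.states             # handed back to the framework

# A trivial 'model': soil moisture gains precipitation, loses drainage.
def bucket_step(forcings, params, states):
    sm = (states["soil_moisture"] + forcings["precip"]
          - params["drain"] * states["soil_moisture"])
    return {"soil_moisture": sm}

wrapper = ModelWrapper(bucket_step, {"soil_moisture": 0.3})
out = wrapper.run({"precip": 0.01}, {"drain": 0.1})
```

Because every model is driven through the same `run` call, code that transfers data between the framework and the wrapper can be generated mechanically, which is the premise of the Excel/VBA toolkit described above.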

  7. Performance Comparison of a Matrix Solver on a Heterogeneous Network Using Two Implementations of MPI: MPICH and LAM

    NASA Technical Reports Server (NTRS)

    Phillips, Jennifer K.

    1995-01-01

    Two of the current and most popular implementations of the Message-Passing Standard, the Message Passing Interface (MPI), were contrasted: MPICH by Argonne National Laboratory, and LAM by the Ohio Supercomputer Center at Ohio State University. A parallel skyline matrix solver was adapted to run in a heterogeneous environment using MPI. The Message-Passing Interface Forum was held in May 1994, which led to a specification of library functions that implement the message-passing model of parallel communication. LAM, which creates its own environment, is more robust in a highly heterogeneous network. MPICH uses the environment native to the machine architecture. While neither of these free-ware implementations provides the performance of native message-passing or vendors' implementations, MPICH begins to approach that performance on the SP-2. The machines used in this study were: IBM RS6000, 3 Sun4s, SGI, and the IBM SP-2. Each machine is unique and a few machines required specific modifications during the installation. When installed correctly, both implementations worked well with only minor problems.

  8. Generic Software Architecture for Prognostics (GSAP) User Guide

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher Allen; Daigle, Matthew John; Watkins, Jason; Sankararaman, Shankar; Goebel, Kai

    2016-01-01

    The Generic Software Architecture for Prognostics (GSAP) is a framework for applying prognostics. It makes applying prognostics easier by implementing many of the common elements across prognostic applications. The standard interface enables reuse of prognostic algorithms and models across systems using the GSAP framework.

  9. Virtual Ships: NATO Standards Development and Implementation

    DTIC Science & Technology

    2009-10-01

    interfaces. Such simulations were unable to be re-used for other applications because they were too application specific and too highly customised ...provides water flow field data (including water flow induced forces and moments and added masses ) to other federates that request it.  Ship motion

  10. A standard satellite control reference model

    NASA Technical Reports Server (NTRS)

    Golden, Constance

    1994-01-01

    This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.

  11. Device interoperability and authentication for telemedical appliance based on the ISO/IEEE 11073 Personal Health Device (PHD) Standards.

    PubMed

    Caranguian, Luther Paul R; Pancho-Festin, Susan; Sison, Luis G

    2012-01-01

    In this study, we focused on the interoperability and authentication of medical devices in the context of telemedical systems. A recent standard called the ISO/IEEE 11073 Personal Health Device (X73-PHD) Standards addresses the device interoperability problem by defining common protocols for the agent (medical device) and manager (appliance) interface. The X73-PHD standard, however, has not addressed security and authentication of medical devices, which is important in establishing the integrity of a telemedical system. We have designed and implemented a security policy within the X73-PHD standards. The policy enables device authentication using asymmetric-key cryptography with the RSA algorithm as the digital signature scheme. We used two approaches for performing the digital signatures: direct software implementation and use of embedded security modules (ESM). The two approaches were evaluated and compared in terms of execution time and memory requirement. For standard 2048-bit RSA, the ESM calculates digital signatures in only 12% of the time taken by the direct implementation. Moreover, analysis shows that the ESM offers additional security advantages, such as secure storage of keys, compared to the direct implementation. Interoperability with other systems was verified by testing the system with LNI Healthlink, manager software that implements the X73-PHD standard. Lastly, a security analysis was done; the system's response to common attacks on authentication systems was analyzed and several measures were implemented to protect the system against them.
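The sign-with-private-key, verify-with-public-key scheme used for device authentication can be illustrated with textbook RSA. The tiny key below is for demonstration only: the system described uses standard 2048-bit RSA, and real signatures use a proper padding scheme rather than a bare digest.

```python
# Toy illustration of RSA digital signatures (demonstration-sized key;
# NOT secure). The agent signs a message digest with its private key;
# the manager verifies it with the matching public key.
import hashlib

p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def digest(msg):
    # reduce a SHA-256 digest into the tiny modulus (demo only)
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg):
    return pow(digest(msg), d, n)         # agent: sign with private key

def verify(msg, sig):
    return pow(sig, e, n) == digest(msg)  # manager: check with public key

msg = b"agent measurement: 120/80 mmHg"   # hypothetical device payload
sig = sign(msg)
```

The ESM-versus-software comparison in the record is about where `sign` executes: a dedicated security module can both accelerate the modular exponentiation and keep `d` out of host memory.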

  12. FPGA implementation of a ZigBee wireless network control interface to transmit biomedical signals

    NASA Astrophysics Data System (ADS)

    Gómez López, M. A.; Goy, C. B.; Bolognini, P. C.; Herrera, M. C.

    2011-12-01

    In recent years, cardiac hemodynamic monitors have incorporated new technologies based on wireless sensor networks which can implement different types of communication protocols. More precisely, a recently developed digital conductance catheter system adds a wireless ZigBee module (IEEE 802.15.4 standard) to transmit cardiac signals (ECG, intraventricular pressure and volume), which would allow physicians to evaluate the patient's cardiac status in a noninvasive way. The aim of this paper is to describe a control interface, implemented in an FPGA device, to manage a ZigBee wireless network. ZigBee technology is used due to its excellent performance, including simplicity, low power consumption, short-range transmission and low cost. The FPGA internal memory stores 8-bit signals with which the control interface prepares the information packets. These data are sent to the ZigBee END DEVICE module, which transmits them wirelessly to the external COORDINATOR module. Using a USB port, the COORDINATOR sends the signals to a personal computer for display. Each functional block of the control interface was assessed by means of timing diagrams. Three biological signals, organized in packets and converted to the RS232 serial protocol, were successfully transmitted and displayed on a PC screen. For this purpose, custom graphical software was designed using LabView.

  13. A new reference implementation of the PSICQUIC web service.

    PubMed

    del-Toro, Noemi; Dumousseau, Marine; Orchard, Sandra; Jimenez, Rafael C; Galeota, Eugenia; Launay, Guillaume; Goll, Johannes; Breuer, Karin; Ono, Keiichiro; Salwinski, Lukasz; Hermjakob, Henning

    2013-07-01

    The Proteomics Standard Initiative Common QUery InterfaCe (PSICQUIC) specification was created by the Human Proteome Organization Proteomics Standards Initiative (HUPO-PSI) to enable computational access to molecular-interaction data resources by means of a standard Web Service and query language. Currently providing >150 million binary interaction evidences from 28 servers globally, the PSICQUIC interface allows the concurrent search of multiple molecular-interaction information resources using a single query. Here, we present an extension of the PSICQUIC specification (version 1.3), which has been released to be compliant with the enhanced standards in molecular interactions. The new release also includes a new reference implementation of the PSICQUIC server, available to data providers. It offers augmented web service capabilities and improves the user experience. PSICQUIC has been running for almost 5 years, with a user base growing from only 4 data providers to 28 (April 2013), allowing access to 151 310 109 binary interactions. The power of this web service is shown in the PSICQUIC View web application, an example of how to simultaneously query, browse and download results from the different PSICQUIC servers. This application is free and open to all users with no login requirement (http://www.ebi.ac.uk/Tools/webservices/psicquic/view/main.xhtml).
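The "single query, many servers" idea rests on every PSICQUIC server exposing the same REST resource for a MIQL query string. The sketch below composes such request URLs; the base URL is hypothetical, and the exact resource path and paging parameter names should be checked against the PSICQUIC specification before use.

```python
# Hedged sketch of composing a PSICQUIC-style REST query. The base URL
# is a hypothetical server; the '/query/{MIQL}' pattern and paging
# parameters follow the PSICQUIC convention but are assumptions here.
from urllib.parse import quote

base = "http://example.org/psicquic/webservices/current/search"  # hypothetical

def psicquic_query(miql, first=0, max_results=100):
    return "{}/query/{}?firstResult={}&maxResults={}".format(
        base, quote(miql), first, max_results)

# The same MIQL string can be sent unchanged to every registered server.
url = psicquic_query("identifier:P38398 AND species:human")
```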

  14. An adaptive software defined radio design based on a standard space telecommunication radio system API

    NASA Astrophysics Data System (ADS)

    Xiong, Wenhao; Tian, Xin; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2017-05-01

    Software defined radio (SDR) has become a popular tool for the implementation and testing of communications performance. The advantages of the SDR approach include: a re-configurable design, adaptive response to changing conditions, efficient development, and highly versatile implementation. In order to realise the benefits of SDR, the space telecommunication radio system (STRS) was proposed by the NASA Glenn Research Center (GRC) along with a standard application program interface (API) structure. Each component of the system uses a well-defined API to communicate with other components. The benefit of a standard API is to relax the platform limitations of each component, allowing additional options. For example, the waveform generating process can run on a field programmable gate array (FPGA), personal computer (PC), or an embedded system. As long as the API requirements are met, the selected waveform generator will work with the complete system. In this paper, we demonstrate the design and development of an adaptive SDR following the STRS and standard API protocol. We introduce, step by step, the SDR testbed system including the controlling graphical user interface (GUI), database, GNU Radio hardware control, and universal software radio peripheral (USRP) transceiving front end. In addition, a performance evaluation is shown on the effectiveness of the SDR approach for space telecommunication.

  15. A standards-based clinical information system for HIV/AIDS.

    PubMed

    Stitt, F W

    1995-01-01

    To create a clinical data repository to interface the Veterans Administration (VA) Decentralized Hospital Computer Program (DHCP) and a departmental clinical information system for the management of HIV patients. This system supports record-keeping, decision-making, reporting, and analysis. The database development was designed to overcome two impediments to successful implementations of clinical databases: (i) lack of a standard reference data model; and (ii) lack of a universal standard for medical concept representation. Health Level Seven (HL7) is a standard protocol that specifies the implementation of interfaces between two computer applications (sender and receiver) from different vendors or sources of electronic data exchange in the health care environment. This eliminates or substantially reduces the custom interface programming and program maintenance that would otherwise be required. HL7 defines the data to be exchanged, the timing of the interchange, and the communication of errors to the application. The formats are generic in nature and must be configured to meet the needs of the two applications involved. The standard conceptually operates at the seventh level of the ISO model for Open Systems Interconnection (OSI). It simply defines the data elements that are exchanged as abstract messages, and does not prescribe the exact bit stream of the messages that flow over the network. Lower level network software developed according to the OSI model may be used to encode and decode the actual bit stream. The OSI protocols are not universally implemented and, therefore, a set of encoding rules for defining the exact representation of a message must be specified. The VA has created an HL7 module to assist DHCP applications in exchanging health care information with other applications using the HL7 protocol.
The DHCP HL7 module consists of a set of utility routines and files that provide a generic interface to the HL7 protocol for all DHCP applications. The VA's DHCP core modules are in standard use at 169 hospitals, and the role of the VA system in health care delivery has been discussed elsewhere. This development was performed at the Miami VA Medical Center Special Immunology Unit, where a database was created for an HIV patient registry in 1987. Over 2,300 patients have been entered into a database that supports a problem-oriented summary of the patient's clinical record. The interface to the VA DHCP was designed and implemented to capture information from the patient treatment file, pharmacy, laboratory, radiology, and other modules. We obtained a suite of programs for implementing the HL7 encoding rules from Columbia-Presbyterian Medical Center in New York, written in ANSI C. This toolkit isolates our application programs from the details of the HL7 encoding rules, and allows them to deal with abstract messages at the programming level. While HL7 has become a standard for healthcare message exchange, SQL (Structured Query Language) is the standard for database definition, data manipulation, and query. The target database (Stitt F.W. The Problem-Oriented Medical Synopsis: a patient-centered clinical information system. Proc 17 SCAMC. 1993:88-93) provides clinical workstation functionality. Medical concepts are encoded using a preferred terminology derived from over 15 sources that include the Unified Medical Language System and SNOMed International (Stitt F.W. The Problem-Oriented Medical Synopsis: coding, indexing, and classification sub-model. Proc 18 SCAMC, 1994: in press). The databases were modeled using the Information Engineering CASE tools, and were written using relational database utilities, including embedded SQL in C (ESQL/C). We linked ESQL/C programs to the HL7 toolkit to allow data to be inserted, deleted, or updated, under transaction control.
A graphical format will be used to display the entity-relationship model.
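The pipe-delimited structure of the HL7 v2 messages exchanged by such an interface can be sketched as follows. Every field value below is invented for illustration; a real interface follows the message profiles agreed between the two applications, and the MSH segment's encoding characters are fixed by the standard.

```python
# Illustrative sketch of a pipe-delimited HL7 v2 message (ORU result
# message). All field values are hypothetical; segments are separated
# by carriage returns and fields by '|', per the HL7 v2 convention.
def hl7_message(segments):
    return "\r".join("|".join(fields) for fields in segments)

msg = hl7_message([
    # MSH: message header (encoding chars, sender, receiver, type, version)
    ["MSH", "^~\\&", "DHCP", "VAMC-MIAMI", "SYNOPSIS", "IMMUNOLOGY",
     "19950101120000", "", "ORU^R01", "MSG00001", "P", "2.1"],
    # PID: patient identification (hypothetical identifiers)
    ["PID", "1", "", "123456^^^VA", "", "DOE^JOHN"],
    # OBX: one numeric observation (hypothetical lab result)
    ["OBX", "1", "NM", "CD4^CD4 COUNT", "", "350", "cells/uL"],
])
```

Encoding toolkits like the one obtained from Columbia-Presbyterian exist precisely so that application code manipulates abstract messages and never assembles such strings by hand.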

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayan Ghosh, Jeff Hammond

    OpenSHMEM is a community effort to unify and standardize the SHMEM programming model. MPI (Message Passing Interface) is a well-known community standard for parallel programming using distributed memory. The most recent release of MPI, version 3.0, was designed in part to support programming models like SHMEM. OSHMPI is an implementation of the OpenSHMEM standard using MPI-3 for the Linux operating system. It is the first implementation of SHMEM over MPI one-sided communication and has the potential to be widely adopted due to the portability and wide availability of Linux and MPI-3. OSHMPI has been tested on a variety of systems and implementations of MPI-3, including InfiniBand clusters using MVAPICH2 and SGI shared-memory supercomputers using MPICH. Current support is limited to Linux but may be extended to Apple OSX if there is sufficient interest. The code is open source via https://github.com/jeffhammond/oshmpi

  17. IPeak: An open source tool to combine results from multiple MS/MS search engines.

    PubMed

    Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun

    2015-09-01

    Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, which is implemented in JAVA and can work on all three major operating system platforms: Windows, Linux/Unix and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output, and also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Inflated speedups in parallel simulations via malloc()

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
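The caching policy described above can be sketched as follows. The paper's interface is a C wrapper around malloc(); this Python model only illustrates the policy of keeping freed blocks on per-size free lists so that repeated requests of the same size bypass the real allocator.

```python
# Sketch of a size-based block cache, the policy behind the malloc() wrapper
# described in the abstract. Freed blocks are kept on per-size free lists and
# reused on the next request of that size. (Illustrative model, not C code.)

class SizeCachingAllocator:
    def __init__(self):
        self.free_lists = {}   # size -> list of cached blocks
        self.fresh_allocs = 0  # requests that fell through to the real allocator

    def alloc(self, size):
        cache = self.free_lists.get(size)
        if cache:
            return cache.pop()          # reuse a cached block of this exact size
        self.fresh_allocs += 1
        return bytearray(size)          # stands in for a real malloc() call

    def free(self, block):
        self.free_lists.setdefault(len(block), []).append(block)

a = SizeCachingAllocator()
b1 = a.alloc(64)
a.free(b1)
b2 = a.alloc(64)                        # served from the cache, not the allocator
print(b2 is b1, a.fresh_allocs)         # True 1
```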

  19. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol, and stored locally on magnetic disk. The use of high resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.

  20. MATE standardization

    NASA Astrophysics Data System (ADS)

    Farmer, R. E.

    1982-11-01

    The MATE (Modular Automatic Test Equipment) program was developed to combat the proliferation of unique, expensive ATE within the Air Force. MATE incorporates a standard management approach and a standard architecture designed to implement a cradle-to-grave approach to the acquisition of ATE and to significantly reduce the life cycle cost of weapons systems support. These standards are detailed in the MATE Guides. The MATE Guides assist both the Air Force and Industry in implementing the MATE concept, and provide the necessary tools and guidance required for successful acquisition of ATE. The guides also provide the necessary specifications for industry to build MATE-qualifiable equipment. The MATE architecture provides standards for all key interfaces of an ATE system. The MATE approach to the acquisition and management of ATE has been jointly endorsed by the commanders of Air Force Systems Command and Air Force Logistics Command as the way of doing business in the future.

  1. IRDS prototyping with applications to the representation of EA/RA models

    NASA Technical Reports Server (NTRS)

    Lekkos, Anthony A.; Greenwood, Bruce

    1988-01-01

    The requirements and system overview for the Information Resources Dictionary System (IRDS) are described. A formal design specification for a scaled down IRDS implementation compatible with the proposed FIPS IRDS standard is contained. The major design objectives for this IRDS will include a menu driven user interface, implementation of basic IRDS operations, and PC compatibility. The IRDS was implemented using the Smalltalk/V object-oriented programming system and an AT&T 6300 personal computer running under MS-DOS 3.1. The difficulties encountered in using Smalltalk are discussed.

  2. A portable MPI-based parallel vector template library

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.

    1995-01-01

    This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.

  3. A Portable MPI-Based Parallel Vector Template Library

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.

    1995-01-01

    This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.

  4. E-Standards For Mass Properties Engineering

    NASA Technical Reports Server (NTRS)

    Cerro, Jeffrey A.

    2008-01-01

    A proposal is put forth to promote the concept of a Society of Allied Weight Engineers developed voluntary consensus standard for mass properties engineering. This standard would be an e-standard, and would encompass data, data manipulation, and reporting functionality. The standard would be implemented via an open-source SAWE distribution site with full SAWE member body access. Engineering societies and global standards initiatives are progressing toward modern engineering standards, which become functioning deliverable data sets. These data sets, if properly standardized, will integrate easily between supplier and customer, enabling technically precise mass properties data exchange. The concepts of object-oriented programming support all of these requirements, and the use of a Java™-based open-source development initiative is proposed. Results are reported for activity sponsored by the NASA Langley Research Center Innovation Institute to scope out requirements for developing a mass properties engineering e-standard. An initial software distribution is proposed. Upon completion, an open-source application programming interface will be available to SAWE members for the development of more specific programming requirements that are tailored to company and project requirements. A fully functioning application programming interface will permit code extension via company proprietary techniques, as well as through continued open-source initiatives.
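A hedged sketch of the kind of data object such an e-standard might define: a mass properties record carrying mass and center of gravity, plus a combine operation producing the composite properties. The class and field names are hypothetical, not taken from any SAWE specification.

```python
# Illustrative (hypothetical) mass properties record and the standard
# composite calculation: total mass and mass-weighted center of gravity.

from dataclasses import dataclass

@dataclass
class MassProperties:
    mass: float   # e.g. kg
    cg: tuple     # center of gravity (x, y, z)

def combine(items):
    """Composite mass and mass-weighted center of gravity of several parts."""
    total = sum(i.mass for i in items)
    cg = tuple(sum(i.mass * i.cg[k] for i in items) / total for k in range(3))
    return MassProperties(total, cg)

parts = [MassProperties(10.0, (0.0, 0.0, 0.0)),
         MassProperties(30.0, (4.0, 0.0, 0.0))]
print(combine(parts))  # MassProperties(mass=40.0, cg=(3.0, 0.0, 0.0))
```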

  5. STRS Compliant FPGA Waveform Development

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer; Downey, Joseph; Mortensen, Dale

    2008-01-01

    The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros III, James H.; DeBonis, David; Grant, Ryan

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower-level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  7. Use of force feedback to enhance graphical user interfaces

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.; Brave, Scott

    1996-04-01

    This project focuses on the use of force feedback sensations to enhance user interaction with standard graphical user interface paradigms. While typical joystick and mouse devices are input-only, force feedback controllers allow physical sensations to be reflected to a user. Tasks that require users to position a cursor on a given target can be enhanced by applying physical forces to the user that aid in targeting. For example, an attractive force field implemented at the location of a graphical icon can greatly facilitate target acquisition and selection of the icon. It has been shown that force feedback can enhance a user's ability to perform basic functions within graphical user interfaces.
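The attractive force field around an icon can be modeled very simply: within a capture radius, apply a spring-like force pulling the cursor toward the icon center. The gains and radii below are made-up values for illustration, not figures from the study.

```python
# Illustrative model of an attractive force field at an icon location:
# inside a capture radius, a spring-like pull toward the icon center;
# outside it, no force. Radius and stiffness values are invented.

import math

def attraction_force(cursor, icon, radius=50.0, stiffness=0.2):
    dx, dy = icon[0] - cursor[0], icon[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)                    # outside the field: no force
    return (stiffness * dx, stiffness * dy)  # spring pull toward the icon

print(attraction_force((90.0, 100.0), (100.0, 100.0)))  # (2.0, 0.0)
print(attraction_force((0.0, 0.0), (100.0, 100.0)))     # (0.0, 0.0), out of range
```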

  8. Implementation of a tactical voice/data network over FDDI. [Fiber Distributed Data Interface

    NASA Technical Reports Server (NTRS)

    Bergman, L. A.; Halloran, F.; Martinez, J.

    1988-01-01

    An asynchronous high-speed fiber-optic local-area network is described that simultaneously supports packet data traffic with synchronous T1 voice traffic over a standard asynchronous FDDI (fiber distributed data interface) token-ring channel. A voice interface module was developed that parses, buffers, and resynchronizes the voice data to the packet network. The technique is general, however, and can be applied to any deterministic class of networks, including multitier backbones. In addition, the higher layer packet data protocols may operate independently of those for the voice, thereby permitting great flexibility in reconfiguring the network. Voice call setup and switching functions are performed external to the network with PABX equipment.

  9. Design criteria for a PC-based common user interface to remote information systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1984-01-01

    A set of design criteria are presented which will allow the implementation of an interface to multiple remote information systems on a microcomputer. The focus of the design description is on providing the user with the functionality required to retrieve, store and manipulate data residing in remote information systems through the utilization of a standardized interface system. The intent is to spare the user from learning the details of retrieval from specific systems while retaining the full capabilities of each system. The system design includes multi-level capabilities to enhance usability by a wide range of users and utilizes microcomputer graphics capabilities where applicable. A data collection subsystem for evaluation purposes is also described.
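The idea of a single standardized interface front-ending dissimilar remote systems can be sketched as an adapter layer. The system names and native command syntaxes below are invented for illustration; the point is that the user issues one uniform request while each adapter handles its system's dialect.

```python
# Adapter-layer sketch of a common user interface to multiple remote
# information systems. Both adapters and their native syntaxes are
# hypothetical examples, not real retrieval-system commands.

class SystemAdapter:
    def search(self, term):
        raise NotImplementedError

class CommandStyleAdapter(SystemAdapter):
    def search(self, term):
        return f"SS {term}/TI"          # made-up terse native command language

class QueryStyleAdapter(SystemAdapter):
    def search(self, term):
        return f"SELECT * FROM docs WHERE title LIKE '%{term}%'"

def uniform_search(adapter, term):
    # The user always issues the same request; the adapter translates it.
    return adapter.search(term)

print(uniform_search(CommandStyleAdapter(), "robotics"))  # SS robotics/TI
print(uniform_search(QueryStyleAdapter(), "robotics"))
```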

  10. The Open Perimetry Interface: an enabling tool for clinical visual psychophysics.

    PubMed

    Turpin, Andrew; Artes, Paul H; McKendrick, Allison M

    2012-01-01

    Perimeters are commercially available instruments for measuring various attributes of the visual field in a clinical setting. They have several advantages over traditional lab-based systems for conducting vision experiments, including built-in gaze tracking and calibration, polished appearance, and attributes to increase participant comfort. Prior to this work, there was no standard to control such instruments, making it difficult and time consuming to use them for novel psychophysical experiments. This paper introduces the Open Perimetry Interface (OPI), a standard set of functions that can be used to control perimeters. Currently the standard is partially implemented in the open-source programming language R on two commercially available instruments: the Octopus 900 (a projection-based bowl perimeter produced by Haag-Streit, Switzerland) and the Heidelberg Edge Perimeter (a CRT-based system produced by Heidelberg Engineering, Germany), allowing these instruments to be used as a platform for psychophysical experimentation.

  11. Current Status of VO Compliant Data Service in Japanese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Shirasaki, Y.; Komiya, Y.; Ohishi, M.; Mizumoto, Y.; Ishihara, Y.; Tsutsumi, J.; Hiyama, T.; Nakamoto, H.; Sakamoto, M.

    2012-09-01

    In recent years, standards for building Virtual Observatory (VO) data services have been established through the efforts of the International Virtual Observatory Alliance (IVOA). We applied these newly established standards (SSAP, TAP) to our VO service toolkit, which was developed to implement the earlier VO standards SIAP and the (deprecated) SkyNode. The toolkit can be easily installed and provides a GUI for constructing and managing a VO service. In this paper, we describe the architecture of our toolkit and how it is used to start hosting a VO service.

  12. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone

    PubMed Central

    Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G.

    2017-01-01

    Objective Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms. PMID:29349070

  13. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone.

    PubMed

    Blum, Sarah; Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G

    2017-01-01

    Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.

  14. Digital Intermediate Frequency Receiver Module For Use In Airborne Sar Applications

    DOEpatents

    Tise, Bertice L.; Dubbert, Dale F.

    2005-03-08

    A digital IF receiver (DRX) module directly compatible with advanced radar systems such as synthetic aperture radar (SAR) systems. The DRX can combine a 1 G-sample/sec 8-bit ADC with high-speed digital signal processing, such as high gate-count FPGA technology or ASICs, to realize a wideband IF receiver. DSP operations implemented in the DRX can include quadrature demodulation and multi-rate, variable-bandwidth IF filtering. Pulse-to-pulse (Doppler domain) filtering can also be implemented in the form of a presummer (accumulator) and an azimuth prefilter. An out-of-band noise source can be employed to provide a dither signal to the ADC, which is later removed by digital signal processing. Both the range and Doppler domain filtering operations can be implemented using a unique pane architecture which allows on-the-fly selection of the filter decimation factor and, hence, the filter bandwidth. The DRX module can include a standard VME-64 interface for control, status, and programming. An interface can provide phase history data to the real-time image formation processors. A third front-panel data port (FPDP) interface can send wide-bandwidth raw phase histories to a real-time phase history recorder for ground processing.
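The two DSP steps named in the abstract, quadrature demodulation followed by filtering and decimation, can be sketched numerically. The sample rates and filter below are illustrative stand-ins, not the patent's design (a real receiver would use proper multi-rate filters rather than a boxcar average).

```python
# Sketch of quadrature demodulation (mix the sampled IF with a complex local
# oscillator to shift the band to baseband) followed by a crude low-pass and
# decimation. Parameters are illustrative only.

import cmath, math

def quadrature_demod(samples, fs, f_if):
    # Multiply by exp(-j*2*pi*f_if*n/fs): shifts the IF band down to DC.
    return [s * cmath.exp(-2j * math.pi * f_if * n / fs)
            for n, s in enumerate(samples)]

def boxcar_decimate(x, factor):
    # Moving average over each block plus decimation; stands in for a real
    # multi-rate, variable-bandwidth filter.
    return [sum(x[i:i + factor]) / factor
            for i in range(0, len(x) - factor + 1, factor)]

fs, f_if = 1000.0, 100.0
tone = [math.cos(2 * math.pi * f_if * n / fs) for n in range(1000)]
baseband = boxcar_decimate(quadrature_demod(tone, fs, f_if), 10)
# A real cosine at f_if lands at DC with complex amplitude 0.5 after demodulation.
print(abs(baseband[0]))  # ~0.5
```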

  15. Health care transition in Germany – standardization of procedures and improvement actions

    PubMed Central

    Pieper, Claudia; Kolankowska, Izabela

    2011-01-01

    Previous studies have assessed an increase in the number of people in need and emphasized the advantages of structured discharge management and health care transition. Therefore, our study evaluated the status quo of transition in a major German city after standardization of procedures and implementation of standard forms. Satisfaction with handling of standard forms and improvement of procedures was evaluated. Additionally, patients who had recently been hospitalized were asked about the hospital discharge process. The results show that the recent efforts of standardization helped to improve interface management for health care workers and patients and showed further improvement options. PMID:21811388

  16. A non-invasive implementation of a mixed domain decomposition method for frictional contact problems

    NASA Astrophysics Data System (ADS)

    Oumaziz, Paul; Gosselet, Pierre; Boucard, Pierre-Alain; Guinard, Stéphane

    2017-11-01

    A non-invasive implementation of the Latin domain decomposition method for frictional contact problems is described. The formulation requires dealing with mixed (Robin) conditions on the faces of the subdomains, which is not a classical feature of commercial software. Therefore, we propose a new implementation of the linear stage of the Latin method with a non-local search direction built as the stiffness of a layer of elements on the interfaces. This choice enables us to implement the method within the open source software Code_Aster and to derive 2D and 3D examples with performance similar to that of the standard Latin method.

  17. Interfacing the PACS and the HIS: results of a 5-year implementation.

    PubMed

    Kinsey, T V; Horton, M C; Lewis, T E

    2000-01-01

    An interface was created between the Department of Defense's hospital information system (HIS) and its two picture archiving and communication system (PACS)-based radiology information systems (RISs). The HIS is called the Composite Healthcare Computer System (CHCS), and the RISs are called the Medical Diagnostic Imaging System (MDIS) and the Digital Imaging Network (DIN)-PACS. Extensive mapping between dissimilar data protocols was required to translate data from the HIS into both RISs. The CHCS uses a Health Level 7 (HL7) protocol, whereas the MDIS uses the American College of Radiology-National Electrical Manufacturers Association 2.0 protocol and the DIN-PACS uses the Digital Imaging and Communications in Medicine (DICOM) 3.0 protocol. An interface engine was required to change some data formats, as well as to address some nonstandard HL7 data being output from the CHCS. In addition, there are differences in terminology between fields and segments in all three protocols. This interface is in use at 20 military facilities throughout the world. The interface reduces the amount of manual entry into more than one automated system to the smallest level possible. Data mapping during installation saved time, improved productivity, and increased user acceptance during PACS implementation. It also resulted in more standardized database entries in both the HIS (CHCS) and the RIS (PACS).
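The kind of field-level mapping such an interface engine performs can be illustrated with a small sketch: pull patient fields out of a pipe-delimited HL7 v2 segment and rename them to target-system keys. The segment content and target key names below are made-up examples, not the CHCS or MDIS data dictionaries.

```python
# Illustrative interface-engine mapping: parse an HL7 v2 PID segment
# (pipe-delimited fields, '^'-delimited components) into target-system keys.
# Target key names are hypothetical examples.

def map_pid_segment(segment):
    fields = segment.split("|")
    assert fields[0] == "PID"
    # In HL7 v2, PID field 3 carries a patient identifier and field 5 the
    # patient name as family^given components.
    family, given = fields[5].split("^")[:2]
    return {
        "PatientID": fields[3],
        "PatientName": f"{family}, {given}",
    }

pid = "PID|1||123456||DOE^JOHN||19600101|M"
print(map_pid_segment(pid))  # {'PatientID': '123456', 'PatientName': 'DOE, JOHN'}
```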

  18. Web service activities at the IRIS DMC to support federated and multidisciplinary access

    NASA Astrophysics Data System (ADS)

    Trabant, Chad; Ahern, Timothy K.

    2013-04-01

    At the IRIS Data Management Center (DMC) we have developed a suite of web service interfaces to access our large archive of, primarily seismological, time series data and related metadata. The goals of these web services include providing: a) next-generation and easily used access interfaces for our current users, b) access to data holdings in a form usable for non-seismologists, c) programmatic access to facilitate integration into data processing workflows and d) a foundation for participation in federated data discovery and access systems. To support our current users, our services provide access to the raw time series data and metadata or conversions of the raw data to commonly used formats. Our services also support simple, on-the-fly signal processing options that are common first steps in many workflows. Additionally, high-level data products derived from raw data are available via service interfaces. To support data access by researchers unfamiliar with seismic data we offer conversion of the data to broadly usable formats (e.g. ASCII text) and data processing to convert the data to Earth units. By their very nature, web services are programmatic interfaces. Combined with ubiquitous support for web technologies in programming & scripting languages and support in many computing environments, web services are very well suited for integrating data access into data processing workflows. As programmatic interfaces that can return data in both discipline-specific and broadly usable formats, our services are also well suited for participation in federated and brokered systems either specific to seismology or multidisciplinary. Working within the International Federation of Digital Seismograph Networks, the DMC collaborated on the specification of standardized web service interfaces for use at any seismological data center. 
These data access interfaces, when supported by multiple data centers, will form a foundation on which to build discovery and access mechanisms for data sets spanning multiple centers. To promote the adoption of these standardized services the DMC has developed portable implementations of the software needed to host these interfaces, minimizing the work required at each data center. Within the COOPEUS project framework, the DMC is working with EU partners to install web services implementations at multiple data centers in Europe.
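The programmatic access described above amounts to building parameterized HTTP queries. As a sketch, a time-series request in the style of the FDSN dataselect web service can be assembled from a few fields; the station and time-window values here are examples only.

```python
# Sketch of constructing a query URL in the style of the FDSN dataselect
# web service. The network/station/channel values and time window are
# illustrative examples.

from urllib.parse import urlencode

def dataselect_url(base, network, station, channel, start, end):
    params = {"net": network, "sta": station, "cha": channel,
              "starttime": start, "endtime": end, "format": "miniseed"}
    return f"{base}?{urlencode(params)}"

url = dataselect_url("https://service.iris.edu/fdsnws/dataselect/1/query",
                     "IU", "ANMO", "BHZ",
                     "2012-01-01T00:00:00", "2012-01-01T01:00:00")
print(url)
```

Because the request is just a URL, it drops directly into scripts and workflow tools, which is the programmatic-access point made above.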

  19. A UML model for the description of different brain-computer interface systems.

    PubMed

    Quitadamo, Lucia Rita; Abbafati, Manuel; Saggio, Giovanni; Marciani, Maria Grazia; Cardarilli, Gian Carlo; Bianchi, Luigi

    2008-01-01

    BCI research lacks a universal descriptive language among labs and a unique standard model for the description of BCI systems. This results in a serious problem in comparing the performances of different BCI processes and in unifying tools and resources. In such a view, we implemented a Unified Modeling Language (UML) model for the description of virtually any BCI protocol and demonstrated that it can be successfully applied to the most common ones, such as P300, mu-rhythms, SCP, SSVEP, and fMRI. Finally, we illustrated the advantages of utilizing a standard terminology for BCIs and how the same basic structure can be successfully adopted for the implementation of new systems.

  20. Fiber optic sensor based on Mach-Zehnder interferometer for securing entrance areas of buildings

    NASA Astrophysics Data System (ADS)

    Nedoma, Jan; Fajkus, Marcel; Martinek, Radek; Mec, Pavel; Novak, Martin; Bednarek, Lukas; Vasinek, Vladimir

    2017-10-01

    The authors of this article focused on the utilization of fiber-optic sensors based on interferometric measurements for securing entrance areas of buildings, such as windows and doors. We described the implementation of a fiber-optic interferometer (Mach-Zehnder type) into the window frame or door, the sensor sensitivity, the analysis of the background noise, and methods of signal evaluation. The advantages of the presented solution are the use of standard G.652.D telecommunication fiber, high sensitivity, immunity of the sensor to electromagnetic interference (EMI), and the passivity of the sensor with regard to power supply. The authors implemented a Graphical User Interface (GUI) which offers the possibility of remote monitoring of the presented sensing solution.
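The detection principle can be sketched numerically: a disturbance at the secured window or door changes the optical path difference between the two interferometer arms, and the detected output of an ideal Mach-Zehnder interferometer varies as I = (I0/2)(1 + cos Δφ). The phase-swing values below are arbitrary illustration numbers.

```python
# Ideal two-beam interference at the Mach-Zehnder output coupler, and a toy
# vibration-induced phase swing showing up as intensity modulation.

import math

def mzi_intensity(delta_phi, i0=1.0):
    """Detected intensity for phase difference delta_phi between the arms."""
    return 0.5 * i0 * (1.0 + math.cos(delta_phi))

print(mzi_intensity(0.0))      # 1.0, constructive interference
print(mzi_intensity(math.pi))  # ~0.0, destructive interference

# A disturbance modulating the phase by +/-0.3 rad is visible at the detector:
trace = [mzi_intensity(0.3 * math.sin(0.1 * n)) for n in range(100)]
print(round(max(trace) - min(trace), 4))  # nonzero modulation depth
```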

  1. MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices

    NASA Astrophysics Data System (ADS)

    Shasharina, Svetlana G.; Cary, John R.

    1997-05-01

    MAPA (Modular Accelerator Physics Analysis) is an object oriented application for accelerator design and analysis with a Motif based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. MAPA provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of Lifetime plots and Surfaces of Section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and prevent creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.

  2. 12-bit 32 channel 500 MS/s low-latency ADC for particle accelerators real-time control

    NASA Astrophysics Data System (ADS)

    Karnitski, Anton; Baranauskas, Dalius; Zelenin, Denis; Baranauskas, Gytis; Zhankevich, Alexander; Gill, Chris

    2017-09-01

    Particle beam control systems require real-time low-latency digital feedback with high linearity and dynamic range. Densely packed electronic systems employ high performance multichannel digitizers, causing excessive heat dissipation. Therefore, low power dissipation is another critical requirement for these digitizers. The described 12-bit 500 MS/s ADC employs a sub-ranging architecture based on a merged sample & hold circuit, a residue C-DAC and a shared 6-bit flash core ADC. The core ADC provides sequential coarse and fine digitization featuring a latency of two clock cycles. The ADC is implemented in a 28 nm CMOS process and consumes 4 mW of power per channel from a 0.9 V supply (interfacing and peripheral circuits are excluded). The reduced power consumption and small on-chip area permit the implementation of 32 ADC channels on a 10.7 mm2 chip. The ADC includes a JESD204B standard compliant output data interface operated at the 7.5 Gbps/ch rate. To minimize the data-interface-related time latency, a special feature permitting the JESD204B interface to be bypassed is built in. DoE Phase I Award Number: DE-SC0017213.
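The sub-ranging scheme can be sketched numerically: a coarse 6-bit pass, subtraction of the coarse estimate (the role of the residue C-DAC), then a fine 6-bit pass on the residue. The model below is ideal, with no redundancy or error correction between the two passes, so it only illustrates the principle, not the chip's actual transfer function.

```python
# Idealized sub-ranging ADC model: 6-bit coarse conversion, C-DAC-style
# residue subtraction, 6-bit fine conversion, combined into a 12-bit code.

def subranging_adc(v, vref=1.0, coarse_bits=6, fine_bits=6):
    coarse_lsb = vref / (1 << coarse_bits)
    coarse = min(int(v / coarse_lsb), (1 << coarse_bits) - 1)
    residue = v - coarse * coarse_lsb           # what the residue DAC removes
    fine_lsb = coarse_lsb / (1 << fine_bits)
    fine = min(int(residue / fine_lsb), (1 << fine_bits) - 1)
    return (coarse << fine_bits) | fine         # 12-bit output code

print(subranging_adc(0.5))   # 2048, mid-scale of the 4096-code range
print(subranging_adc(0.0))   # 0
```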

  3. Performance analysis of a proposed tightly-coupled medical instrument network based on CAN protocol.

    PubMed

    Mujumdar, Shantanu; Thongpithoonrat, Pongnarin; Gurkan, D; McKneely, Paul K; Chapman, Frank M; Merchant, Fatima

    2010-01-01

    Advances in medical devices and health care have been phenomenal during recent years. Although medical device manufacturers have been improving their instruments, network connection of these instruments still relies on proprietary technologies. Even if an interface has been provided by the manufacturer (e.g., RS-232, USB, or Ethernet coupled with a proprietary API), there is no widely accepted uniform data model to access data of various bedside instruments. There is a need for a common standard which allows for internetworking with medical devices from different manufacturers. ISO/IEEE 11073 (X73) is a standard attempting to unify the interfaces of all medical devices. X73 defines a client access mechanism that would be implemented into the communication controllers (residing between an instrument and the network) in order to access and network patient data. On the other hand, the MediCAN™ technology suite has been demonstrated with various medical instruments to achieve interfacing and networking with a similar goal in its open standardization approach. However, it provides a more generic definition of medical data to achieve flexibility for networking and client access mechanisms. The instruments are in turn becoming more sophisticated; however, the operation of an instrument is still expected to be performed locally by authorized medical personnel. Unfortunately, each medical instrument has its own proprietary API (application programming interface, if any) to provide automated and electronic access to monitoring data. Integration of these APIs requires an agreement with the manufacturers towards realization of interoperable health care networking. As long as the interoperability of instruments with a network is not possible, ubiquitous access to patient status is limited only to manual-entry-based systems. 
This paper demonstrates an attempt to realize an interoperable medical instrument interface for networking using MediCAN technology suite as an open standard.
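The uniform data model the abstract calls for can be illustrated with a small sketch: a vendor-neutral observation record plus a per-vendor adapter. All names here (Observation, normalize_pulse_ox, the raw payload keys) are hypothetical and do not reproduce the actual X73 or MediCAN object models.

```python
from dataclasses import dataclass
import time

@dataclass
class Observation:
    # Hypothetical uniform record; the real X73/MediCAN models differ.
    device_id: str
    metric: str        # e.g. "heart_rate", "spo2"
    value: float
    unit: str
    timestamp: float

def normalize_pulse_ox(raw: dict, device_id: str) -> list[Observation]:
    """Adapter for an invented pulse-oximeter vendor payload: maps the
    vendor's field names onto the shared Observation record."""
    ts = raw.get("ts", time.time())
    return [
        Observation(device_id, "spo2", float(raw["SpO2"]), "%", ts),
        Observation(device_id, "pulse_rate", float(raw["PR"]), "1/min", ts),
    ]

obs = normalize_pulse_ox({"SpO2": 97, "PR": 72, "ts": 1000.0}, "oximeter-1")
```

Each additional vendor would need only its own adapter; consumers see one record type regardless of manufacturer.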

  4. Ada/POSIX binding: A focused Ada investigation

    NASA Technical Reports Server (NTRS)

    Legrand, Sue

    1988-01-01

NASA is seeking an operating system interface definition (OSID) for the Space Station Program (SSP) in order to take advantage of the commercial off-the-shelf (COTS) products available today and the many expected in the future. NASA would also like to avoid reliance on any one source for operating systems, information systems, communication systems, or instruction set architectures. The use of the Portable Operating System Interface for Computer Environments (POSIX) is examined as a possible solution to this problem. Since Ada is already the language of choice for the SSP, the question of an Ada/POSIX binding is addressed. The intent of the binding is to provide access to the POSIX standard operating system (OS) interface and environment, by which portability of Ada applications will be supported at the source code level. A guiding principle of Ada/POSIX binding development is clear conformance of the Ada interface with the functional definition of POSIX. The interface is intended to be used by both application developers and system implementors. The objective is to provide a standard that allows a strictly conforming application source program to be compiled and executed on any conforming implementation. Special emphasis is placed on first providing those functions and facilities that are needed in a wide variety of commercial applications.

  5. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  6. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamics model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML with a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class are described, and comparisons with existing, conventional C++ aerodynamic models of the same vehicle are given.

  7. Shiny FHIR: An Integrated Framework Leveraging Shiny R and HL7 FHIR to Empower Standards-Based Clinical Data Applications.

    PubMed

    Hong, Na; Prodduturi, Naresh; Wang, Chen; Jiang, Guoqian

    2017-01-01

In this study, we describe our efforts in building a clinical statistics and analysis application platform using an emerging clinical data standard, HL7 FHIR, and an open source web application framework, Shiny. We designed two primary workflows that integrate a series of R packages to enable both patient-centered and cohort-based interactive analyses. We leveraged Shiny with R to develop interactive interfaces on FHIR-based data and used ovarian cancer study datasets as a use case to implement a prototype. Specifically, we implemented a patient index, patient-centered data report and analysis, and cohort analysis. The evaluation of our study was performed by testing the adaptability of the framework on two public FHIR servers. We identify common research requirements and current outstanding issues, and discuss future enhancements to the current work. Overall, our study demonstrated that it is feasible to use Shiny for implementing interactive analysis on FHIR-based standardized clinical data.
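A patient-centered workflow like the one described starts from FHIR searchset Bundles returned by the server. The sketch below, in Python rather than the authors' R, extracts patient family names from a Bundle already parsed from JSON; only the standard Bundle/entry/resource/name structure of FHIR is assumed, and the sample data is invented.

```python
def patient_family_names(bundle: dict) -> list[str]:
    """Pull family names out of a FHIR searchset Bundle (as parsed JSON),
    walking the standard entry -> resource -> name path."""
    names = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") == "Patient":
            for name in resource.get("name", []):
                if "family" in name:
                    names.append(name["family"])
    return names

# Invented minimal Bundle, shaped like a FHIR search response.
bundle = {"resourceType": "Bundle", "type": "searchset", "entry": [
    {"resource": {"resourceType": "Patient",
                  "name": [{"family": "Smith", "given": ["Ann"]}]}}]}
names = patient_family_names(bundle)
```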

  8. Medical instrument data exchange.

    PubMed

    Gumudavelli, Suman; McKneely, Paul K; Thongpithoonrat, Pongnarin; Gurkan, D; Chapman, Frank M

    2008-01-01

Advances in medical devices and health care have been phenomenal in recent years. Although medical device manufacturers have been improving their instruments, network connection of these instruments still relies on proprietary technologies. Even when an interface is provided by the manufacturer (e.g., RS-232, USB, or Ethernet coupled with a proprietary API), there is no widely accepted uniform data model for accessing data from various bedside instruments. A common standard is needed that allows internetworking among medical devices from different manufacturers. ISO/IEEE 11073 (X73) is a standard attempting to unify the interfaces of all medical devices. X73 defines a client access mechanism to be implemented in the communication controllers (residing between an instrument and the network) in order to access and network patient data. The MediCAN technology suite, with a similar goal in its open standardization approach, has been demonstrated with various medical instruments to achieve interfacing and networking; however, it provides a more generic definition of medical data to give flexibility in networking and client access mechanisms. In this paper, a comparison between the data models of X73 and MediCAN is presented to encourage interoperability demonstrations of medical instruments.

  9. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video and has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions to the specification include: video switching, stereo and 3-D provisions, color-sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates up to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. The additions to the specification are especially appealing for high-bandwidth, multi-sensor systems with throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber-optic high-speed physical layers, and allows time-multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.
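A rough feasibility check for carrying a sensor's video over one of these links can be done with back-of-the-envelope arithmetic. The sketch below assumes 8b/10b line coding (used at the classic Fibre Channel rates ARINC 818 inherits; higher rates use more efficient codes) and ignores container and protocol overhead, so it is an optimistic estimate, not a link budget.

```python
def video_payload_gbps(width, height, bits_per_pixel, fps):
    """Raw pixel bandwidth of an uncompressed video stream, in Gbps."""
    return width * height * bits_per_pixel * fps / 1e9

def usable_gbps(link_rate_gbps, encoding_efficiency=0.8):
    """Payload capacity after line coding. 8b/10b carries 8 payload bits
    per 10 line bits (efficiency 0.8); an assumed, conservative figure."""
    return link_rate_gbps * encoding_efficiency

need = video_payload_gbps(1920, 1080, 24, 60)  # 1080p60, 24-bit color
have = usable_gbps(4.25)                       # a 4.25 Gbps FC-derived rate
fits = need < have
```

Here a 1080p60 24-bit stream needs roughly 3.0 Gbps of payload, which fits within the roughly 3.4 Gbps a 4.25 Gbps link delivers after 8b/10b coding, before protocol overhead.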

  10. Future Concepts for Realtime Data Interfaces for Control Centers

    NASA Technical Reports Server (NTRS)

    Kearney, Mike W., III

    2004-01-01

Existing methods of exchanging realtime data between the major control centers in the International Space Station program have resulted in a patchwork of local formats being imposed on each Mission Control Center. This puts the burden on a data customer to comply with the proprietary data formats of each data supplier. This has increased the cost and complexity for each participant, limited access to mission data and hampered the development of efficient and flexible operations concepts. Ideally, a universal format should be promoted in the industry to prevent the unnecessary burden of each center processing a different data format standard for every external interface with another center. With the broad acceptance of XML and other conventions used in other industries, it is now time for the aerospace industry to fully engage and establish such a standard. This paper will briefly consider the components that would be required by such a standard (XML schema, data dictionaries, etc.) in order to accomplish the goal of a universal low-cost interface and acquire broad industry acceptance. We will then examine current approaches being developed by standards bodies and other groups. The current state of CCSDS panel work will be reviewed, with a survey of the degree of industry acceptance. Other widely accepted commercial approaches will be considered, sometimes complementary to the standards work, but sometimes not. The question is whether de facto industry standards are in concert with, or in conflict with, the direction of the standards bodies. And given that state of affairs, the author will consider whether a new program establishing its Mission Control Center should implement a data interface based on those standards. The author proposes that broad industry support to unify the various efforts will enable collaboration between control centers and space programs to a wider degree than is currently available.
This will reduce the cost for programs to provide realtime access to their data, hence reducing the cost of access to space, and benefiting the industry as a whole.
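A universal XML exchange of the kind proposed could look, in miniature, like the sketch below. The element and attribute names are invented for illustration; they are not drawn from any CCSDS schema or data dictionary.

```python
import xml.etree.ElementTree as ET

def encode_sample(mnemonic, value, utc):
    """Serialize one telemetry sample as a tiny XML message.
    Element/attribute names here are hypothetical."""
    root = ET.Element("telemetry")
    p = ET.SubElement(root, "parameter", name=mnemonic, utc=utc)
    p.text = str(value)
    return ET.tostring(root, encoding="unicode")

def decode_samples(xml_text):
    """Parse the message back into a {mnemonic: value} dict; any center
    that speaks the shared schema can decode it the same way."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): float(p.text) for p in root.iter("parameter")}

msg = encode_sample("CABIN_PRESS", 14.7, "2004-001T12:00:00Z")
decoded = decode_samples(msg)
```

The point the paper makes is that once the schema and data dictionary are shared, the encode and decode sides need to be written once per program rather than once per external interface.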

  11. A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Eliasmith, Chris; Stewart, Terry; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena

    2011-01-01

    Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
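The decoder described is a steady-state Kalman filter: predict the velocity state with the dynamics model, then correct with a fixed gain on the innovation. A minimal numeric sketch, with toy matrices rather than the trained values from the paper:

```python
import numpy as np

def kalman_decode_step(x_prev, y, A, C, K):
    """One steady-state Kalman update: predict with dynamics A, then
    correct with gain K applied to the innovation (y - C @ x_pred)."""
    x_pred = A @ x_prev
    return x_pred + K @ (y - C @ x_pred)

# Toy 2-D velocity state driven by 3 "firing rate" channels; all
# matrices are illustrative, not fitted to neural data.
A = np.array([[0.9, 0.0], [0.0, 0.9]])
C = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
K = np.array([[0.2, 0.0, 0.1], [0.0, 0.2, 0.1]])

x = np.zeros(2)
for y in [np.array([1.0, 0.0, 0.5]), np.array([1.0, 0.0, 0.5])]:
    x = kalman_decode_step(x, y, A, C, K)
```

Mapping this recursion onto a spiking network, as the paper does with the NEF, replaces the matrix multiplies with neural populations whose connection weights realize A, C and K.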

  12. Guide for Operational Configuration Management Program including the adjunct programs of design reconstitution and material condition and aging management. Part 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This standard presents program criteria and implementation guidance for an operational configuration management program for DOE nuclear and non-nuclear facilities in the operational phase. Portions of this standard are also useful for other DOE processes, activities, and programs. This Part 1 contains foreword, glossary, acronyms, bibliography, and Chapter 1 on operational configuration management program principles. Appendices are included on configuration management program interfaces, and background material and concepts for operational configuration management.

  13. STS payloads mission control study continuation phase A-1. Volume 2-B: Task 2. Evaluation and refinement of implementation guidelines for the selected STS payload operator concept

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The functions of Payload Operations Control Centers (POCC) at JSC, GSFC, JPL, and non-NASA locations are analyzed to establish guidelines for standardization, and facilitate the development of a fully integrated NASA-wide system of ground facilities for all classes of payloads. Operational interfaces between the space transportation system operator and the payload operator elements are defined. The advantages and disadvantages of standardization are discussed.

  14. Adopting Industry Standards for Control Systems Within Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Young, James Scott; Boulanger, Richard

    2002-01-01

This paper gives a description of OPC (Object Linking and Embedding for Process Control) standards for process control and outlines the experiences at JSC with using these standards to interface with I/O hardware from three independent vendors. The I/O hardware was integrated with a commercially available SCADA/HMI software package to make up the control and monitoring system for the Environmental Systems Test Stand (ESTS). OPC standards were utilized for communicating with I/O hardware, and the software was used for implementing monitoring, PC-based distributed control, and redundant data storage over an Ethernet physical layer using an embedded DIN-rail-mounted PC.

  15. Use of small stand-alone Internet nodes as a distributed control system

    NASA Astrophysics Data System (ADS)

    Goodwin, Robert W.; Kucera, Michael J.; Shea, Michael F.

    1994-12-01

    For several years, the standard model for accelerator control systems has been workstation consoles connected to VME local stations by a Local Area Network with analog and digital data being accessed via a field bus to custom I/O interface electronics. Commercially available hardware has now made it possible to implement a small stand-alone data acquisition station that combines the LAN connection, the computer, and the analog and digital I/O interface on a single board. This eliminates the complexity of a field bus and the associated proprietary I/O hardware. A minimum control system is one data acquisition station and a Macintosh or workstation console, both connected to the network; larger systems have more consoles and nodes. An implementation of this architecture is described along with performance and operational experience.

  16. Open Technology Approaches to Geospatial Interface Design

    NASA Astrophysics Data System (ADS)

    Crevensten, B.; Simmons, D.; Alaska Satellite Facility

    2011-12-01

What problems do you not want your software developers to be solving? Choosing open technologies across the entire stack of software development, from low-level shared libraries to high-level user interaction implementations, is a way to help ensure that customized software yields innovative and valuable tools for Earth scientists. This demonstration will review developments in web application technologies and the recurring patterns of interaction design regarding exploration and discovery of geospatial data through Vertex, ASF's Dataportal interface, a project utilizing current open web application standards and technologies including HTML5, jQueryUI, Backbone.js and the Jasmine unit testing framework.

  17. Assessment of the Orion-SLS Interface Management Process in Achieving the EIA 731.1 Systems Engineering Capability Model Generic Practices Level 3 Criteria

    NASA Technical Reports Server (NTRS)

    Jellicorse, John J.; Rahman, Shamin A.

    2016-01-01

NASA is currently developing the next-generation crewed spacecraft and launch vehicle for exploration beyond Earth orbit, including returning to the Moon and making the transit to Mars. Managing the design integration of major hardware elements of a space transportation system is critical for overcoming both the technical and programmatic challenges in taking a complex system from concept to space operations. An established method of accomplishing this is formal interface management. In this paper we set forth an argument that the interface management process implemented by NASA between the Orion Multi-Purpose Crew Vehicle (MPCV) and the Space Launch System (SLS) achieves the Level 3 tier of the EIA 731.1 Systems Engineering Capability Model (SECM) for Generic Practices. We describe the relevant NASA systems and associated organizations, and define the EIA SECM Level 3 Generic Practices. We then provide evidence for our compliance with those practices. This evidence includes discussions of: the NASA systems engineering (SE) interface management standard process and best practices; the tailoring of that process for implementation on the Orion-to-SLS interface; changes made over time to improve the tailored process; and the opportunities to take the resulting lessons learned and propose improvements to our institutional processes and best practices. We compare this evidence against the practices to form the rationale for the declared SECM maturity level.

  18. Information System through ANIS at CeSAM

    NASA Astrophysics Data System (ADS)

    Moreau, C.; Agneray, F.; Gimenez, S.

    2015-09-01

ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the implementation of astronomical data of various kinds through private and/or public dedicated information systems. The architecture of ANIS is composed of a database server containing the project data; a web user interface template providing high-level services (searching, extracting and displaying imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module, or a cone-search interface); a framework composed of several packages; and a metadata database managed by a web administration entity. The process to implement a new ANIS instance at CeSAM is easy and fast: the scientific project submits data or secure access to its data, the CeSAM team installs the new instance (the web interface template and the metadata database), and the project administrator can configure the instance with the web ANIS administration entity. Currently, CeSAM offers through ANIS web access to VO-compliant information systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT, ...).
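The cone-search service mentioned follows the IVOA Simple Cone Search convention of passing RA, DEC and a search radius SR, all in decimal degrees, as query parameters. A minimal sketch (the endpoint URL is a placeholder, not a real ANIS service):

```python
from urllib.parse import urlencode

def cone_search_url(base, ra_deg, dec_deg, sr_deg):
    """Build an IVOA Simple Cone Search request: RA, DEC and SR
    in decimal degrees appended as query parameters."""
    return base + "?" + urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": sr_deg})

# Placeholder endpoint; a real service would return a VOTable of sources.
url = cone_search_url("https://example.org/anis/conesearch", 150.1, 2.2, 0.05)
```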

  19. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

In this paper we present a modern object-oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry-standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame-grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  20. Wearable system-on-a-chip UWB radar for health care and its application to the safety improvement of emergency operators.

    PubMed

    Zito, Domenico; Pepe, Domenico; Neri, Bruno; De Rossi, Danilo; Lanatà, Antonio; Tognetti, Alessandro; Scilingo, Enzo Pasquale

    2007-01-01

A new wearable system-on-a-chip UWB radar for health care systems is presented. The idea and its applications to the safety improvement of emergency operators are discussed. The system consists of a wearable wireless interface including a fully integrated UWB radar for the detection of heartbeat and breathing rates, and an IEEE 802.15.4 ZigBee radio interface. The principle of operation of the UWB radar for monitoring the heart wall is explained hereinafter. The results obtained in the feasibility study of its implementation in a modern standard silicon technology (90 nm CMOS) are reported, demonstrating (at the simulation level) the effectiveness of such an approach and enabling standard silicon technology for new generations of wearable wireless sensors for health care and the safeguarding of emergency operators.

  1. A component-based, distributed object services architecture for a clinical workstation.

    PubMed

    Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O

    1996-01-01

Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems and newly designed software. We describe one approach to an architecture for a clinical workstation application, based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services, including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components that can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.

  2. A component-based, distributed object services architecture for a clinical workstation.

    PubMed Central

    Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.

    1996-01-01

Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems and newly designed software. We describe one approach to an architecture for a clinical workstation application, based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services, including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components that can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744

  3. JAliEn - A new interface between the AliEn jobs and the central services

    NASA Astrophysics Data System (ADS)

    Grigoras, A. G.; Grigoras, C.; Pedreira, M. M.; Saiz, P.; Schreiner, S.

    2014-06-01

Since the ALICE experiment began data taking in early 2010, the amount of end-user jobs on the AliEn Grid has increased significantly. Presently, one third of the 40K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users, individually or in organized analysis trains. The overall stability of the AliEn middleware has been excellent throughout the three years of running, but the massive amount of end-user analysis, with its specific requirements and load, has revealed a few components that can be improved. One of them is the interface between users and the central AliEn services (catalogue, job submission system), which we are currently re-implementing in Java. The interface provides a persistent connection with enhanced data and job submission authenticity. In this paper we describe the architecture of the new interface, the ROOT binding which enables the use of a single interface in addition to the standard UNIX-like access shell, and the new security-related features.

  4. Interoperability at ESA Heliophysics Science Archives: IVOA, HAPI and other implementations

    NASA Astrophysics Data System (ADS)

    Martinez-Garcia, B.; Cook, J. P.; Perez, H.; Fernandez, M.; De Teodoro, P.; Osuna, P.; Arnaud, M.; Arviset, C.

    2017-12-01

The data of ESA heliophysics science missions are preserved at the ESAC Science Data Centre (ESDC). The ESDC aims for the long-term preservation of those data, which include missions such as Ulysses, SOHO, Proba-2, Cluster, Double Star, and, in the future, Solar Orbiter. Scientists have access to these data through web services, command-line and graphical user interfaces for each of the corresponding science mission archives. The International Virtual Observatory Alliance (IVOA) provides technical standards that allow interoperability among different systems that implement them. By adopting some IVOA standards, the ESA heliophysics archives are able to share their data with those tools and services that are VO-compatible. Implementations of those standards can be found in the existing archives: the Ulysses Final Archive (UFA) and the SOHO Science Archive (SSA) already make use of the VOTable format definition and the Simple Application Messaging Protocol (SAMP). For re-engineered or new archives, the implementation of services through the Table Access Protocol (TAP) or the Universal Worker Service (UWS) will leverage this interoperability; this will be the case for the Proba-2 Science Archive (P2SA) and the Solar Orbiter Archive (SOAR). We present here the IVOA standards already used by the ESA heliophysics archives and the work ongoing.
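A TAP service of the kind planned for P2SA and SOAR accepts synchronous ADQL queries at its /sync endpoint via the REQUEST, LANG and QUERY parameters. A minimal sketch of building such a request (the service URL is a placeholder):

```python
from urllib.parse import urlencode

def tap_sync_url(base_tap_url, adql):
    """Build an IVOA TAP synchronous query URL with the standard
    REQUEST/LANG/QUERY parameters. The base URL is a placeholder."""
    params = {"REQUEST": "doQuery", "LANG": "ADQL", "QUERY": adql}
    return base_tap_url.rstrip("/") + "/sync?" + urlencode(params)

url = tap_sync_url("https://example.org/tap",
                   "SELECT TOP 5 * FROM ivoa.obscore")
```

Any VO-compatible client can issue the same request, which is exactly the interoperability benefit the abstract describes.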

  5. jmzTab: a java interface to the mzTab data standard.

    PubMed

    Xu, Qing-Wei; Griss, Johannes; Wang, Rui; Jones, Andrew R; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2014-06-01

    mzTab is the most recent standard format developed by the Proteomics Standards Initiative. mzTab is a flexible tab-delimited file that can capture identification and quantification results coming from MS-based proteomics and metabolomics approaches. We here present an open-source Java application programming interface for mzTab called jmzTab. The software allows the efficient processing of mzTab files, providing read and write capabilities, and is designed to be embedded in other software packages. The second key feature of the jmzTab model is that it provides a flexible framework to maintain the logical integrity between the metadata and the table-based sections in the mzTab files. In this article, as two example implementations, we also describe two stand-alone tools that can be used to validate mzTab files and to convert PRIDE XML files to mzTab. The library is freely available at http://mztab.googlecode.com. © 2014 The Authors PROTEOMICS Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
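mzTab's tab-delimited layout, with a short code in the first column identifying each line's section, makes it straightforward to read. The sketch below, a toy reader in Python rather than the jmzTab Java API, handles only MTD metadata lines and a protein table (PRH header row, PRT data rows); real mzTab has more sections and far stricter validation rules.

```python
def parse_mztab(text):
    """Minimal mzTab reader sketch: collect MTD key/value metadata and
    protein rows keyed by the PRH header columns."""
    meta, header, rows = {}, None, []
    for line in text.splitlines():
        fields = line.split("\t")
        if fields[0] == "MTD":
            meta[fields[1]] = fields[2]
        elif fields[0] == "PRH":
            header = fields[1:]
        elif fields[0] == "PRT":
            rows.append(dict(zip(header, fields[1:])))
    return meta, rows

sample = ("MTD\tmzTab-version\t1.0.0\n"
          "PRH\taccession\tdescription\n"
          "PRT\tP12345\tExample protein")
meta, rows = parse_mztab(sample)
```

This mirrors the logical-integrity concern the abstract raises: the table rows are meaningful only relative to the header and metadata sections, which a robust library must keep consistent.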

  6. Availability of the OGC geoprocessing standard: March 2011 reality check

    NASA Astrophysics Data System (ADS)

    Lopez-Pellicer, Francisco J.; Rentería-Agualimpia, Walter; Béjar, Rubén; Muro-Medrano, Pedro R.; Zarazaga-Soria, F. Javier

    2012-10-01

    This paper presents an investigation about the servers available in March 2011 conforming to the Web Processing Service interface specification published by the geospatial standards organization Open Geospatial Consortium (OGC) in 2007. This interface specification gives support to standard Web-based geoprocessing. The data used in this research were collected using a focused crawler configured for finding OGC Web services. The research goals are (i) to provide a reality check of the availability of Web Processing Service servers, (ii) to provide quantitative data about the use of different features defined in the standard that are relevant for a scalable Geoprocessing Web (e.g. long-running processes, Web-accessible data outputs), and (iii) to test if the advances in the use of search engines and focused crawlers for finding Web services can be applied for finding geoscience processing systems. Research results show the feasibility of the discovery approach and provide data about the implementation of the Web Processing Service specification. These results also show extensive use of features related to scalability, except for those related to technical and semantic interoperability.
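Probing a candidate URL the way such a crawler might involves issuing an OGC KVP GetCapabilities request and checking the namespace of the response. A minimal sketch (the example URL is a placeholder):

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def getcaps_url(base):
    """Standard OGC KVP parameters for a WPS 1.0.0 GetCapabilities call."""
    return base + "?" + urlencode(
        {"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"})

def looks_like_wps(xml_text):
    """Crude probe: does the response root live in the WPS 1.0.0
    namespace? Returns False on unparsable responses."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    return root.tag.startswith("{http://www.opengis.net/wps/1.0.0}")

# Invented minimal response a conforming server might return.
sample = '<Capabilities xmlns="http://www.opengis.net/wps/1.0.0"/>'
```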

  7. Design and implementation of a seamless and comprehensive integrated medical device interface system for outpatient electronic medical records in a general hospital.

    PubMed

    Choi, Jong Soo; Lee, Jean Hyoung; Park, Jong Hwan; Nam, Han Seung; Kwon, Hyuknam; Kim, Dongsoo; Park, Seung Woo

    2011-04-01

Implementing an efficient Electronic Medical Record (EMR) system is regarded as one of the key strategies for improving the quality of healthcare services. However, interoperability between medical devices and the EMR is a big barrier to deploying the EMR system in an outpatient clinical setting. The purpose of this study is to design a framework for a seamless and comprehensively integrated medical device interface system, and to develop and implement a system for accelerating the deployment of the EMR. We designed and developed a framework that transforms data from medical devices into the relevant standards and then stores them in the EMR. The framework is composed of 5 interfacing methods according to the types of medical devices utilized at an outpatient clinical setting, registered in the Samsung Medical Center (SMC) database. The medical devices used for this study were devices with embedded microchips or that came packaged with personal computers. The devices are completely integrated with the EMR based on SMC's long-term IT strategies. First deployment of integrating 352 medical devices into the EMR took place in April 2006, and full integration took about 48 months. By March 2010, every medical device was interfaced with the EMR. About 66,000 medical examinations per month were performed, taking up an average of 50 GB of storage space. We surveyed users, mainly the technicians. Of the 73 that responded, 76% replied that they were strongly satisfied or satisfied, 20% were neutral, and only 4% complained about the speed of the system, which was attributed to the slow speed of the older medical devices and computers. The current implementation of the medical device interface system based on the SMC framework significantly streamlines the clinical workflow in a satisfactory manner. 2010 Elsevier Ireland Ltd. All rights reserved.

  8. Protocol standards and implementation within the digital engineering laboratory computer network (DELNET) using the universal network interface device (UNID). Part 2

    NASA Astrophysics Data System (ADS)

    Phister, P. W., Jr.

    1983-12-01

    Development of the Air Force Institute of Technology's Digital Engineering Laboratory Network (DELNET) was continued with an initial draft of a protocol standard for all seven layers specified by the International Standards Organization's (ISO) Reference Model for Open Systems Interconnection. This effort centered on restructuring the Network Layer to perform datagram routing and to conform to the developed protocol standards, and on software module development of the upper four protocol layers residing within the DELNET Monitor (Zilog MCZ 1/25 Computer System). Within the guidelines of the ISO Reference Model, the Transport Layer was developed using the Internet Header Format (IHF) combined with the Transmission Control Protocol (TCP) to create a 128-byte datagram. A limited Application Layer was also created to pass the Gettysburg Address through the DELNET. This study formulated a first draft of the DELNET Protocol Standard and designed, implemented, and tested the Network, Transport, and Application Layers to conform to these protocol standards.
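
The fixed 128-byte datagram idea can be illustrated with a small framing routine. The header layout, field names, and toy checksum below are assumptions for illustration, not the actual DELNET format:

```python
# Illustrative sketch of framing a fixed-size 128-byte datagram with a
# small header, loosely in the spirit of the Transport Layer described
# above. The field layout is an assumption, not the DELNET standard.
import struct

HEADER_FMT = "!BBHH"            # version, protocol, payload length, checksum (assumed)
DATAGRAM_SIZE = 128
HEADER_SIZE = struct.calcsize(HEADER_FMT)   # 6 bytes for this layout

def make_datagram(payload: bytes, version: int = 1, proto: int = 6) -> bytes:
    body = payload[: DATAGRAM_SIZE - HEADER_SIZE]   # truncate to fit
    checksum = sum(body) & 0xFFFF                   # toy checksum
    header = struct.pack(HEADER_FMT, version, proto, len(body), checksum)
    return (header + body).ljust(DATAGRAM_SIZE, b"\x00")  # pad to 128 bytes

dg = make_datagram(b"Four score and seven years ago ...")
print(len(dg))  # 128
```

A text like the Gettysburg Address would be split across a sequence of such datagrams, each carrying at most `DATAGRAM_SIZE - HEADER_SIZE` bytes of payload.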

  9. GeoNetwork powered GI-cat: a geoportal hybrid solution

    NASA Astrophysics Data System (ADS)

    Baldini, Alessio; Boldrini, Enrico; Santoro, Mattia; Mazzetti, Paolo

    2010-05-01

    In setting up a Spatial Data Infrastructure (SDI), a system for metadata management and discovery plays a fundamental role. An effective solution is the use of a geoportal (e.g. the FAO/ESA geoportal), which has the important benefit of being accessible from a web browser. With this work we present a solution based on integrating two of the available frameworks: GeoNetwork and GI-cat. GeoNetwork is an open-source software package designed to improve the accessibility of a wide variety of data together with the associated ancillary information (metadata), at different scales and from multidisciplinary sources; data are organized and documented in a standard and consistent way. GeoNetwork implements both the Portal and Catalog components of a Spatial Data Infrastructure (SDI) as defined in the OGC Reference Architecture. It provides tools for managing and publishing metadata on spatial data and related services. GeoNetwork allows harvesting of various types of web data sources, e.g. OGC Web Services (CSW, WCS, WMS). GI-cat is a distributed catalog based on a service-oriented framework of modular components and can be customized and tailored to support different deployment scenarios. It can federate a multiplicity of catalog services, as well as inventory and access services, in order to discover and access heterogeneous ESS resources. The federated resources are exposed by GI-cat through several standard catalog interfaces (e.g. OGC CSW AP ISO, OpenSearch, etc.) and by the GI-cat extended interface. Specific components, called Accessors, implement mediation services for interfacing heterogeneous service providers, each of which exposes a specific standard specification. These mediating components resolve the multiplicity of provider data models by mapping them onto the GI-cat internal data model, which implements the ISO 19115 Core profile.
Accessors also implement the query protocol mapping: they translate query requests expressed according to the interface protocols exposed by GI-cat into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services, THREDDS Data Server, SeaDataNet Common Data Index, GBIF, and OpenSearch engines. A GeoNetwork-powered GI-cat has been developed in order to exploit the best of the two frameworks. The new system uses a modified version of the GeoNetwork web interface that adds the capability of querying a specified GI-cat catalog, not only the GeoNetwork internal database. The resulting system is a geoportal in which GI-cat plays the role of the search engine. It allows the query to be distributed over the different types of data sources linked to a GI-cat instance; the metadata results are then visualized by the GeoNetwork web interface. This configuration was tested in the framework of GIIDA, a project of the Italian National Research Council (CNR) focused on data accessibility and interoperability. A second advantage of this solution is achieved by setting up a GeoNetwork catalog amongst the accessors of the GI-cat instance. Such a configuration will in turn allow GI-cat to run queries against the internal GeoNetwork database, making both the harvesting and metadata editor functionalities provided by GeoNetwork and the distributed search functionality of GI-cat available in a consistent way through the same web interface.
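
The accessor mediation pattern described above (one common query translated into each provider's dialect) can be sketched as follows. The class names and the two toy query syntaxes are hypothetical, not GI-cat's actual interfaces:

```python
# Minimal sketch of the "accessor" mediation pattern: a common query is
# translated into each provider's dialect. Provider names and query
# syntaxes here are hypothetical illustrations.
class CSWAccessor:
    def translate(self, q: dict) -> str:
        # Illustrative CSW-like text filter
        return f"csw:AnyText LIKE '%{q['text']}%'"

class OpenSearchAccessor:
    def translate(self, q: dict) -> str:
        # Illustrative OpenSearch-like URL parameter
        return f"?searchTerms={q['text']}"

def federated_query(q: dict, accessors: list) -> dict:
    """Build one provider-specific query per accessor."""
    return {type(a).__name__: a.translate(q) for a in accessors}

plans = federated_query({"text": "temperature"},
                        [CSWAccessor(), OpenSearchAccessor()])
print(plans["OpenSearchAccessor"])  # ?searchTerms=temperature
```

Each accessor would then issue its translated query to its provider and map the results back onto the internal data model, completing the round trip.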

  10. A CBLT and MCST capable VME slave interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wuerthwein, F.; Strohman, C.; Honscheid, K.

    1996-12-31

    We report on the development of a VME slave interface for the CLEO III detector implemented in an ALTERA EPM7256 CPLD. This includes the first implementation of the chained block transfer protocol (CBLT) and multi-cast cycles (MCST) as defined by the VME-P task group of VIPA. Within VME64 there is no operation that guarantees efficient readout of large blocks of data that are sparsely distributed among a series of slave modules in a VME crate. This has led the VME-P task group of VIPA to specify protocols that enable a master to address many slaves at a single address. Which slave is to drive the data bus is determined by a token passing mechanism that uses the *IACKOUT, *IACKIN daisy chain. This protocol requires no special features from the master besides conformance to VME64; non-standard features are restricted to the VME slave interface. The CLEO III detector comprises approximately 400,000 electronic channels that have to be digitized, sparsified, and stored within 20 μs in order to incur less than 2% dead time at an anticipated trigger rate of 1000 Hz. 95% of these channels are accounted for by only two detector subsystems: the silicon microstrip detector (125,000 channels) and the ring imaging Cerenkov detector (RICH) (230,400 channels). After sparsification, either of these two detector subsystems is expected to provide event fragments on the order of 10 KBytes, spread over 4 and 8 VME crates, respectively. We developed a chip set that sparsifies, tags, and stores the incoming digital data on the data boards, and includes a VME slave interface that implements the MCST and CBLT protocols. In this poster, we briefly describe this chip set and then discuss the VME slave interface in detail.
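
The chained-block-transfer idea can be illustrated with a toy simulation: the master issues a single read at a shared address, and a token passed down the *IACKIN/*IACKOUT daisy chain determines which slave drives the bus next. This is purely illustrative; it models none of the actual VME signaling:

```python
# Toy simulation of chained block transfer (CBLT): slaves holding sparse
# event fragments drive the bus in daisy-chain order, then pass the token.
# Illustrative only; no real VME bus semantics are modeled.
class Slave:
    def __init__(self, data):
        self.data = list(data)          # sparse event fragments, if any

    def drive_bus_then_pass_token(self):
        # Drive the data bus while we hold data, then yield the token.
        while self.data:
            yield self.data.pop(0)

def cblt_read(slaves):
    """One logical block read; the token visits every slave in chain order."""
    out = []
    for s in slaves:                    # token travels down the daisy chain
        out.extend(s.drive_bus_then_pass_token())
    return out

crate = [Slave([1, 2]), Slave([]), Slave([3])]   # middle slave has no data
print(cblt_read(crate))  # [1, 2, 3]
```

Note how the empty slave simply passes the token through, which is what makes the scheme efficient for sparsely distributed data.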

  11. Vocabulary services to support scientific data interoperability

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Mills, Katie; Tan, Florence

    2013-04-01

    Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in the Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces: * the vocabulary as a whole - at an ontology URI corresponding to a vocabulary document * each item in the vocabulary - at the item URI * summaries, subsets, and resources derived by transformation * through the standard RDF web API - i.e. a SPARQL endpoint * through a query form for human users. However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 [1] accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc.). SISSvoc3 thus provides a RESTful SKOS API for querying a vocabulary while hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA) [2], which connects to a SPARQL endpoint. By using LDA, we also get content negotiation, alternative views, paging, metadata, and other functionality provided in a standard way.
A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM), and their collaborators using SISSvoc3, including: * geologic timescale (multiple versions) * soils classification * definitions from OGC standards * geosciml vocabularies * mining commodities * hyperspectral scalars Several other agencies in Australia have adopted SISSvoc3 for their vocabularies. SISSvoc3 differs from other SKOS-based vocabulary-access APIs such as GEMET [3] and NVS [4] in that (a) the service is decoupled from the content store, and (b) the service URI is independent of the content URIs. This means that a SISSvoc3 interface can be deployed over any SKOS vocabulary which is available at a SPARQL endpoint. As an example, a SISSvoc3 query and presentation interface has been deployed over the NERC vocabulary service hosted by the BODC, providing a search interface which is not available natively. We use vocabulary services to populate menus in user interfaces, to support data validation, and to configure data conversion routines. Related services built on LDA have also been used as a generic registry interface, and extended for serving gazetteer information. ACKNOWLEDGEMENTS The CSIRO SISSvoc3 implementation is built using the Epimorphics ELDA platform http://code.google.com/p/elda/. We thank Jacqui Githaiga and Terry Rankine for their contributions to SISSvoc design and implementation. REFERENCES 1. SISSvoc3 Specification https://www.seegrid.csiro.au/wiki/Siss/SISSvoc30Specification 2. Linked Data API http://code.google.com/p/linked-data-api/wiki/Specification 3. GEMET https://svn.eionet.europa.eu/projects/Zope/wiki/GEMETWebServiceAPI 4. NVS 2.0 http://vocab.nerc.ac.uk/
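
The kind of SPARQL that a SISSvoc-style service hides from its users can be sketched as a label search over skos:prefLabel. The query builder below is a simple illustration (the SPARQL itself uses standard SKOS terms; no particular endpoint is assumed):

```python
# Sketch: building the SPARQL a SKOS label-search template might expand
# to. The query uses standard SKOS/SPARQL constructs; the service that
# would execute it is not modeled here.
def label_query(text: str) -> str:
    """Return a SPARQL query selecting concepts whose prefLabel matches text."""
    return f"""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label WHERE {{
  ?concept skos:prefLabel ?label .
  FILTER(CONTAINS(LCASE(STR(?label)), LCASE("{text}")))
}}"""

q = label_query("Jurassic")
print("skos:prefLabel" in q)  # True
```

A URI template such as `/vocab/search?label=Jurassic` (hypothetical) would map onto exactly this kind of query, which is the decoupling the abstract describes.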

  12. Educational Labeling System for Atmospheres (ELSA): Python Tool Development for Archiving Under the PDS4 Standard

    NASA Astrophysics Data System (ADS)

    Neakrase, Lynn; Hornung, Danae; Sweebe, Kathrine; Huber, Lyle; Chanover, Nancy J.; Stevenson, Zena; Berdis, Jodi; Johnson, Joni J.; Beebe, Reta F.

    2017-10-01

    The Research and Analysis programs within NASA’s Planetary Science Division now require archiving of resultant data with the Planetary Data System (PDS) or an equivalent archive. The PDS Atmospheres Node is developing an online environment to assist data providers with this task. The Educational Labeling System for Atmospheres (ELSA) is being designed with Django/Python to facilitate not only communication with the PDS node, but also the process of learning, developing, submitting, and reviewing archive bundles under the new PDS4 archiving standard. Under the PDS4 standard, data are archived in bundles, collections, and basic products that form an organizational hierarchy of interconnected labels describing the data and the relationships between the data and its documentation. PDS4 labels are implemented using Extensible Markup Language (XML), an international standard for managing metadata. Potential data providers entering the ELSA environment can learn more about PDS4, plan and develop label templates, and build their archive bundles. ELSA provides an interface to tailor label templates, aiding in the creation of required internal Logical Identifiers (URN, Uniform Resource Name) and Context References (missions, instruments, targets, facilities, etc.). The underlying Django/Python code makes maintaining and updating the interface easy for our undergraduate/graduate students. The ELSA environment will soon provide an interface for using the tailored templates in a pipeline to produce entire collections of labeled products, essentially building the user’s archive bundle. Once the pieces of the archive bundle are assembled, ELSA provides options for queuing the completed bundle for peer review. The peer review process has also been streamlined for online access and tracking to help make the archiving process with PDS as transparent as possible.
We discuss the current status of ELSA and provide examples of its implementation.
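
The XML-label idea can be illustrated with a stripped-down example. This is NOT a schema-valid PDS4 label; it only shows how a logical identifier (URN) and basic metadata might be expressed in XML, with a hypothetical LID:

```python
# Stripped-down illustration of an XML metadata label carrying a logical
# identifier (URN). Element names echo PDS4 vocabulary but this is not a
# schema-valid PDS4 label.
import xml.etree.ElementTree as ET

def make_label(lid: str, title: str) -> str:
    root = ET.Element("Product_Observational")
    ident = ET.SubElement(root, "Identification_Area")
    ET.SubElement(ident, "logical_identifier").text = lid
    ET.SubElement(ident, "title").text = title
    return ET.tostring(root, encoding="unicode")

# Hypothetical LID for illustration
xml = make_label("urn:nasa:pds:example_bundle:data:obs_001",
                 "Example observation")
print("logical_identifier" in xml)  # True
```

A template-driven pipeline like the one ELSA describes would fill many such labels from a tailored template, one per product in a collection.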

  13. Neurotechnology for monitoring and restoring sensory, motor, and autonomic functions

    NASA Astrophysics Data System (ADS)

    Wu, Pae C.; Knaack, Gretchen; Weber, Douglas J.

    2016-05-01

    The rapid and exponential advances in micro- and nanotechnologies over the last decade have enabled devices that communicate directly with the nervous system to measure and influence neural activity. Many of the earliest implementations focused on restoration of sensory and motor function, but as knowledge of physiology advances and technology continues to improve in accuracy, precision, and safety, new modes of engaging with the autonomic system herald an era of health restoration that may augment or replace many conventional pharmacotherapies. DARPA's Biological Technologies Office is continuing to advance neurotechnology by investing in neural interface technologies that are effective, reliable, and safe for long-term use in humans. DARPA's Hand Proprioception and Touch Interfaces (HAPTIX) program is creating a fully implantable system that interfaces with peripheral nerves in amputees to enable natural control and sensation for prosthetic limbs. Beyond standard electrode implementations, the Electrical Prescriptions (ElectRx) program is investing in innovative approaches to minimally or non-invasively interface with the peripheral nervous system using novel magnetic, optogenetic, and ultrasound-based technologies. These new mechanisms of interrogating and stimulating the peripheral nervous system are driving towards unparalleled spatiotemporal resolution, specificity and targeting, and noninvasiveness to enable chronic, human-use applications in closed-loop neuromodulation for the treatment of disease.

  14. Interoperability through standardization: Electronic mail, and X Window systems

    NASA Technical Reports Server (NTRS)

    Amin, Ashok T.

    1993-01-01

    Since the introduction of computing machines, there have been continual advances in computer and communication technologies, which are now approaching limits. The user interface has evolved from a row of switches, to character-based interfaces using teletype and then video terminals, to the present-day graphical user interface. The next significant advances are expected to come in the availability of services, such as electronic mail and directory services, as standards for applications are developed, and in 'easy to use' interfaces, such as graphical user interfaces (for example, Windows and X Window), which are being standardized. Various proprietary electronic mail (email) systems are in use within organizations at each NASA center. Each system provides email services to users within an organization; however, support for email services across organizations and across centers exists to a varying degree and is often not easy to use. A recent NASA email initiative is intended 'to provide a simple way to send email across organizational boundaries without disruption of installed base.' The initiative calls for integration of existing organizational email systems through gateways connected by a message switch, supporting X.400 and SMTP protocols, to create a NASA-wide email system, and for implementation of NASA-wide email directory services based on the OSI standard X.500. A brief overview of MSFC efforts as part of this initiative is given. Window-based graphical user interfaces make computers easy to use. The X Window protocol was developed at the Massachusetts Institute of Technology in 1984/1985 to provide a uniform window-based interface in a distributed computing environment with heterogeneous computers. It has since become a standard supported by a number of major manufacturers. X Window systems, terminals and workstations, and X Window applications are becoming available.
However, the impact of its use on network traffic in the local area network environment is not well understood. The use of X Window systems is expected to increase at MSFC, especially for Unix-based systems. An overview of the X Window protocol is presented and its impact on network traffic is examined. It is proposed that an analytical model of X Window systems in the network environment be developed and validated through measurements used to generate application and user profiles.

  15. The ASP Sensor Network: Infrastructure for the Next Generation of NASA Airborne Science

    NASA Astrophysics Data System (ADS)

    Myers, J. S.; Sorenson, C. E.; Van Gilst, D. P.; Duley, A.

    2012-12-01

    A state-of-the-art real-time data communications network is being implemented across the NASA Airborne Science Program core platforms. Utilizing onboard Ethernet networks and satellite communications systems, it is intended to maximize the science return from both single-platform missions and complex multi-aircraft Earth science campaigns. It also provides an open platform for data visualization and synthesis software tools for use by the science instrument community. This paper describes the prototype implementations currently deployed on the NASA DC-8 and Global Hawk aircraft, and the ongoing effort to extend the capability to other science platforms. Emphasis is on the basic network architecture, the enabling hardware, and new standardized instrument interfaces. The new Mission Tools Suite, which provides a web-based user interface, will also be described, together with several example use cases of this evolving technology.

  16. Software interface for high-speed readout of particle detectors based on the CoaXPress communication standard

    NASA Astrophysics Data System (ADS)

    Hejtmánek, M.; Neue, G.; Voleš, P.

    2015-06-01

    This article is devoted to the software design and development of a high-speed readout application used for interfacing particle detectors via the CoaXPress communication standard. CoaXPress provides an asymmetric high-speed serial connection over a single coaxial cable. It uses the widely available 75 Ω BNC standard and can operate in various modes with a data throughput ranging from 1.25 Gbps up to 25 Gbps. Moreover, it supports a low-speed uplink with a fixed bit rate of 20.833 Mbps, which can be used to control and upload configuration data to the particle detector. The CoaXPress interface is an upcoming standard in medical imaging, so its use promises long-term compatibility and versatility. This work presents an example of how to develop a DAQ system for a pixel detector. For this purpose, a flexible DAQ card was developed using the XILINX Spartan 6 FPGA. The DAQ card is connected to the FireBird CXP6 Quad framegrabber, which is plugged into the PCI Express bus of a standard PC. Data transmission between the FPGA and the framegrabber card was performed over a standard coaxial cable in a communication mode with a bit rate of 3.125 Gbps. Using the Medipix2 Quad pixel detector, a frame rate of 100 fps was achieved. The front-end application makes use of the FireBird framegrabber software development kit and is suitable for data acquisition as well as control of the detector through the registers implemented in the FPGA.
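
A back-of-envelope link-budget check puts the 100 fps figure in context. CoaXPress uses 8b/10b line coding, so a 3.125 Gbps line carries at most 2.5 Gbps of payload; the frame size below assumes a 512x512-pixel frame at 16 bits/pixel, which is only a rough approximation of the Medipix2 Quad format:

```python
# Back-of-envelope link budget for the 3.125 Gbps readout mode described
# above. Assumptions: 8b/10b line coding (standard for CoaXPress at this
# rate) and an approximate frame size of 512x512 pixels at 16 bits/pixel.
line_rate = 3.125e9                  # bits/s on the coax
payload_rate = line_rate * 8 / 10    # payload after 8b/10b encoding
frame_bits = 512 * 512 * 16          # assumed frame size in bits

max_fps = payload_rate / frame_bits
print(round(max_fps))                # ~596 under these assumptions
```

Under these assumptions, the link itself could sustain several hundred frames per second, suggesting the achieved 100 fps is limited by the detector readout rather than the coaxial link.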

  17. New Approaches for DC Balanced SpaceWire

    NASA Technical Reports Server (NTRS)

    Kisin, Alex; Rakow, Glenn

    2016-01-01

    Direct Current (DC) line balanced SpaceWire is attractive for a number of reasons. Firstly, a DC balanced interface makes it possible to isolate the physical layer with either a transformer or a capacitor, achieving higher common mode voltage rejection and, in the case of a transformer, complete galvanic isolation. Secondly, it makes it possible to halve the number of conductors and transceivers in the classical SpaceWire interface by eliminating the Strobe line. Depending on the modulation scheme and the Field Programmable Gate Array (FPGA) decoder design, the clock data recovery frequency requirements may be only twice the transmit clock, or even match it. In this paper, several different implementation scenarios are discussed. Two of these scenarios are backward compatible with existing SpaceWire hardware standards except for changes at the character level. Three other scenarios, while halving the standard SpaceWire hardware components, require changes at both the character and signal levels and work with fixed rates. Other scenarios with variable data rates require an additional SpaceWire interface handshake initialization sequence.
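
The property every scheme above must maintain is a bounded running disparity of the transmitted bit stream, which is what allows transformer or capacitor coupling. A minimal sketch of checking that property (no particular encoding is modeled):

```python
# Sketch: computing the running disparity of a bit stream, the DC-balance
# property a transformer- or capacitor-coupled link must maintain.
# No specific SpaceWire encoding is modeled here.
def running_disparity(bits):
    """Return the cumulative disparity trace: 1 counts +1, 0 counts -1."""
    disp, trace = 0, []
    for b in bits:
        disp += 1 if b else -1
        trace.append(disp)
    return trace

balanced = [1, 0, 1, 0, 0, 1, 0, 1]
trace = running_disparity(balanced)
print(trace[-1])  # 0 -> equal numbers of ones and zeros overall
```

A practical encoder would additionally bound the instantaneous disparity (as 8b/10b does with its +/-1 running disparity rule), not just the long-run average.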

  18. High-Rate Digital Receiver Board

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Bialas, Thomas; Brambora, Clifford; Fisher, David

    2004-01-01

    A high-rate digital receiver (HRDR) implemented as a peripheral component interconnect (PCI) board has been developed as a prototype of compact, general-purpose, inexpensive, potentially mass-producible data-acquisition interfaces between telemetry systems and personal computers. The installation of this board in a personal computer together with an analog preprocessor enables the computer to function as a versatile, high-rate telemetry-data-acquisition and demodulator system. The prototype HRDR PCI board can handle data at rates as high as 600 megabits per second, in a variety of telemetry formats, transmitted by diverse phase-modulation schemes that include binary phase-shift keying and various forms of quadrature phase-shift keying. Costing less than $25,000 (as of 2003), the prototype HRDR PCI board supplants multiple racks of older equipment that, when new, cost over $500,000. Just as the development of standard network-interface chips has contributed to the proliferation of networked computers, it is anticipated that the development of standard chips based on the HRDR could contribute to reductions in size and cost and increases in performance of telemetry systems.

  19. An implementation of the programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1981-01-01

    A particular implementation of the programming structural synthesis system (PROSSS) is described. This software system combines a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. These programs are combined using standard command language features of modern computer operating systems. PROSSS is explained in general with respect to this implementation, along with the steps for preparing the programs and input data. Each component of the system is described in detail with annotated listings for clarification. The components include options, procedures, programs and subroutines, and data files as they pertain to this implementation. An example exercising each option in this implementation is presented to allow the user to anticipate the type of results that might be expected.

  20. Architecture for Survivable System Processing (ASSP)

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.

    1991-11-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture, which is being developed to apply new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  1. Architecture for Survivable System Processing (ASSP)

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1991-01-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture, which is being developed to apply new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  2. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  3. An MPI-IO interface to HPSS

    NASA Technical Reports Server (NTRS)

    Jones, Terry; Mark, Richard; Martin, Jeanne; May, John; Pierce, Elsie; Stanberry, Linda

    1996-01-01

    This paper describes an implementation of the proposed MPI-IO (Message Passing Interface - Input/Output) standard for parallel I/O. Our system uses third-party transfer to move data over an external network between the processors where it is used and the I/O devices where it resides. Data travels directly from source to destination, without the need for shuffling it among processors or funneling it through a central node. Our distributed server model lets multiple compute nodes share the burden of coordinating data transfers. The system is built on the High Performance Storage System (HPSS), and a prototype version runs on a Meiko CS-2 parallel computer.

  4. A Reference Implementation of the OGC CSW EO Standard for the ESA HMA-T project

    NASA Astrophysics Data System (ADS)

    Bigagli, Lorenzo; Boldrini, Enrico; Papeschi, Fabrizio; Vitale, Fabrizio

    2010-05-01

    This work was developed in the context of the ESA Heterogeneous Missions Accessibility (HMA) project, whose main objective is to involve the stakeholders, namely national space agencies and satellite or mission owners and operators, in a harmonization and standardization process of their ground segment services and related interfaces. Among the HMA objectives were the specification, conformance testing, and experimentation of two Extension Packages (EPs) of the ebRIM Application Profile (AP) of the OGC Catalog Service for the Web (CSW) specification: the Earth Observation Products (EO) EP (OGC 06-131) and the Cataloguing of ISO Metadata (CIM) EP (OGC 07-038). Our contributions have included the development and deployment of Reference Implementations (RIs) for both of the above specifications, and their integration with the ESA Service Support Environment (SSE). The RIs are based on the GI-cat framework, an implementation of a distributed catalog service able to query disparate Earth and Space Science data sources (e.g. OGC Web Services, Unidata THREDDS) and to expose several standard interfaces for data discovery (e.g. OGC CSW ISO AP). Following our initial planning, the GI-cat framework was extended to expose the CSW.ebRIM-CIM and CSW.ebRIM-EO interfaces, and to distribute queries to CSW.ebRIM-CIM and CSW.ebRIM-EO data sources. We expected that a mapping strategy would suffice for accommodating CIM, but this proved to be impractical during implementation. Hence, a model extension strategy was eventually implemented for both the CIM and EO EPs, and the GI-cat federal model was enhanced in order to support the underlying ebRIM AP. This work has provided us with new insights into the different data models for geospatial data, and the technologies for their implementation. The extension is used by suitable CIM and EO profilers (front-end mediator components) and accessors (back-end mediator components), which relate ISO 19115 concepts to EO and CIM ones.
Moreover, a mapping to the GI-cat federal model was developed for each EP (quite limited for EO; complete for CIM), in order to enable the discovery of resources through any of the GI-cat profilers. The query manager was also improved. GI-cat-EO and -CIM installation packages were made available for distribution, and two RI instances were deployed on the Amazon EC2 facility (plus an ad-hoc instance returning incorrect control data). Integration activities of the EO RI with the ESA SSE Portal for Earth Observation Products were also successfully carried out. During our work, we contributed feedback and comments to the CIM and EO EP specification working groups. Our contributions resulted in version 0.2.5 of the EO EP, recently approved as an OGC standard, and were useful in consolidating version 0.1.11 of the CIM EP (still under development).

  5. Model driven development of clinical information systems using openEHR.

    PubMed

    Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, Jim

    2011-01-01

    openEHR and the recent international standard (ISO 13606) define a model-driven software development methodology for health information systems. However, there is little evidence in the literature describing implementations, especially for desktop clinical applications. This paper presents an implementation pathway using .Net/C# technology for Microsoft Windows desktop platforms. An endoscopy reporting application driven by openEHR Archetypes and Templates has been developed. A set of novel GUI directives has been defined and presented which guides the automatic graphical user interface generator to render widgets properly. We also describe the development steps and important design decisions, from modelling to the final software product. This may provide guidance for other developers and form the evidence required for the adoption of these standards by vendors and national programs alike.
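
    The GUI-directive mechanism described above can be pictured as a widget dispatch: the form generator derives a default widget from each archetype node's reference-model type, and a directive overrides that default. A minimal sketch; the directive values and widget names are illustrative stand-ins, not the actual openEHR/.Net implementation.

    ```python
    # Illustrative widget dispatch for an automatic GUI generator.
    # RM types follow openEHR data-value naming; the widget names and
    # the directive mechanism are hypothetical stand-ins.

    DEFAULT_WIDGETS = {
        "DV_TEXT": "TextBox",
        "DV_CODED_TEXT": "ComboBox",
        "DV_QUANTITY": "NumericUpDown",
        "DV_BOOLEAN": "CheckBox",
        "DV_DATE_TIME": "DateTimePicker",
    }

    def choose_widget(rm_type, directive=None):
        """Pick a widget for an archetype node; a GUI directive wins."""
        if directive is not None:
            return directive
        return DEFAULT_WIDGETS.get(rm_type, "TextBox")

    # A directive can, e.g., force radio buttons for a short coded list.
    assert choose_widget("DV_CODED_TEXT") == "ComboBox"
    assert choose_widget("DV_CODED_TEXT", "RadioGroup") == "RadioGroup"
    ```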

  6. Visible light communications for the implementation of internet-of-things

    NASA Astrophysics Data System (ADS)

    Chen, Chia-Wei; Wang, Wei-Chung; Wu, Jhao-Ting; Chen, Hung-Yu; Liang, Kevin; Wei, Liang-Yu; Hsu, Yung; Hsu, Chin-Wei; Chow, Chi-Wai; Yeh, Chien-Hung; Liu, Yang; Hsieh, Hsiang-Chin; Chen, Yen-Ting

    2016-06-01

    It is predicted that the number of internet-of-things (IoT) devices will exceed 28 billion by 2020. Given the shortage of conventional radio-frequency spectrum, using visible light communication (VLC) for IoT is promising. IoT networks may require only very low data rates for transmitting sensing or identity information. Implementing a VLC link on top of existing computer communication standards and interfaces is therefore important. Among these standards, the universal asynchronous receiver/transmitter (UART) is very popular. We propose and demonstrate a VLC-over-UART system. Bit error rate analysis is performed, and the different components and modules used in the proposed VLC-over-UART system are discussed. Finally, we demonstrate real-time simultaneous temperature, humidity, and illuminance monitoring using the proposed VLC link.
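
    The byte-level framing such a VLC-over-UART link inherits from the UART convention can be sketched independently of the optical front end. The sketch below simulates plain 8N1 framing (start bit, 8 data bits LSB-first, stop bit); the payload is an illustrative sensor reading.

    ```python
    def uart_frame(byte):
        """8N1 framing: start bit (0), 8 data bits LSB-first, stop bit (1)."""
        return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

    def uart_deframe(bits):
        """Recover one byte from a 10-bit frame, checking start/stop bits."""
        if bits[0] != 0 or bits[9] != 1:
            raise ValueError("framing error")
        return sum(b << i for i, b in enumerate(bits[1:9]))

    payload = b"T=25.4"  # e.g. a low-rate temperature report
    line = [bit for byte in payload for bit in uart_frame(byte)]
    decoded = bytes(uart_deframe(line[i:i + 10]) for i in range(0, len(line), 10))
    assert decoded == payload
    ```

    On the optical side, each bit would modulate the LED on/off; the framing logic is unchanged, which is exactly why reusing UART keeps the IoT end nodes simple.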

  7. Space Telecommunications Radio Systems (STRS) Hardware Architecture Standard: Release 1.0 Hardware Section

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Kacpura, Thomas J.; Smith, Carl R.; Liebetreu, John; Hill, Gary; Mortensen, Dale J.; Andro, Monty; Scardelletti, Maximilian C.; Farrington, Allen

    2008-01-01

    This report defines a hardware architecture approach for software-defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general-purpose processors, digital signal processors, field programmable gate arrays, and application-specific integrated circuits (ASICs) in addition to flexible and tunable radiofrequency front ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that compose a typical communication radio. This report describes the architecture details, the module definitions, the typical functions on each module, and the module interfaces. Tradeoffs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify a physical implementation internally on each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.

  8. Single board system for fuzzy inference

    NASA Technical Reports Server (NTRS)

    Symon, James R.; Watanabe, Hiroyuki

    1991-01-01

    The very large scale integration (VLSI) implementation of a fuzzy logic inference mechanism allows the use of rule-based control and decision making in demanding real-time applications. Researchers designed a full custom VLSI inference engine. The chip was fabricated using CMOS technology. The chip consists of 688,000 transistors of which 476,000 are used for RAM memory. The fuzzy logic inference engine board system incorporates the custom designed integrated circuit into a standard VMEbus environment. The Fuzzy Logic system uses Transistor-Transistor Logic (TTL) parts to provide the interface between the Fuzzy chip and a standard, double height VMEbus backplane, allowing the chip to perform application process control through the VMEbus host. High level C language functions hide details of the hardware system interface from the applications level programmer. The first version of the board was installed on a robot at Oak Ridge National Laboratory in January of 1990.
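
    The inference that such a chip evaluates in hardware can be sketched in software. This is a generic Mamdani-style example (triangular memberships, min-max inference, centroid defuzzification) under assumed rules; it is not the board's actual rule set or its C interface.

    ```python
    def triangle(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Assumed rules: IF temp IS hot THEN fan IS fast; IF temp IS warm THEN fan IS slow.
    def infer(temp):
        hot = triangle(temp, 25, 35, 45)
        warm = triangle(temp, 10, 20, 30)
        # Min-max inference over a discretized output universe (fan speed 0-100),
        # then centroid defuzzification.
        xs = range(0, 101)
        agg = [max(min(hot, triangle(x, 60, 80, 100)),
                   min(warm, triangle(x, 0, 20, 40))) for x in xs]
        den = sum(agg)
        return sum(x * m for x, m in zip(xs, agg)) / den if den else 0.0

    # Hotter input pushes the defuzzified fan speed higher.
    assert infer(40) > infer(22)
    ```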

  9. Implementation of medical monitor system based on networks

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Cao, Yuzhen; Zhang, Lixin; Ding, Mingshi

    2006-11-01

    In this paper, the development trend of medical monitor systems is analyzed; portability and network functionality are becoming increasingly popular across all kinds of medical monitoring devices. The architecture of a networked medical monitor system solution is provided, and the design and implementation details of the medical monitor terminal, the monitor center software, a distributed medical database and two kinds of medical information terminals are discussed in particular. A Rabbit3000 system is used in the medical monitor terminal to implement secure data transfer over the network, the human-machine interface, power management and the DSP interface, while a DSP chip (TMS5402) is used for signal analysis and data compression. The distributed medical database is designed for the hospital center according to the DICOM information model and the HL7 standard. A pocket medical information terminal based on an ARM9 embedded platform was also developed to interact with the center database over the network. Two kernels based on WinCE were customized and corresponding terminal software was developed for nurses' routine care and doctors' auxiliary diagnosis. An invention patent for the monitor terminal has now been approved, and manufacturing and clinical test plans are scheduled. Patent applications have also been filed for the two medical information terminals.

  10. A generic interface element for COMET-AR

    NASA Technical Reports Server (NTRS)

    Mccleary, Susan L.; Aminpour, Mohammad A.

    1995-01-01

    The implementation of an interface element capability within the COMET-AR software system is described. The report is intended for use by both users of currently implemented interface elements and developers of new interface element formulations. Guidance on the use of COMET-AR is given. A glossary is provided as an Appendix to this report for readers unfamiliar with the jargon of COMET-AR. A summary of the currently implemented interface element formulation is presented in Section 7.3 of this report.

  11. Architectural approaches for HL7-based health information systems implementation.

    PubMed

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology, approaching different aspects of systems architecture such as business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue in any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by HIS-DF and supported by HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
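
    The pipe-delimited HL7 v2 messages exchanged in the point-to-point and message server models can be illustrated with a few lines of parsing. Field positions follow HL7 v2 conventions (in the MSH segment the field separator itself is MSH-1, so after splitting, the second field is the sending application); the message content below is invented for illustration.

    ```python
    def parse_segment(seg):
        """Split one HL7 v2 segment into its ID and pipe-delimited fields."""
        fields = seg.split("|")
        return fields[0], fields[1:]

    # A fabricated ADT message header from a lab system to a surveillance system.
    msg = "MSH|^~\\&|LAB|HOSP_A|SURV|HEALTH_DEPT|20100101120000||ADT^A01|00001|P|2.5"
    seg_id, fields = parse_segment(msg)
    assert seg_id == "MSH"
    assert fields[1] == "LAB"    # MSH-3: sending application
    assert fields[3] == "SURV"   # MSH-5: receiving application
    ```

    A message server sits between such endpoints and routes on exactly these header fields; the mediator model adds semantic transformation on top of the routing.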

  12. High-precision shape representation using a neuromorphic vision sensor with synchronous address-event communication interface

    NASA Astrophysics Data System (ADS)

    Belbachir, A. N.; Hofstätter, M.; Litzenberger, M.; Schön, P.

    2009-10-01

    A synchronous communication interface for neuromorphic temporal contrast vision sensors is described and evaluated in this paper. This interface has been designed for ultra-high-speed synchronous arbitration of a temporal contrast image sensor's pixel data. Enabling high-precision timestamping, this system demonstrates its uniqueness for handling peak data rates while preserving the main advantage of neuromorphic electronic systems, namely their high and accurate temporal resolution. Based on a synchronous arbitration concept, the timestamping has a resolution of 100 ns. Both synchronous and (state-of-the-art) asynchronous arbiters have been implemented in a neuromorphic dual-line vision sensor chip in a standard 0.35 µm CMOS process. The performance analysis of both arbiters and the advantages of synchronous arbitration over asynchronous arbitration in capturing high-speed objects are discussed in detail.
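
    The 100 ns timestamping resolution quoted above amounts to quantizing each address-event's arrival time onto a fixed tick grid, which the arbiter does in hardware; a trivial sketch of the quantization itself:

    ```python
    TICK_NS = 100  # timestamp resolution of the synchronous arbiter

    def timestamp(event_time_ns):
        """Quantize an address-event's arrival time to 100 ns ticks."""
        return event_time_ns // TICK_NS

    # Events 250 ns apart receive distinct timestamps...
    assert timestamp(1000) != timestamp(1250)
    # ...while events inside one 100 ns window share a timestamp.
    assert timestamp(1000) == timestamp(1090)
    ```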

  13. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR

    NASA Technical Reports Server (NTRS)

    Gibson, J.

    1994-01-01

    The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6.
It consists of two programs, a simulation program and a user-interface program. The simulation program requires the SLAM II simulation library from Pritsker and Associates, W. Lafayette IN; the user interface is implemented using the Ingres database manager from Relational Technology, Inc. Information about running the simulation program without the user-interface program is contained in the documentation. The memory requirement is 129,024 bytes. LANES was developed in 1988.

  14. Easy access to geophysical data sets at the IRIS Data Management Center

    NASA Astrophysics Data System (ADS)

    Trabant, C.; Ahern, T.; Suleiman, Y.; Karstens, R.; Weertman, B.

    2012-04-01

    At the IRIS Data Management Center (DMC) we primarily manage seismological data, but we also hold other geophysical data sets for related fields, including atmospheric pressure and gravity measurements, as well as higher-level data products derived from raw data. With a few exceptions, all data managed by the IRIS DMC are openly available, and we serve an international research audience. These data are available via a number of different mechanisms: batch requests submitted through email, web interfaces, near-real-time streams and, more recently, web services. Our initial suite of web services offers access to almost all of the raw data and associated metadata managed at the DMC. In addition, we offer services that apply processing to the data before they are sent to the user. Web service technologies are ubiquitous, with support available in nearly every programming language and operating system. By their nature web services are programmatic interfaces, but by choosing a simple subset of web service methods we make our data available to a very broad user base. These interfaces are usable by professional developers as well as non-programmers. Whenever possible we chose open and recognized standards. The data returned to the user are in a variety of formats depending on type, including FDSN SEED, QuakeML, StationXML, ASCII, PNG images and, in some cases where no appropriate standard could be found, a customized XML format. To promote easy access to seismological data for all researchers, we are coordinating with international partners to define web service interface standards. Additionally, we are working with key partners in Europe to complete the initial implementation of these services. Once a standard has been adopted and implemented at multiple data centers, researchers will be able to use the same request tools to access data across multiple data centers.
The web services that apply on-demand processing to requested data include the capability to apply instrument corrections and format translations, which ultimately allows more researchers to use the data without knowledge of specific data and metadata formats. In addition to serving as a new platform on top of which research scientists will build advanced processing tools, we anticipate that these services will result in more data being accessible to more users.
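
    The programmatic simplicity claimed above comes from the fact that a request is just an HTTP URL. A sketch of composing a query following the FDSN dataselect web service convention; the parameter names follow the public FDSN web service specification, while the station and time values are illustrative.

    ```python
    from urllib.parse import urlencode

    # Base endpoint of the FDSN dataselect service at the IRIS DMC.
    BASE = "http://service.iris.edu/fdsnws/dataselect/1/query"

    params = {
        "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
        "starttime": "2012-01-01T00:00:00",
        "endtime": "2012-01-01T01:00:00",
    }
    url = BASE + "?" + urlencode(params)
    # The same URL works from a browser, curl, or any HTTP client library,
    # which is what makes the interface usable by non-programmers too.
    ```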

  15. Towards a High-Performance and Robust Implementation of MPI-IO on Top of GPFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prost, J.P.; Tremann, R.; Blackwore, R.

    2000-01-11

    MPI-IO/GPFS is a prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS), with prototyped extensions, as the underlying file system. This paper describes the features of the prototype which support its high performance and robustness. The use of hints at the file system level and at the MPI-IO level allows tailoring the use of the file system to the application's needs. Error handling in collective operations provides robust error reporting and deadlock prevention in case of returned errors.

  16. A network collaboration implementing technology to improve medication dispensing and administration in critical access hospitals.

    PubMed

    Wakefield, Douglas S; Ward, Marcia M; Loes, Jean L; O'Brien, John

    2010-01-01

    We report how seven independent critical access hospitals collaborated with a rural referral hospital to standardize workflow policies and procedures while jointly implementing the same health information technologies (HITs) to enhance medication care processes. The study hospitals implemented the same electronic health record, computerized provider order entry, pharmacy information systems, automated dispensing cabinets (ADC), and barcode medication administration systems. We conducted interviews and examined project documents to explore factors underlying the successful implementation of ADC and barcode medication administration across the network hospitals. These included a shared culture of collaboration; strategic sequencing of HIT component implementation; interface among HIT components; strategic placement of ADCs; disciplined use and sharing of workflow analyses linked with HIT applications; planning for workflow efficiencies; acquisition of adequate supply of HIT-related devices; and establishing metrics to monitor HIT use and outcomes.

  17. The GEOSS Component and Service Registry

    NASA Astrophysics Data System (ADS)

    Di, L.; Bai, Y.; Shen, D.; Shao, Y.; Shrestha, R.; Wang, H.; Nebert, D. D.

    2011-12-01

    Petabytes of Earth science data have been accumulated through space- and air-borne Earth observation programs during the last several decades. The data are valuable both scientifically and socioeconomically. The value of these data could be increased significantly if the data from these programs could be easily discovered, accessed, integrated, and analyzed. The Global Earth Observation System of Systems (GEOSS) is addressing this need. Coordinated by the Group on Earth Observations (GEO), a voluntary partnership of 86 governments, the European Commission, and 61 intergovernmental, international, and regional organizations, work on implementing GEOSS has proceeded for a number of years. After four years of international collaboration, the GEOSS Common Infrastructure (GCI) has been established. The GCI consists of the Standards and Interoperability Registry (SIR), the Component and Service Registry (CSR), the GEO clearinghouse, and the GEO Portal. The SIR maintains the list of public standards recognized by GEO. The CSR provides a centralized registry for available Earth Observation resources. The GEO clearinghouse works as a single search facility for GEOSS-wide resources, and the GEO Portal provides an integrated Web-based interface for users. Since January 2007, researchers at CSISS, GMU have collaborated with officials from the Federal Geographic Data Committee (FGDC) on designing, implementing, maintaining, and upgrading the CSR. Currently the CSR provides the following capabilities for data providers: user registration, resource registration, and service interface registration. CSR clients can discover the resources registered in the CSR through the OGC Catalogue Service for the Web (CSW), UDDI, and other standard interfaces. During the resource registration process, providers may define detailed descriptive information for their resources, in particular the targeted societal benefit area and sub-areas of focus, and the targeted critical Earth Observations.
The service interfaces to these resources can also be registered with the CSR, where standards reference information may be supplied. Providers may also self-nominate their resources to be part of the GEOSS-DataCORE. The GEOSS-DataCORE was initiated early this year to establish a distributed pool of documented, well-calibrated, and persistently available key Earth observation datasets, contributed by the GEO community on the basis of full and open exchange (at no more than the cost of reproduction and distribution) and unrestricted access. In this presentation, the CSR system architecture, use cases, and implementation details will be presented and the registration processes illustrated. In addition, how these registered resources can be further discovered and accessed by GEOSS human users and machine clients will also be discussed. Such information is valuable for agencies seeking to promote their data products through GEOSS. It could also help scientists advertise their research products and initiate new integrative and cooperative efforts within the GEOSS community.
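
    Discovery against a CSW endpoint such as the CSR's is driven by small XML requests. A sketch of building a minimal GetRecords request under the OGC CSW 2.0.2 namespace; the element and attribute names follow the CSW specification, while the element-set choice is illustrative.

    ```python
    import xml.etree.ElementTree as ET

    # OGC CSW 2.0.2 namespace, as used by CSW catalog endpoints.
    CSW = "http://www.opengis.net/cat/csw/2.0.2"
    ET.register_namespace("csw", CSW)

    # Minimal GetRecords request body: query all csw:Record entries,
    # returning the "brief" element set for each hit.
    req = ET.Element(f"{{{CSW}}}GetRecords",
                     {"service": "CSW", "version": "2.0.2",
                      "resultType": "results"})
    query = ET.SubElement(req, f"{{{CSW}}}Query", {"typeNames": "csw:Record"})
    ET.SubElement(query, f"{{{CSW}}}ElementSetName").text = "brief"
    xml_body = ET.tostring(req, encoding="unicode")
    ```

    A client would POST this body to the catalog's CSW endpoint; adding an OGC Filter to the Query element narrows the search, e.g. by societal benefit area keyword.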

  18. Media independent interface. Interface control document

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A Media Independent Interface (MII) is specified using current industry standards. The MII is described in hierarchical fashion. At its base are IEEE/International Standards Organization (ISO) documents (standards) which describe the functionality of the software modules or layers and their interconnection. These documents describe primitives which are to cross the MII. The intent of the MII is to provide a universal interface to one or more Media Access Controls (MACs) for the Logical Link Controller and Station Manager. This interface includes both a standardized electrical and mechanical interface and a standardized functional specification which defines the services expected from the MAC.

  19. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE PAGES

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto; ...

    2017-09-15

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  20. ELSI: A unified software interface for Kohn-Sham electronic structure solvers

    NASA Astrophysics Data System (ADS)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker

    2018-01-01

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
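
    The "unified interface over interchangeable solvers" design in points (a)-(b) can be pictured as a thin dispatch facade. ELSI itself is a Fortran/C library; the classes and names below are hypothetical stand-ins used only to illustrate the pattern.

    ```python
    # Facade pattern sketch: one entry point, swappable solver backends.
    # The solver classes are stubs, not ELSI's real API.

    class ElpaStub:
        def solve(self, problem):
            return f"dense eigensolution of {problem}"

    class PexsiStub:
        def solve(self, problem):
            return f"pole-expansion solution of {problem}"

    SOLVERS = {"ELPA": ElpaStub, "PEXSI": PexsiStub}

    def solve(problem, solver="ELPA"):
        """Single entry point; the backend default is a tunable choice."""
        return SOLVERS[solver]().solve(problem)

    assert "dense" in solve("H C = S C E")
    assert "pole" in solve("H C = S C E", solver="PEXSI")
    ```

    The electronic structure code calls only `solve`, so switching from a cubic-scaling dense solver to PEXSI for a large system is a one-argument change rather than a code rewrite, which is the point of the unified interface.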

  1. SVM Classifier - a comprehensive java interface for support vector machine classification of microarray data.

    PubMed

    Pirooznia, Mehdi; Deng, Youping

    2006-12-12

    Graphical user interface (GUI) software promotes novelty by allowing users to extend the functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification and prediction. The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of Support Vector Machine. We implemented the Java interface using standard Swing libraries. We used sample data from a breast cancer study to test classification accuracy. We achieved 100% accuracy in classification among the BRCA1-BRCA2 samples with the RBF kernel of SVM. We have developed a Java GUI application that allows SVM users to perform SVM training, classification and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance. The SVM Classifier is available at http://mfgn.usm.edu/ebl/svm/.
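
    The radial basis kernel reported as best-performing is K(x, z) = exp(-gamma * ||x - z||^2), where gamma corresponds to LIBSVM's -g parameter. A sketch with illustrative expression vectors:

    ```python
    import math

    def rbf_kernel(x, z, gamma=0.5):
        """RBF kernel K(x, z) = exp(-gamma * squared Euclidean distance)."""
        sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
        return math.exp(-gamma * sq_dist)

    # Identical expression profiles have maximal similarity (1.0);
    # similarity decays smoothly with distance.
    assert rbf_kernel([1.0, 2.0], [1.0, 2.0]) == 1.0
    assert rbf_kernel([1.0, 2.0], [3.0, 0.0]) < 1.0
    ```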

  2. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  3. Chemical Transformation System: Cloud Based ...

    EPA Pesticide Factsheets

    Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not contain the proprietary chemicals that environmental regulators must consider. We are building the Chemical Transformation System (CTS) to facilitate model parameterization and analysis. CTS integrates a number of physicochemical property calculators into the system including EPI Suite, SPARC, TEST and ChemAxon. The calculators are heterogeneous in their scientific methodologies, technology implementations and deployment stacks. CTS also includes a chemical transformation processing engine that has been loaded with reaction libraries for human biotransformation, abiotic reduction and abiotic hydrolysis. CTS implements a common interface for the disparate calculators accepting molecular identifiers (SMILES, IUPAC, CAS#, user-drawn molecule) before submission for processing. To make the system as accessible as possible and provide a consistent programmatic interface, we wrapped the calculators in a standardized RESTful Application Programming Interface (API) which makes it capable of servicing a much broader spectrum of clients without constraints to interoperability such as operating system or programming language. CTS is hosted in a shared cloud environment, the Quantitative Environmental

  4. Design and implementation of an inter-agency, multi-mission space flight operations network interface

    NASA Technical Reports Server (NTRS)

    Byrne, R.; Scharf, M.; Doan, D.; Liu, J.; Willems, A.

    2004-01-01

    An advanced network interface was designed and implemented by a team from the Jet Propulsion Lab with support from the European Space Operations Center. This poster shows the requirements for the interface, the design, the topology, the testing and lessons learned from the whole implementation.

  5. A comparison of high-speed links, their commercial support and ongoing R&D activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, H.L.; Barsotti, E.; Zimmermann, S.

    Technological advances and a demanding market have forced the development of higher bandwidth communication standards for networks, data links and busses. Most of these emerging standards are gathering enough momentum that their widespread availability and lower prices are anticipated. The hardware and software that support the physical media for most of these links is currently available, allowing the user community to implement fairly high-bandwidth data links and networks with commercial components. Also, switches needed to support these networks are available or being developed. The commercial support of high-bandwidth data links, networks and switching fabrics provides a powerful base for the implementation of high-bandwidth data acquisition systems. A large data acquisition system like the one for the Solenoidal Detector Collaboration (SDC) at the SSC can benefit from links and networks that support an integrated systems engineering approach, for initialization, downloading, diagnostics, monitoring, hardware integration and event data readout. The issue that our current work addresses is the possibility of having a channel/network that satisfies the requirements of an integrated data acquisition system. In this paper we present a brief description of high-speed communication links and protocols that we consider of interest for high energy physics: High Performance Parallel Interface (HIPPI), Serial HIPPI, Fibre Channel (FC) and Scalable Coherent Interface (SCI). In addition, the initial work required to implement an SDC-like data acquisition system is described.

  6. A comparison of high-speed links, their commercial support and ongoing R&D activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, H.L.; Barsotti, E.; Zimmermann, S.

    Technological advances and a demanding market have forced the development of higher bandwidth communication standards for networks, data links and busses. Most of these emerging standards are gathering enough momentum that their widespread availability and lower prices are anticipated. The hardware and software that support the physical media for most of these links is currently available, allowing the user community to implement fairly high-bandwidth data links and networks with commercial components. Also, switches needed to support these networks are available or being developed. The commercial support of high-bandwidth data links, networks and switching fabrics provides a powerful base for the implementation of high-bandwidth data acquisition systems. A large data acquisition system like the one for the Solenoidal Detector Collaboration (SDC) at the SSC can benefit from links and networks that support an integrated systems engineering approach, for initialization, downloading, diagnostics, monitoring, hardware integration and event data readout. The issue that our current work addresses is the possibility of having a channel/network that satisfies the requirements of an integrated data acquisition system. In this paper we present a brief description of high-speed communication links and protocols that we consider of interest for high energy physics: High Performance Parallel Interface (HIPPI), Serial HIPPI, Fibre Channel (FC) and Scalable Coherent Interface (SCI). In addition, the initial work required to implement an SDC-like data acquisition system is described.

  7. A Browser-Based Multi-User Working Environment for Physicists

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.

    2014-06-01

    Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and disburdens the individual physicist from installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML, JavaScript and extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in biweekly homework for a third-year physics course with more than 100 students. We discuss the system deployment and the evaluation by the students.

  8. COTS-Based Fault Tolerance in Deep Space: Qualitative and Quantitative Analyses of a Bus Network Architecture

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalai, Leon

    2000-01-01

    Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost, as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of a system comprising COTS products and standards that are not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, even though these standard features may not have been originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.

  9. Chapter 3. Coordination and collaboration with interface units. Recommendations and standard operating procedures for intensive care unit and hospital preparations for an influenza epidemic or mass disaster.

    PubMed

    Joynt, Gavin M; Loo, Shi; Taylor, Bruce L; Margalit, Gila; Christian, Michael D; Sandrock, Christian; Danis, Marion; Leoniv, Yuval; Sprung, Charles L

    2010-04-01

    To provide recommendations and standard operating procedures (SOPs) for intensive care unit (ICU) and hospital preparations for an influenza pandemic or mass disaster with a specific focus on enhancing coordination and collaboration between the ICU and other key stakeholders. Based on a literature review and expert opinion, a Delphi process was used to define the essential topics including coordination and collaboration. Key recommendations include: (1) establish an Incident Management System with Emergency Executive Control Groups at facility, local, regional/state or national levels to exercise authority and direction over resource use and communications; (2) develop a system of communication, coordination and collaboration between the ICU and key interface departments within the hospital; (3) identify key functions or processes requiring coordination and collaboration, the most important of these being manpower and resource utilization (surge capacity) and re-allocation of personnel, equipment and physical space; (4) develop processes to allow smooth inter-departmental patient transfers; (5) creating systems and guidelines is not sufficient; it is important to: (a) identify the roles and responsibilities of key individuals necessary for the implementation of the guidelines; (b) ensure that these individuals are adequately trained and prepared to perform their roles; (c) ensure adequate equipment to allow key coordination and collaboration activities; (d) ensure an adequate physical environment to allow staff to properly implement guidelines; (6) trigger events for determining a crisis should be defined. Judicious planning and adoption of protocols for coordination and collaboration with interface units are necessary to optimize outcomes during a pandemic.

  10. Accelerator controls at CERN: Some converging trends

    NASA Astrophysics Data System (ADS)

    Kuiper, B.

    1990-08-01

    CERN's growing services to the high-energy physics community with frozen resources have led to the implementation of "Technical Boards", mandated to assist the management by making recommendations for rationalizations in various technological domains. The Board on Process Control and Electronics for Accelerators, TEBOCO, has emphasized four main lines which might yield economy in resources. First, a common architecture for accelerator controls has been agreed between the three accelerator divisions. Second, a common hardware/software kit has been defined, from which the large majority of future process interfacing may be composed. A support service for this kit is an essential part of the plan. Third, high-level protocols have been developed for standardizing access to process devices. They derive from agreed standard models of the devices and involve a standard control message. This should ease application development and mobility of equipment. Fourth, a common software engineering methodology and a commercial package of application development tools have been adopted. Some rationalization in the field of the man-machine interface and in matters of synchronization is also under way.

  11. The new Planetary Science Archive (PSA): Exploration and discovery of scientific datasets from ESA's planetary missions

    NASA Astrophysics Data System (ADS)

    Martinez, Santa; Besse, Sebastien; Heather, Dave; Barbarisi, Isa; Arviset, Christophe; De Marchi, Guido; Barthelemy, Maud; Docasal, Ruben; Fraga, Diego; Grotheer, Emmanuel; Lim, Tanya; Macfarlane, Alan; Rios, Carlos; Vallejo, Fran; Saiz, Jaime; ESDC (European Space Data Centre) Team

    2016-10-01

    The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces at http://archives.esac.esa.int/psa. All datasets are scientifically peer-reviewed by independent scientists, and are compliant with the Planetary Data System (PDS) standards. The PSA is currently implementing a number of significant improvements, mostly driven by the evolution of the PDS standard and the growing need for better interfaces and advanced applications to support science exploitation. The newly designed PSA will enhance the user experience and will significantly reduce the complexity for users to find their data, promoting one-click access to the scientific datasets, with more specialised views when needed. This includes better integration with planetary GIS analysis tools and planetary interoperability services (search and retrieve data, supporting e.g. PDAP, EPN-TAP). It will also be up to date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's ExoMars and upcoming BepiColombo missions. Users will have direct access to documentation, information and tools that are relevant to the scientific use of the dataset, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. A login mechanism will provide additional functionalities to aid users in their searches (e.g. saving queries, managing default views). This contribution will introduce the new PSA, its key features and access interfaces.

  12. Implementation and Evaluation of Four Interoperable Open Standards for the Internet of Things.

    PubMed

    Jazayeri, Mohammad Ali; Liang, Steve H L; Huang, Chih-Yuan

    2015-09-22

    Recently, researchers are focusing on a new use of the Internet called the Internet of Things (IoT), in which enabled electronic devices can be remotely accessed over the Internet. As the realization of the IoT concept is still in its early stages, manufacturers of Internet-connected devices and IoT web service providers are defining their own proprietary protocols based on their targeted applications. Consequently, the IoT becomes heterogeneous in terms of hardware capabilities and communication protocols. Addressing these heterogeneities by following open standards is a necessary step toward communicating with various IoT devices. In this research, we assess the feasibility of applying existing open standards on resource-constrained IoT devices. The standard protocols developed in this research are OGC PUCK over Bluetooth, TinySOS, SOS over CoAP, and the OGC SensorThings API. We believe that by hosting open standard protocols on IoT devices, not only do the devices become self-describable, self-contained, and interoperable, but innovative applications can also be easily developed with standardized interfaces. In addition, we use memory consumption, request message size, response message size, and response latency to benchmark the efficiency of the implemented protocols. In all, this research presents and evaluates standards-based solutions to better understand the feasibility of applying existing standards to the IoT vision.
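
    The four efficiency metrics named above can be mimicked with a minimal harness; the handler, request string, and observation payload below are illustrative placeholders, not the paper's implementations:

```python
import json
import time

def benchmark(handler, request):
    """Measure request size, response size, and latency for one protocol handler."""
    start = time.perf_counter()
    response = handler(request)
    latency = time.perf_counter() - start
    return {
        "request_bytes": len(request.encode("utf-8")),
        "response_bytes": len(response.encode("utf-8")),
        "latency_s": latency,
    }

def sensorthings_handler(request):
    # Stand-in for a device answering an OGC SensorThings-style query.
    observation = {"phenomenonTime": "2015-01-01T00:00:00Z", "result": 21.5}
    return json.dumps(observation)

result = benchmark(sensorthings_handler, "GET /v1.0/Observations(1)")
print(sorted(result))
```

A real comparison would run such a harness against each protocol stack on the device itself, alongside a memory-consumption measurement.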

  13. SCHeMA web-based observation data information system

    NASA Astrophysics Data System (ADS)

    Novellino, Antonio; Benedetti, Giacomo; D'Angelo, Paolo; Confalonieri, Fabio; Massa, Francesco; Povero, Paolo; Tercier-Waeber, Marie-Louise

    2016-04-01

    It is well recognized that the need to share ocean data among non-specialized users is constantly increasing. Initiatives that are built upon international standards will contribute to simplify data processing and dissemination, improve user accessibility also through web browsers, facilitate the sharing of information across the integrated network of ocean observing systems, and ultimately provide a better understanding of the ocean functioning. The SCHeMA (Integrated in Situ Chemical MApping probe) Project is developing an open and modular sensing solution for autonomous in situ high resolution mapping of a wide range of anthropogenic and natural chemical compounds coupled to master bio-physicochemical parameters (www.schema-ocean.eu). The SCHeMA web system is designed to ensure user-friendly data discovery, access and download as well as interoperability with other projects through a dedicated interface that implements the Global Earth Observation System of Systems - Common Infrastructure (GCI) recommendations and the international Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards. This approach will ensure data accessibility in compliance with major European Directives and recommendations. Being modular, the system allows the plug-and-play of commercially available probes as well as new sensor probes under development within the project. Access to the network of monitoring probes is provided via a web-based system interface that, being implemented as a SOS (Sensor Observation Service), provides standardized, interoperable access to sensor observations through the O&M standard, as well as sensor descriptions encoded in Sensor Model Language (SensorML). The use of common vocabularies in all metadatabases and data formats, describing data in an already harmonized common standard, is a prerequisite for consistency and interoperability.
Therefore, the SCHeMA SOS has adopted the SeaVox common vocabularies populated by the SeaDataNet network of National Oceanographic Data Centres. The SCHeMA presentation layer, a fundamental part of the software architecture, offers the user bidirectional interaction with the integrated system, allowing them to manage and configure the sensor probes, view the stored observations and metadata, and handle alarms. The overall structure of the web portal developed within the SCHeMA initiative (sensor configuration, development of a Core Profile interface for data access via the OGC standard, external services such as web services, WMS and WFS, and a data download and query manager) will be presented and illustrated with examples of ongoing tests in coastal and open sea.
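
    As an illustration of the SOS access pattern described above, a GetObservation request in the standard's key-value-pair (KVP) encoding can be assembled as follows; the endpoint, offering and property identifiers are placeholders, not SCHeMA's actual ones:

```python
from urllib.parse import urlencode

def sos_get_observation_url(endpoint, offering, observed_property, t0, t1):
    """Build an OGC SOS 2.0 GetObservation request in KVP encoding."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        # temporal filter on om:phenomenonTime over an ISO 8601 interval
        "temporalFilter": f"om:phenomenonTime,{t0}/{t1}",
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation_url(
    "http://example.org/schema-sos/service",
    "offering-probe-1", "sea_water_temperature",
    "2016-01-01T00:00:00Z", "2016-01-02T00:00:00Z")
print(url)
```

The response would carry the observations encoded per the O&M standard, which any SOS-aware client can consume.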

  14. Eigensolver for a Sparse, Large Hermitian Matrix

    NASA Technical Reports Server (NTRS)

    Tisdale, E. Robert; Oyafuso, Fabiano; Klimeck, Gerhard; Brown, R. Chris

    2003-01-01

    A parallel-processing computer program finds a few eigenvalues in a sparse Hermitian matrix that contains as many as 100 million diagonal elements. This program finds the eigenvalues faster, using less memory, than do other, comparable eigensolver programs. This program implements a Lanczos algorithm in the American National Standards Institute/International Organization for Standardization (ANSI/ISO) C computing language, using the Message Passing Interface (MPI) standard to complement an eigensolver in PARPACK. [PARPACK (Parallel Arnoldi Package) is an extension, to parallel-processing computer architectures, of ARPACK (Arnoldi Package), which is a collection of Fortran 77 subroutines that solve large-scale eigenvalue problems.] The eigensolver runs on Beowulf clusters of computers at the Jet Propulsion Laboratory (JPL).
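
    The Lanczos iteration the program builds on can be sketched in a few lines. The sketch below is a bare real-symmetric illustration with no reorthogonalization, far simpler than the robust PARPACK implementation the program complements:

```python
import numpy as np

def lanczos_extreme_eigs(A, k, seed=0):
    """Approximate extreme eigenvalues of a symmetric matrix A by plain
    Lanczos tridiagonalization (no reorthogonalization)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev      # only matrix-vector products touch A
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:               # invariant subspace found
            break
        q_prev, q = q, w / beta
    # Eigenvalues of the small tridiagonal matrix T approximate the
    # extreme eigenvalues of A.
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

A = np.diag(np.arange(1.0, 101.0))     # known spectrum 1..100
ritz = lanczos_extreme_eigs(A, k=60)
print(round(float(ritz[-1]), 2))       # largest Ritz value approaches 100
```

Because the matrix enters only through matrix-vector products, the same structure parallelizes naturally over MPI, which is how the JPL eigensolver scales to matrices with millions of elements.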

  15. Implementation and evaluation of a community-based medication reconciliation (CMR) system at the hospital-community interface of care.

    PubMed

    Bailey, Allan L; Moe, Grace; Moe, Jessica; Oland, Ryan

    2009-01-01

    The WestView community-based medication reconciliation (CMR) program aims to decrease medication error risk. A clinical pharmacist visits patients' homes within 72 hours of hospital discharge and compares medications in discharge orders, family physicians' charts, community pharmacy profiles and in the home. Discrepancies are discussed and reconciled with the dispenser, hospital prescriber and follow-up care provider. The CMR demonstrates successful integration that is patient-centred and standardized, bridging the hospital-community interface and improving information flow and communication channels across a family-physician-led multi-disciplinary team. A concurrent research study will evaluate the impact of CMR on health services utilization and develop a risk prediction model.

  16. RefPrimeCouch—a reference gene primer CouchApp

    PubMed Central

    Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus

    2013-01-01

    To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831

  17. RefPrimeCouch--a reference gene primer CouchApp.

    PubMed

    Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus

    2013-01-01

    To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html.

  18. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  19. Interface design for CMOS-integrated Electrochemical Impedance Spectroscopy (EIS) biosensors.

    PubMed

    Manickam, Arun; Johnson, Christopher Andrew; Kavusi, Sam; Hassibi, Arjang

    2012-10-29

    Electrochemical Impedance Spectroscopy (EIS) is a powerful electrochemical technique to detect biomolecules. EIS has the potential of carrying out label-free and real-time detection, and in addition, can be easily implemented using electronic integrated circuits (ICs) that are built through standard semiconductor fabrication processes. This paper focuses on the various design and optimization aspects of EIS ICs, particularly the bio-to-semiconductor interface design. We discuss, in detail, considerations such as the choice of the electrode surface in view of IC manufacturing, surface linkers, and development of optimal bio-molecular detection protocols. We also report experimental results, using both macro- and micro-electrodes to demonstrate the design trade-offs and ultimately validate our optimization procedures.

  20. Real-time Experiment Interface for Biological Control Applications

    PubMed Central

    Lin, Risa J.; Bettencourt, Jonathan; White, John A.; Christini, David J.; Butera, Robert J.

    2013-01-01

    The Real-time Experiment Interface (RTXI) is a fast and versatile real-time biological experimentation system based on Real-Time Linux. RTXI is open source and free, can be used with an extensive range of experimentation hardware, and can be run on Linux or Windows computers (when using the Live CD). RTXI is currently used extensively for two experiment types: dynamic patch clamp and closed-loop stimulation pattern control in neural and cardiac single cell electrophysiology. RTXI includes standard plug-ins for implementing commonly used electrophysiology protocols with synchronized stimulation, event detection, and online analysis. These and other user-contributed plug-ins can be found on the website (http://www.rtxi.org). PMID:21096883

  1. A Separate Compilation Extension to Standard ML (Revised and Expanded)

    DTIC Science & Technology

    2006-09-17

    repetition of interfaces. The language is given a formal semantics, and we argue that this semantics is implementable in a variety of compilers. This...material is based on work supported in part by the National Science Foundation under grant 0121633 Language Technology for Trustless Software...Dissemination and by the Defense Advanced Research Projects Agency under contracts F196268-95-C-0050 The Fox Project: Advanced Languages for Systems Software

  2. WIS Implementation Study Report. Volume 2. Resumes.

    DTIC Science & Technology

    1983-10-01

    WIS modernization that major attention be paid to interface definition and design, system integration and test, and configuration management of the...Estimates -- Computer Corporation of America -- 155 Test Processing Systems -- Newburyport Computer Associates, Inc. -- 183 Cluster II Papers-- Standards...enhancements of the SPL/I compiler system, development of test systems for the verification of SDEX/M and the timing and architecture of the AN/U YK-20 and

  3. ECL gate array with integrated PLL-based clock recovery and synthesis for high-speed data and telecom applications

    NASA Astrophysics Data System (ADS)

    Rosky, David S.; Coy, Bruce H.; Friedmann, Marc D.

    1992-03-01

    A 2500-gate mixed-signal gate array has been developed that integrates custom PLL-based clock recovery and clock synthesis functions with 2500 gates of configurable logic cells to provide a single-chip solution for 200 - 1244 MHz fiber-based digital interface applications. By customizing the digital logic cells, any of the popular telecom and datacom standards may be implemented.

  4. OTF CCSDS Mission Operations Prototype Parameter Service. Phase I: Exit Presentation

    NASA Technical Reports Server (NTRS)

    Reynolds, Walter F.; Lucord, Steven A.; Stevens, John E.

    2009-01-01

    This slide presentation reviews Phase I of the prototype parameter service design for CCSDS mission operations. The project goals are to: (1) demonstrate the use of Mission Operations standards to implement the Parameter Service; (2) demonstrate interoperability between the Houston MCC and a CCSDS Mission Operations compliant mission operations center; and (3) utilize the Mission Operations Common Architecture. The parameter service design, interfaces, and structures are described.

  5. Constitutive Modeling of the Facesheet to Core Interface in Honeycomb Sandwich Panels Subject to Mode I Delamination

    NASA Technical Reports Server (NTRS)

    Hoewer, Daniel; Lerch, Bradley A.; Bednarcyk, Brett A.; Pineda, Evan Jorge; Reese, Stefanie; Simon, Jaan-Willem

    2017-01-01

    A new cohesive zone traction-separation law, which includes the effects of fiber bridging, has been developed, implemented within a finite element (FE) model, and applied to simulate the delamination between the facesheet and core of a composite honeycomb sandwich panel. The proposed traction-separation law includes a standard initial cohesive component, which accounts for the initial interfacial stiffness and energy release rate, along with a new component to account for the fiber-bridging contribution to the delamination process. Single cantilever beam tests on aluminum honeycomb sandwich panels with carbon fiber reinforced polymer facesheets were used to characterize and evaluate the new formulation and its finite element implementation. These tests, designed to evaluate the mode I toughness of the facesheet-to-core interface, exhibited significant fiber bridging and large crack process zones, giving rise to a concave-downward to concave-upward pre-peak shape in the load-displacement curve. Unlike standard cohesive formulations, the proposed formulation captures this observed shape, and its results have been shown to be in excellent quantitative agreement with experimental load-displacement and apparent critical energy release rate results, representative of a payload fairing structure, as well as with local strain fields measured with digital image correlation.
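
    The two-part law described above can be pictured schematically as a standard bilinear cohesive traction plus a longer-range bridging contribution. The functional forms and parameters below (T_max, delta_0, delta_f, sigma_b, delta_b) are generic illustrations of this additive structure, not the paper's calibrated law:

```latex
T(\delta) \;=\; T_{\mathrm{coh}}(\delta) \;+\; T_{\mathrm{br}}(\delta),
\qquad
T_{\mathrm{coh}}(\delta) =
\begin{cases}
T_{\max}\,\delta/\delta_0, & \delta \le \delta_0,\\[2pt]
T_{\max}\,\dfrac{\delta_f-\delta}{\delta_f-\delta_0}, & \delta_0 < \delta \le \delta_f,\\[2pt]
0, & \delta > \delta_f,
\end{cases}
\qquad
T_{\mathrm{br}}(\delta) = \sigma_b\!\left(1-\frac{\delta}{\delta_b}\right),
\;\; 0 \le \delta \le \delta_b,\; \delta_b \gg \delta_f .
```

The bridging term's long tail (delta_b much larger than delta_f) is what lets such a formulation reproduce large process zones of the kind the single cantilever beam tests exhibited.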

  6. BladeCAD: An Interactive Geometric Design Tool for Turbomachinery Blades

    NASA Technical Reports Server (NTRS)

    Miller, Perry L., IV; Oliver, James H.; Miller, David P.; Tweedt, Daniel L.

    1996-01-01

    A new methodology for interactive design of turbomachinery blades is presented. Software implementation of the methods provides a user interface that is intuitive to aero-designers while operating with standardized geometric forms. The primary contribution is that blade sections may be defined with respect to general surfaces of revolution which may be defined to represent the path of fluid flow through the turbomachine. The completed blade design is represented as a non-uniform rational B-spline (NURBS) surface and is written to a standard IGES file which is portable to most design, analysis, and manufacturing applications.

  7. New generation of telemetry systems using CCSDS packetisation - A prototype implementation

    NASA Astrophysics Data System (ADS)

    Sotta, J. P.; Held, K.

    1988-07-01

    The system described herein was developed under ESA contract to support the introduction of new telemetry standards based on the packetized telemetry data concept. These standards were derived from recommendations developed within the framework of the CCSDS, an inter-agency committee whose members include most European national agencies, ESA and NASA, as well as the Japanese NASDA, the Indian ISRO and the Brazilian INPE, and whose objective is to facilitate cross-support for space missions. The development is based on the present generation of the ESA on-board data handling (OBDH) subsystem and is fully compatible with OBDH bus interfaces and transfer protocol.

  8. Migration of legacy mumps applications to relational database servers.

    PubMed

    O'Kane, K C

    2001-07-01

    An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
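
    The hierarchical-to-relational bridge such a system enables can be pictured as flattening subscripted global nodes into (global, subscript path, value) rows that map naturally onto an SQL table. This sketch is purely illustrative and is not the system's actual translation scheme:

```python
def globals_to_rows(globals_dict):
    """Flatten nested Mumps-style globals into relational-friendly rows."""
    rows = []

    def walk(name, node, path):
        for key, val in sorted(node.items()):
            if isinstance(val, dict):
                walk(name, val, path + [key])    # descend one subscript level
            else:
                rows.append((name, ",".join(path + [key]), val))

    for name, tree in globals_dict.items():
        walk(name, tree, [])
    return rows

# Hypothetical patient global ^PAT("1","name")="Doe,John", etc.
patients = {"^PAT": {"1": {"name": "Doe,John", "dob": "1970-01-01"}}}
for row in globals_to_rows(patients):
    print(row)
```

Each emitted row corresponds to one leaf node of the global tree, so an `INSERT` per row reproduces the hierarchical data in a flat relational table.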

  9. A generic, web-based clinical information system architecture using HL7 CDA: successful implementation in dermatological routine care.

    PubMed

    Schuler, Thilo; Boeker, Martin; Klar, Rüdiger; Müller, Marcel

    2007-01-01

    The requirements of highly specialized clinical domains are often underrepresented in hospital information systems (HIS). Common consequences are that documentation remains to be paper-based or external systems with insufficient HIS integration are used. This paper presents a solution to overcome this deficiency in the form of a generic framework based on the HL7 Clinical Document Architecture. The central architectural idea is the definition of customized forms using a schema-controlled XML language. These flexible form definitions drive the user interface, the data storage, and standardized data exchange. A successful proof-of-concept application in a dermatologic outpatient wound care department has been implemented, and is well accepted by the clinicians. Our work with HL7 CDA revealed the need for further practical research in the health information standards realm.

  10. The IRIS Federator: Accessing Seismological Data Across Data Centers

    NASA Astrophysics Data System (ADS)

    Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Weekly, R. T.

    2015-12-01

    In 2013 the International Federation of Digital Seismograph Networks (FDSN) approved a specification for web service interfaces for accessing seismological station metadata, time series and event parameters. Since then, a number of seismological data centers have implemented FDSN service interfaces, with more implementations in development. We have developed a new system called the IRIS Federator which leverages this standardization and provides the scientific community with a service for easy discovery and access of seismological data across FDSN data centers. These centers are located throughout the world, and this work represents one model of a system for data collection across geographic and political boundaries. The main components of the IRIS Federator are a catalog of time series metadata holdings at each data center and a web service interface for searching the catalog. The service interface is designed to support client-side federated data access, a model in which the client (software run by the user) queries the catalog and then collects the data from each identified center. By default the results are returned in a format suitable for direct submission to those web services, but they can also be formatted in a simple text format for general data discovery purposes. The interface will remove any duplication of time series channels between data centers according to a set of business rules by default; however, a user may request results with all duplicate time series entries included. We will demonstrate how client-side federation is being incorporated into some of the DMC's data access tools. We anticipate further enhancement of the IRIS Federator to improve data discovery in various scenarios and to improve usefulness to communities beyond seismology. Data centers with FDSN web services: http://www.fdsn.org/webservices/. The IRIS Federator query interface: http://service.iris.edu/irisws/fedcatalog/1/
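
    Client-side federation amounts to: query the catalog, split the response by data center, then fetch each piece from that center's own FDSN services. A sketch of the splitting step, with the catalog response format simplified for illustration:

```python
def parse_fedcatalog(text):
    """Group channel request lines by data center from a fedcatalog-style
    response (format simplified for illustration)."""
    centers, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("DATACENTER="):
            current = line.split("=", 1)[1].split(",")[0]
            centers[current] = []
        elif "=" in line.split()[0]:
            continue                     # per-center service URL lines
        elif current:
            centers[current].append(line)  # one channel request line
    return centers

sample = """DATACENTER=IRISDMC,http://ds.iris.edu
DATASELECTSERVICE=http://service.iris.edu/fdsnws/dataselect/1/
IU ANMO 00 BHZ 2015-01-01T00:00:00 2015-01-02T00:00:00

DATACENTER=GEOFON,http://geofon.gfz-potsdam.de
GE APE -- BHZ 2015-01-01T00:00:00 2015-01-02T00:00:00"""
print({dc: len(lines) for dc, lines in parse_fedcatalog(sample).items()})
```

A client would then POST each center's group of lines to that center's dataselect service, merging the returned time series locally.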

  11. New Approaches for Direct Current (DC) Balanced SpaceWire

    NASA Technical Reports Server (NTRS)

    Kisin, Alex; Rakow, Glenn

    2016-01-01

    Direct Current (DC) line balanced SpaceWire is attractive for a number of reasons. First, a DC-balanced interface provides the ability to isolate the physical layer with either a transformer or a capacitor, achieving higher common-mode voltage rejection and/or, in the case of a transformer, complete galvanic isolation. Second, it offers the possibility to reduce the number of conductors and transceivers in the classical SpaceWire interface by half by eliminating the Strobe line. Depending on the modulation scheme and the Field Programmable Gate Array (FPGA) decoder design, the clock data recovery frequency requirement may be only twice the transmit clock, or even match it. In this paper, several different implementation scenarios will be discussed. Two of these scenarios are backward compatible with the existing SpaceWire hardware standards except for changes at the character level. Three other scenarios, while halving the standard SpaceWire hardware components, will require changes at both the character and signal levels and work at fixed rates. Other scenarios with variable data rates will require an additional SpaceWire interface handshake initialization sequence.

  12. jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats.

    PubMed

    Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2012-03-01

    We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
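
    The library's key design point, a single interface over many peak-list formats, can be sketched as follows. The real jmzReader API is Java; the Python class and function names here are hypothetical stand-ins, and only a minimal slice of the MGF format is parsed:

```python
from abc import ABC, abstractmethod

class SpectrumReader(ABC):
    """Common interface: consumers iterate spectra without knowing the format."""
    @abstractmethod
    def spectra(self):
        """Yield (spectrum_id, [(mz, intensity), ...]) pairs."""

class MgfReader(SpectrumReader):
    def __init__(self, text):
        self.text = text

    def spectra(self):
        sid, peaks = None, []
        for line in self.text.splitlines():
            line = line.strip()
            if line == "BEGIN IONS":
                sid, peaks = None, []
            elif line.startswith("TITLE="):
                sid = line[len("TITLE="):]
            elif line == "END IONS":
                yield sid, peaks
            elif line and line[0].isdigit():
                mz, intensity = line.split()[:2]
                peaks.append((float(mz), float(intensity)))

def total_ion_current(reader):
    # Works with any SpectrumReader implementation, MGF or otherwise.
    return {sid: sum(i for _, i in peaks) for sid, peaks in reader.spectra()}

mgf = """BEGIN IONS
TITLE=spec1
100.0 10.0
200.0 5.0
END IONS"""
print(total_ion_current(MgfReader(mgf)))
```

Adding support for another format (DTA, mzXML, ...) means writing one more `SpectrumReader` subclass; consumers like `total_ion_current` never change, which is exactly the property mzIdentML tooling exploits.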

  13. Decentralized and Modular Electrical Architecture

    NASA Astrophysics Data System (ADS)

    Elisabelar, Christian; Lebaratoux, Laurence

    2014-08-01

    This paper presents studies on the definition and design of a decentralized and modular electrical architecture that can be used for power distribution, active thermal control (ATC), and standard input-output electrical interfaces. Traditionally implemented inside a central unit such as an OBC or RTU, these interfaces can be dispatched throughout the satellite by using MicroRTU. CNES proposes a similar approach to MicroRTU. The system is based on a bus called BRIO (Bus Réparti des IO), which is composed of a power bus and an RS485 digital bus. The BRIO architecture comprises several miniature terminals called BTCUs (BRIO Terminal Control Units) distributed in the spacecraft. The challenge was to design and develop the BTCU with very small volume, low consumption and low cost. The standard BTCU models are developed and qualified in a configuration dedicated to ATC, while the first flight model will fly on MICROSCOPE for pyro actuations and analogue acquisitions. The BTCU is designed to be easily adaptable to all types of electrical interface needs. Extension of this concept to power conditioning and distribution is envisaged, and a modular PCDU based on the BRIO concept is proposed.

  14. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government and academic representatives who have defined an industry standard API for vector, signal, and image processing primitives for real-time signal processing on high performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality and the status of various implementations.
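
    The block-and-view abstraction mentioned above can be mimicked in a few lines of pure Python (an illustrative analogy with invented class names, not VSIPL's actual C API): a "block" owns contiguous storage, and a "view" is a strided window onto it that copies nothing.

    ```python
    class Block:
        """Owns contiguous storage (VSIPL-style memory block)."""
        def __init__(self, data):
            self.data = list(data)

    class VectorView:
        """A strided window onto a block; reads and writes share the
        block's storage rather than copying it."""
        def __init__(self, block, offset, stride, length):
            self.block, self.offset = block, offset
            self.stride, self.length = stride, length

        def __getitem__(self, i):
            return self.block.data[self.offset + i * self.stride]

        def __setitem__(self, i, value):
            self.block.data[self.offset + i * self.stride] = value

    block = Block(range(12))
    evens = VectorView(block, offset=0, stride=2, length=6)  # every other element
    evens[1] = 99               # writing through the view...
    assert block.data[2] == 99  # ...mutates the underlying block
    assert evens[2] == 4        # reads also reflect the shared storage
    ```

    A matrix view would simply add a second (row) stride over the same block, which is how the API stays independent of the machine's memory layout.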

  15. Implementation of Ada protocols on Mil-STD-1553 B data bus

    NASA Technical Reports Server (NTRS)

    Ruhman, Smil; Rosemberg, Flavia

    1986-01-01

    Standardization activity for data communication in avionic systems started in 1968 for the purpose of total system integration and the elimination of heavy wire bundles carrying signals between various subassemblies. The growing complexity of avionic systems is straining the capabilities of MIL-STD-1553 B (first issued in 1973), but a much greater challenge to it is posed by Ada, the standard language adopted for real-time embedded computer systems. Hardware implementation of Ada communication protocols in a contention/token bus or token ring network is proposed. However, during the transition period, when the current command/response multiplex data bus is still flourishing and the development environment for distributed multi-computer Ada systems is as yet lacking, a temporary accommodation of the standard language with the standard bus could be very useful and even highly desirable. By concentrating all status information and decisions at the bus controller, it was found possible to construct an elegant and efficient hardware implementation of the Ada protocols at the bus interface. This solution is discussed.

  16. Steady-State Cycle Deck Launcher Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    VanDrei, Donald E.

    1997-01-01

    One of the objectives of NASA's High Performance Computing and Communications Program's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to reduce the time and cost of generating aerothermal numerical representations of engines, called customer decks. These customer decks, which are delivered to airframe companies by various U.S. engine companies, numerically characterize an engine's performance as defined by the particular U.S. airframe manufacturer. Until recently, all numerical models were provided with a Fortran-compatible interface in compliance with the Society of Automotive Engineers (SAE) document AS681F, and data communication was performed via a standard, labeled common structure in compliance with AS681F. Recently, the SAE committee began to develop a new standard: AS681G. AS681G addresses multiple language requirements for customer decks along with alternative data communication techniques. Along with the SAE committee, the NPSS Steady-State Cycle Deck project team developed a standard Application Program Interface (API) supported by a graphical user interface. This work will result in Aerospace Recommended Practice 4868 (ARP4868). The Steady-State Cycle Deck work was validated against the Energy Efficient Engine customer deck, which is publicly available. The Energy Efficient Engine wrapper was used not only to validate ARP4868 but also to demonstrate how to wrap an existing customer deck. The graphical user interface for the Steady-State Cycle Deck facilitates the use of the new standard and makes it easier to design and analyze a customer deck. This software was developed following I. Jacobson's Object-Oriented Design methodology and is implemented in C++. The AS681G standard will establish a common generic interface for U.S. engine companies and airframe manufacturers. This will lead to more accurate cycle models, quicker model generation, and faster validation leading to specifications. The standard will facilitate cooperative work between industry and NASA. The NPSS Steady-State Cycle Deck team released a batch version of the Steady-State Cycle Deck in March 1996. Version 1.1 was released in June 1996. During fiscal 1997, NPSS accepted enhancements and modifications to the Steady-State Cycle Deck launcher. Consistent with NPSS' commercialization plan, these modifications will be done by a third party that can provide long-term software support.

  17. A Primer for Telemetry Interfacing in Accordance with NASA Standards Using Low Cost FPGAs

    NASA Astrophysics Data System (ADS)

    McCoy, Jake; Schultz, Ted; Tutt, James; Rogers, Thomas; Miles, Drew; McEntaffer, Randall

    2016-03-01

    Photon counting detector systems on sounding rocket payloads often require interfacing asynchronous outputs with a synchronously clocked telemetry (TM) stream. Though this can be handled with an on-board computer, there are several low cost alternatives including custom hardware, microcontrollers and field-programmable gate arrays (FPGAs). This paper outlines how a TM interface (TMIF) for detectors on a sounding rocket with asynchronous parallel digital output can be implemented using low cost FPGAs and minimal custom hardware. Low power consumption and high speed FPGAs are available as commercial off-the-shelf (COTS) products and can be used to develop the main component of the TMIF. Then, only a small amount of additional hardware is required for signal buffering and level translating. This paper also discusses how this system can be tested with a simulated TM chain in the small laboratory setting using FPGAs and COTS specialized data acquisition products.

  18. Overcoming the brittleness of glass through bio-inspiration and micro-architecture.

    PubMed

    Mirkhalaf, M; Dastjerdi, A Khayer; Barthelat, F

    2014-01-01

    Highly mineralized natural materials such as teeth or mollusk shells boast unusual combinations of stiffness, strength and toughness currently unmatched by engineering materials. While high mineral contents provide stiffness and hardness, these materials also contain weaker interfaces with intricate architectures, which can channel propagating cracks into toughening configurations. Here we report the implementation of these features into glass, using a laser engraving technique. Three-dimensional arrays of laser-generated microcracks can deflect and guide larger incoming cracks, following the concept of 'stamp holes'. Jigsaw-like interfaces, infiltrated with polyurethane, furthermore channel cracks into interlocking configurations and pullout mechanisms, significantly enhancing energy dissipation and toughness. Compared with standard glass, which has no microstructure and is brittle, our bio-inspired glass displays built-in mechanisms that make it more deformable and 200 times tougher. This bio-inspired approach, based on carefully architectured interfaces, provides a new pathway to toughening glasses, ceramics or other hard and brittle materials.

  19. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.

  20. Overcoming the brittleness of glass through bio-inspiration and micro-architecture

    NASA Astrophysics Data System (ADS)

    Mirkhalaf, M.; Dastjerdi, A. Khayer; Barthelat, F.

    2014-01-01

    Highly mineralized natural materials such as teeth or mollusk shells boast unusual combinations of stiffness, strength and toughness currently unmatched by engineering materials. While high mineral contents provide stiffness and hardness, these materials also contain weaker interfaces with intricate architectures, which can channel propagating cracks into toughening configurations. Here we report the implementation of these features into glass, using a laser engraving technique. Three-dimensional arrays of laser-generated microcracks can deflect and guide larger incoming cracks, following the concept of ‘stamp holes’. Jigsaw-like interfaces, infiltrated with polyurethane, furthermore channel cracks into interlocking configurations and pullout mechanisms, significantly enhancing energy dissipation and toughness. Compared with standard glass, which has no microstructure and is brittle, our bio-inspired glass displays built-in mechanisms that make it more deformable and 200 times tougher. This bio-inspired approach, based on carefully architectured interfaces, provides a new pathway to toughening glasses, ceramics or other hard and brittle materials.

  1. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    PubMed

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.

  2. Numerical modeling of interface displacement in heterogeneously wetting porous media

    NASA Astrophysics Data System (ADS)

    Hiller, T.; Brinkmann, M.; Herminghaus, S.

    2013-12-01

    We use the mesoscopic particle method stochastic rotation dynamics (SRD) to simulate immiscible multi-phase flow on the pore and sub-pore scale in three dimensions. As an extension to the standard SRD method, we present an approach to implementing complex wettability on heterogeneous surfaces. We use 3D SRD to simulate immiscible two-phase flow through a model porous medium (a disordered packing of spherical beads) where the substrate exhibits different spatial wetting patterns. The simulations are designed to resemble experimental measurements of capillary pressure-saturation relations. We show that the correlation length of the wetting patterns influences the temporal evolution of the interface and thus percolation, residual saturation and the work dissipated during fluid displacement. Our numerical results are in qualitatively good agreement with the experimental data. Besides modeling flow in porous media, our SRD implementation allows us to address various questions of interfacial dynamics, e.g. the formation of capillary bridges between spherical beads, or droplets in microfluidic applications, to name only a few.

  3. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.

  4. Design and Applications of a Multimodality Image Data Warehouse Framework

    PubMed Central

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  5. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena

    2013-06-01

    Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
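
    For reference, the floating-point baseline mentioned above, a standard Kalman filter in predict/update form, reduces to a few lines in the scalar case. This is a generic textbook sketch with made-up noise parameters, not the decoder's actual state-space model:

    ```python
    # Scalar Kalman filter estimating a constant from repeated measurements.
    # Q: process-noise variance, R: measurement-noise variance (illustrative values).
    x, P = 0.0, 1.0           # state estimate and its variance
    Q, R = 1e-4, 0.1

    for z in [5.0] * 50:      # measurements of the true value 5.0
        P = P + Q             # predict: variance grows by process noise
        K = P / (P + R)       # Kalman gain: trust in the new measurement
        x = x + K * (z - x)   # update: blend prediction with measurement
        P = (1 - K) * P       # posterior variance shrinks after the update

    assert abs(x - 5.0) < 0.1  # estimate has converged near the true value
    ```

    The vector form used in BMI decoding replaces the scalars with state/observation matrices, but the predict/gain/update cycle is identical, which is what makes it a natural target for mapping onto an SNN via the NEF.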

  6. The need for GPS standardization

    NASA Technical Reports Server (NTRS)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine

    1992-01-01

    A desirable and necessary step for improvement of the accuracy of Global Positioning System (GPS) time comparisons is the establishment of common GPS standards. For this reason, the CCDS proposed the creation of a special group of experts with the objective of recommending procedures and models for operational time transfer by GPS common-view method. Since the announcement of the implementation of Selective Availability at the end of last spring, action has become much more urgent and this CCDS Group on GPS Time Transfer Standards has now been set up. It operates under the auspices of the permanent CCDS Working Group on TAI and works in close cooperation with the Sub-Committee on Time of the Civil GPS Service Interface Committee (CGSIC). Taking as an example the implementation of SA during the first week of July 1991, this paper illustrates the need to develop urgently at least two standardized procedures in GPS receiver software: monitoring GPS tracks with a common time scale and retaining broadcast ephemeris parameters throughout the duration of a track. Other matters requiring action are the adoption of common models for atmospheric delay, a common approach to hardware design and agreement about short-term data processing. Several examples of such deficiencies in standardization are presented.

  7. Virtual Observatory Interfaces to the Chandra Data Archive

    NASA Astrophysics Data System (ADS)

    Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.

    2014-05-01

    The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
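
    Of the protocols named above, Simple Cone Search is the easiest to illustrate: it is an HTTP GET carrying RA, DEC, and SR (search radius) parameters in decimal degrees. A sketch of building such a request follows; the service URL is a placeholder, not the CDA's actual endpoint:

    ```python
    from urllib.parse import urlencode

    def cone_search_url(base, ra_deg, dec_deg, radius_deg):
        """Build an IVOA Simple Cone Search query URL (RA/DEC/SR in degrees)."""
        return base + "?" + urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})

    # Hypothetical endpoint; a real SCS service answers with a VOTable of matches.
    url = cone_search_url("https://archive.example.edu/scs", 83.63, 22.01, 0.1)
    assert url == "https://archive.example.edu/scs?RA=83.63&DEC=22.01&SR=0.1"
    ```

    SIA and TAP follow the same pattern of standardized HTTP parameters, which is what lets a thin web-service layer sit in front of an existing archive.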

  8. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
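
    The analytical approach the abstract describes, solving linear rate equations in closed form, is easiest to see for the smallest scheme, a reversible two-state reaction A ⇌ B; the rate constants below are arbitrary illustrative values, not from VisKin:

    ```python
    import math

    # A <-> B with forward rate kf and reverse rate kr (illustrative values).
    # d[A]/dt = -kf*[A] + kr*[B]; with [A] + [B] conserved, the solution
    # relaxes exponentially to equilibrium with observed rate (kf + kr).
    kf, kr = 2.0, 1.0
    a0, b0 = 1.0, 0.0
    total = a0 + b0

    def a_of_t(t):
        a_eq = total * kr / (kf + kr)                # equilibrium concentration of A
        return a_eq + (a0 - a_eq) * math.exp(-(kf + kr) * t)

    assert abs(a_of_t(0.0) - a0) < 1e-12             # matches the initial condition
    assert abs(a_of_t(10.0) - total / 3.0) < 1e-9    # relaxes to a_eq = kr/(kf+kr)
    ```

    For larger interconnected schemes the same idea generalizes: the rate matrix is diagonalized and each concentration becomes a sum of exponentials, which is why first-order rates and amplitudes can be compared quickly across ligand concentrations.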

  9. A generative tool for building health applications driven by ISO 13606 archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás

    2012-10-01

    The use of Electronic Healthcare Record (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of information and knowledge. However, the impact of such standards is limited by the scarcity of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. This generator has been designed generically, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. These good properties are based on the combination of standards for the representation of generic user interfaces and model-driven engineering techniques.

  10. Exploring knowledge exchange at the research-policy-practice interface in children's behavioral health services.

    PubMed

    Leslie, Laurel K; Maciolek, Susan; Biebel, Kathleen; Debordes-Jackson, Gifty; Nicholson, Joanne

    2014-11-01

    This case study explored core components of knowledge exchange among researchers, policymakers, and practitioners within the context of the Rosie D. versus Romney class action lawsuit in Massachusetts and the development and implementation of its remedial plan. We identified three distinct, sequential knowledge exchange episodes with different purposes, stakeholders, and knowledge exchanged, as decision-making moved from Federal Medicaid policy to state Medicaid program standards and to community-level practice. The knowledge exchanged included research regarding Wraparound, a key component of the remedial plan, as well as contextual information critical for implementation (e.g., Federal Medicaid policy, managed care requirements, community organizations' characteristics).

  11. Mobile work platform for initial lunar base construction

    NASA Technical Reports Server (NTRS)

    Brazell, James W.; Maclaren, Brice K.; Mcmurray, Gary V.; Williams, Wendell M.

    1992-01-01

    Described is a system of equipment intended for site preparation and construction of a lunar base. The proximate era of lunar exploration and the initial phase of outpost habitation are addressed. Drilling, leveling, trenching, and cargo handling are within the scope of the system's capabilities. The centerpiece is a three-legged mobile work platform, named SKITTER. Using standard interfaces, the system is modular in nature and analogous to the farmer's tractor and implement set. Conceptually somewhat different from their Earthbound counterparts, the implements are designed to take advantage of the lunar environment as well as the capabilities of the work platform. The proposed system is mechanically simple and weight efficient.

  12. Design and implementation of a CORBA-based genome mapping system prototype.

    PubMed

    Hu, J; Mungall, C; Nicholson, D; Archibald, A L

    1998-01-01

    CORBA (Common Object Request Broker Architecture), as an open standard, is considered to be a good solution for the development and deployment of applications in distributed heterogeneous environments. This technology can be applied in the bioinformatics area to enhance utilization, management and interoperation between biological resources. This paper investigates issues in developing CORBA applications for genome mapping information systems in the Internet environment with emphasis on database connectivity and graphical user interfaces. The design and implementation of a CORBA prototype for an animal genome mapping database are described. The prototype demonstration is available via: http://www.ri.bbsrc.ac.uk/ark_corba/. jian.hu@bbsrc.ac.uk

  13. CADBIT II - Computer-Aided Design for Built-In Test. Volume 1

    DTIC Science & Technology

    1993-06-01

    Data provided in the CADBIT I Final Report, as indicated in Figure 1.2: CADBIT II implements the system concept, requirements, and data developed during CADBIT I. The CADBIT II software was developed using de facto computer standards including Unix, C, and the X Windows-based OSF/Motif graphical user interface, and can export connectivity information. Design Architect is a package for designers that includes schematic capture, a VHDL editor, and libraries of digital …

  14. Atmospheric Modeling And Sensor Simulation (AMASS) study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1985-01-01

    A 4800 baud synchronous communications link was established between the Perkin-Elmer (P-E) 3250 Atmospheric Modeling and Sensor Simulation (AMASS) system and the Cyber 205 located at the Goddard Space Flight Center. An extension study of off-the-shelf array processors offering a standard interface to the Perkin-Elmer was conducted to determine which would meet the computational requirements of the division. A Floating Point Systems AP-120B was borrowed from another Marshall Space Flight Center laboratory for evaluation. It was determined that available array processors did not offer significantly more capability than the borrowed unit, although at least three other vendors indicated that standard Perkin-Elmer interfaces would be marketed in the future. Therefore, the recommendation was made to continue to utilize the AP-120B and to keep monitoring the array processor market. Hardware necessary to support the requirements of the ASD as well as to enhance system performance was specified and procured. Filters were implemented on the Harris/McIDAS system, including two-dimensional lowpass, gradient, Laplacian, and bicubic interpolation routines.

  15. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    PubMed Central

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402

  16. A reference architecture for integrated EHR in Colombia.

    PubMed

    de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd

    2011-01-01

    The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies. The GCM provides an architectural framework created for analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.

  17. ARINC 818 express for high-speed avionics video and power over coax

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Alexander, Jon

    2012-06-01

    CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3 Gbps (up to 6 Gbps) communication, includes a 21 Mbps return path for bidirectional communication, and provides up to 13 W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus", is a protocol standard developed specifically for high-speed, mission-critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer of CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2 Gbps; beyond 2 Gbps, it has been implemented exclusively over fiber optic links. In many rugged applications a copper interface is still desired, and implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration, dubbed ARINC 818 Express, are presented, showing 3 Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.

  18. Integration of Interactive Interfaces with Intelligent Tutoring Systems: An Implementation

    DTIC Science & Technology

    1993-09-01

    [Garbled OCR from report front matter; recoverable fragments:] AD-A273 869. Integration of Interactive Interfaces with Intelligent Tutoring Systems: An Implementation, Vijay Vasandani and T. Govindaraj; contract N00014-87-K-0482. Cited reference fragment: "Intelligent tutoring systems: At the crossroad of artificial intelligence and education." Ablex Publishing Corp., Norwood, NJ; Goldstein, I. L. (1986).

  19. Using hub technology to facilitate information system integration in a health-care enterprise.

    PubMed

    Gendler, S M; Friedman, B A; Henricks, W H

    1996-04-01

    The deployment and maintenance of multiple point-to-point interfaces between a clinical information system, such as a laboratory information system, and other systems within a healthcare enterprise is expensive and time consuming. Moreover, the demand for such interfaces is increasing as hospitals consolidate and clinical laboratories participate in the development of regional laboratory networks and create host-to-host links with laboratory outreach clients. An interface engine, also called a hub, is an evolving technology that could replace multiple point-to-point interfaces from a laboratory information system with a single interface to the hub, preferably HL7 based. The hub then routes and translates laboratory information to other systems within the enterprise. Changes in application systems in an enterprise where a centralized interface engine has been implemented then amount to thorough analysis, an update of the enterprise's data dictionary, purchase of a single new vendor-supported interface, and table-based parameter changes on the hub. Two other features of an interface engine, support for structured query language and information store-and-forward, will facilitate the development of clinical data repositories and provide flexibility when interacting with other host systems. This article describes the advantages and disadvantages of an interface engine and lists some problems not solved by the technology. Finally, early developmental experience with an interface engine at the University of Michigan Medical Center and the benefits of the project on system integration efforts are described, not the least of which has been the enthusiastic adoption of the HL7 standard for all future interface projects.

  20. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
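    As a point of reference for the comparison above, a standard level-set step is simple to sketch: transport a signed-distance function and locate the interface at the subgrid level by interpolation. The 1D scheme below (first-order upwind, unit velocity, all grid parameters invented for illustration) is only the baseline method; the GALS strategy of the paper additionally transports gradient information for higher-order subgrid reconstruction.

```python
def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of a level-set function (assumes u > 0)."""
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        new[0] = phi[0] - u * dt          # inflow boundary: |grad phi| = 1 assumed
        for i in range(1, n):
            new[i] = phi[i] - u * dt / dx * (phi[i] - phi[i - 1])
        phi = new
    return phi

def interface_location(phi, dx):
    """Locate the zero crossing of phi at the subgrid level (linear interpolation)."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 < phi[i + 1] or phi[i] >= 0.0 > phi[i + 1]:
            t = phi[i] / (phi[i] - phi[i + 1])
            return (i + t) * dx
    return None

n = 200
dx = 1.0 / n
u, dt, steps = 1.0, 0.5 * dx, 100        # CFL = 0.5
phi0 = [i * dx - 0.3 for i in range(n)]  # interface initially at x = 0.3
phi = advect_level_set(phi0, u, dx, dt, steps)
x_if = interface_location(phi, dx)
# exact interface position: 0.3 + u * steps * dt = 0.55
```

For this linear profile the upwind step is exact, so the subgrid interpolation recovers the interface position to round-off; on curved profiles the first-order scheme smears the field, which is precisely the deficiency GALS targets.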

  1. A diffuse-interface method for two-phase flows with soluble surfactants

    PubMed Central

    Teigen, Knut Erik; Song, Peng; Lowengrub, John; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving soluble surfactants. The incompressible Navier–Stokes equations are solved along with equations for the bulk and interfacial surfactant concentrations. A non-linear equation of state is used to relate the surface tension to the interfacial surfactant concentration. The method is based on the use of a diffuse interface, which allows a simple implementation using standard finite difference or finite element techniques. Here, finite difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Results are presented for a drop in shear flow in both 2D and 3D, and the effect of solubility is discussed. PMID:21218125
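    The nonlinear equation of state coupling surface tension to interfacial surfactant concentration can be illustrated with a Langmuir-type law, a common choice in this literature; the exact form and the constants σ0, β and Γ∞ below are illustrative assumptions, not necessarily those of the paper.

```python
import math

def surface_tension(gamma, sigma0=1.0, beta=0.3, gamma_inf=1.0):
    """Langmuir-type equation of state:
    sigma = sigma0 * (1 + beta * ln(1 - Gamma/Gamma_inf)).
    Surface tension decreases as interfacial surfactant concentration rises."""
    x = gamma / gamma_inf
    assert 0.0 <= x < 1.0, "coverage must stay below the saturation value"
    return sigma0 * (1.0 + beta * math.log(1.0 - x))

clean = surface_tension(0.0)     # clean interface: sigma equals sigma0
loaded = surface_tension(0.5)    # half coverage: reduced tension
```

The logarithmic form captures the sharp drop in tension as coverage approaches saturation, which is what makes a nonlinear (rather than linear) equation of state worthwhile.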

  2. Parallel PAB3D: Experiences with a Prototype in MPI

    NASA Technical Reports Server (NTRS)

    Guerinoni, Fabio; Abdol-Hamid, Khaled S.; Pao, S. Paul

    1998-01-01

    PAB3D is a three-dimensional Navier-Stokes solver that has gained acceptance in the research and industrial communities. It takes as its computational domain a set of disjoint blocks covering the physical domain. This is the first report on the implementation of PAB3D using the Message Passing Interface (MPI), a standard for parallel processing. We discuss briefly the characteristics of the code and define a prototype for testing. The principal data structure used for communication is derived from preprocessing "patching". We describe a simple interface (COMMSYS) for MPI communication, and some general techniques likely to be encountered when working on problems of this nature. Last, we identify levels of improvement over the current version and outline future work.

  3. PIA: An Intuitive Protein Inference Engine with a Web-Based User Interface.

    PubMed

    Uszkoreit, Julian; Maerkens, Alexandra; Perez-Riverol, Yasset; Meyer, Helmut E; Marcus, Katrin; Stephan, Christian; Kohlbacher, Oliver; Eisenacher, Martin

    2015-07-02

    Protein inference connects the peptide spectrum matches (PSMs) obtained from database search engines back to proteins, which are typically at the heart of most proteomics studies. Different search engines yield different PSMs and thus different protein lists. Analysis of results from one or multiple search engines is often hampered by different data exchange formats and lack of convenient and intuitive user interfaces. We present PIA, a flexible software suite for combining PSMs from different search engine runs and turning these into consistent results. PIA can be integrated into proteomics data analysis workflows in several ways. A user-friendly graphical user interface can be run either locally or (e.g., for larger core facilities) from a central server. For automated data processing, stand-alone tools are available. PIA implements several established protein inference algorithms and can combine results from different search engines seamlessly. On several benchmark data sets, we show that PIA can identify a larger number of proteins at the same protein FDR when compared to that using inference based on a single search engine. PIA supports the majority of established search engines and data in the mzIdentML standard format. It is implemented in Java and freely available at https://github.com/mpc-bioinformatics/pia.
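    PIA's actual algorithms are not reproduced here, but the core idea of parsimony-based protein inference can be sketched as a greedy set cover over peptide evidence (accessions and peptides below are invented; real engines such as PIA additionally weigh PSM scores, FDR and protein grouping):

```python
def infer_proteins(protein_to_peptides):
    """Greedy Occam's-razor inference: choose a small set of proteins that
    together explain all observed peptides (a classic parsimony heuristic)."""
    unexplained = set().union(*protein_to_peptides.values())
    inferred = []
    while unexplained:
        # pick the protein explaining the most still-unexplained peptides
        best = max(protein_to_peptides,
                   key=lambda p: len(protein_to_peptides[p] & unexplained))
        gained = protein_to_peptides[best] & unexplained
        if not gained:
            break
        inferred.append(best)
        unexplained -= gained
    return inferred

# hypothetical PSM evidence: peptides observed per protein accession
evidence = {
    "P1": {"pepA", "pepB", "pepC"},
    "P2": {"pepB"},            # subset protein: explained by P1, not needed
    "P3": {"pepD"},
}
proteins = infer_proteins(evidence)
```

Here the subset protein "P2" is dropped because its only peptide is already explained, mirroring how parsimony-based inference shrinks the reported protein list.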

  4. Reconfigurable Processing Module

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin; Hodson, Robert; Jones, Robert; Williams, John

    2005-01-01

    To accommodate a wide spectrum of applications and technologies, NASA's Exploration Systems Mission Directorate has called for reconfigurable and modular technologies to support future missions to the Moon and Mars. In response, Langley Research Center is leading a program entitled Reconfigurable Scaleable Computing (RSC) that is centered on the development of FPGA-based computing resources in a stackable form factor. This paper details the architecture and implementation of the Reconfigurable Processing Module (RPM), which is the key element of the RSC system. The RPM is an FPGA-based, space-qualified printed circuit assembly leveraging terrestrial/commercial design standards in the space applications domain. The form factor is similar to, and backwards compatible with, the PCI-104 standard, utilizing only the PCI interface. The size is expanded to accommodate the required functionality while remaining more than 30% smaller than a 3U CompactPCI card, and without the overhead of a backplane. The architecture is built around two FPGA devices, one hosting the PCI and memory interfaces and another hosting mission application resources, connected by a high-speed data bus. The PCI-interface FPGA provides access via the PCI bus to onboard SDRAM, flash PROM, and the application resources, supporting both configuration management and runtime interaction. The reconfigurable FPGA, referred to as the Application FPGA, or simply "the application", is a radiation-tolerant Xilinx Virtex-4 FX60 hosting custom application-specific logic or soft microprocessor IP. The RPM implements various SEE mitigation techniques including TMR, EDAC, and configuration scrubbing of the reconfigurable FPGA. Prototype hardware and formal modeling techniques are used to explore the performability trade space. These models provide a novel way to calculate quality-of-service performance measures while simultaneously considering fault-related behavior due to SEE-induced soft errors.

  5. DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.

    PubMed

    Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques

    2008-09-08

    Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.

  6. Integrating Genomic Resources with Electronic Health Records using the HL7 Infobutton Standard

    PubMed Central

    Overby, Casey Lynnette; Del Fiol, Guilherme; Rubinstein, Wendy S.; Maglott, Donna R.; Nelson, Tristan H.; Milosavljevic, Aleksandar; Martin, Christa L.; Goehringer, Scott R.; Freimuth, Robert R.; Williams, Marc S.

    2016-01-01

    Summary Background The Clinical Genome Resource (ClinGen) Electronic Health Record (EHR) Workgroup aims to integrate ClinGen resources with EHRs. A promising option to enable this integration is through the Health Level Seven (HL7) Infobutton Standard. EHR systems that are certified according to the US Meaningful Use program provide HL7-compliant infobutton capabilities, which can be leveraged to support clinical decision-making in genomics. Objectives To integrate genomic knowledge resources using the HL7 infobutton standard. Two tactics to achieve this objective were: (1) creating an HL7-compliant search interface for ClinGen, and (2) proposing guidance for genomic resources on achieving HL7 Infobutton standard accessibility and compliance. Methods We built a search interface utilizing OpenInfobutton, an open source reference implementation of the HL7 Infobutton standard. ClinGen resources were assessed for readiness towards HL7 compliance. Finally, based upon our experiences we provide recommendations for publishers seeking to achieve HL7 compliance. Results Eight genomic resources and two sub-resources were integrated with the ClinGen search engine via OpenInfobutton and the HL7 infobutton standard. Resources we assessed have varying levels of readiness towards HL7-compliance. Furthermore, we found that adoption of standard terminologies used by EHR systems is the main gap to achieve compliance. Conclusion Genomic resources can be integrated with EHR systems via the HL7 Infobutton standard using OpenInfobutton. Full compliance of genomic resources with the Infobutton standard would further enhance interoperability with EHR systems. PMID:27579472
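    A context-aware knowledge request under the HL7 Infobutton standard is, in its URL-based form, an HTTP GET whose parameters encode the clinical context. The sketch below uses parameter names from the URL-based implementation guide; the endpoint and concept codes are hypothetical.

```python
from urllib.parse import urlencode, parse_qs, urlparse

def infobutton_request(base_url, code, code_system, display_name, task):
    """Build an HL7 Infobutton (context-aware knowledge request) URL.
    Parameter names follow the HL7 URL-based implementation guide; the
    base URL and the code values passed in below are illustrative only."""
    params = {
        "mainSearchCriteria.v.c": code,          # concept code (e.g., a gene)
        "mainSearchCriteria.v.cs": code_system,  # code system identifier (OID)
        "mainSearchCriteria.v.dn": display_name, # human-readable display name
        "taskContext.c.c": task,                 # clinical task being performed
        "knowledgeResponseType": "application/json",
    }
    return base_url + "?" + urlencode(params)

url = infobutton_request(
    "https://example.org/infobutton",            # hypothetical endpoint
    code="1956",                                 # hypothetical concept code
    code_system="2.16.840.1.113883.6.1",         # hypothetical code-system OID
    display_name="BRCA2",
    task="PROBLISTREV")
```

The point made in the abstract follows directly: a resource is "HL7-compliant" to the extent that it can answer such a GET, which in turn requires it to understand the standard terminologies carried in these parameters.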

  7. Curvature computation in volume-of-fluid method based on point-cloud sampling

    NASA Astrophysics Data System (ADS)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to computing interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interfacial tension force estimates, often resulting in inaccurate results for interfacial-tension-dominated flows. Many techniques have been presented over the years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM®, extending its standard VOF implementation, the interFoam solver.
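    The geometric flavor of point-based curvature estimation can be conveyed with the three-point (Menger) curvature of the circle through neighboring cloud points, κ = 4A/(abc); this is a drastic simplification of the paper's estimator, and the sample points are arbitrary.

```python
import math

def menger_curvature(p, q, r):
    """Curvature of the circle through three 2D points: kappa = 4*Area/(a*b*c)."""
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    # twice the triangle area via the cross product (absolute value)
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return 2.0 * area2 / (a * b * c)   # 4A/(abc) with area2 = 2A

# three points sampled from a circle of radius 2 -> expected curvature 1/2
R = 2.0
pts = [(R * math.cos(t), R * math.sin(t)) for t in (0.1, 0.7, 1.4)]
kappa = menger_curvature(*pts)
```

Because the estimate works directly on point coordinates, it sidesteps the abrupt volume-fraction jumps that plague finite-difference curvature in raw VOF fields, which is the motivation for the cloud-based approach above.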

  8. USDI DCS technical support: Mississippi Test Facility

    NASA Technical Reports Server (NTRS)

    Preble, D. M.

    1975-01-01

    The objective of the technical support effort is to provide hardware and data processing support to DCS users so that applications of the system may be implemented simply and effectively. Technical support at the Mississippi Test Facility (MTF) is concerned primarily with on-site hardware. The first objective of the DCP hardware support was to ensure that standard measuring apparatus and techniques used by the USGS could be adapted to the DCS. The second objective was to standardize the miscellaneous variety of parameters into a standard instrument set. The third objective was to provide the necessary accessories to simplify the use, and complement the capabilities, of the DCP. The standard USGS sites have been interfaced and are presently operating. These sites are stream-gauge, ground-water-level and line-operated water-quality sites. Evapotranspiration, meteorological and battery-operated water-quality sites are planned for DCP operation in the near future. Three accessories under test or development are the Chu antenna, a solar power supply and an add-on memory. The DCP has proven relatively easy to interface with many monitors. The large antenna is awkward to install and transport. The DCS has met the original requirements well; it has proven, and continues to prove, that an operational satellite-based data collection system is feasible.

  9. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
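    The Richardson-Lucy iteration at the heart of such restorations is compact enough to sketch in 1D with a circular blur (the kernel and test signal are toy choices, unrelated to the HST data or the i860 implementation):

```python
def conv_circ(x, k):
    """Circular convolution of signal x with a centered odd-length kernel k."""
    n, h = len(x), len(k) // 2
    return [sum(k[j] * x[(i + j - h) % n] for j in range(len(k)))
            for i in range(n)]

def richardson_lucy(obs, psf, iters):
    """Multiplicative Richardson-Lucy update:
    est <- est * conv(psf_flipped, obs / conv(psf, est))."""
    psf_flip = psf[::-1]
    est = [sum(obs) / len(obs)] * len(obs)     # flat, strictly positive start
    for _ in range(iters):
        blur = conv_circ(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(obs, blur)]
        corr = conv_circ(ratio, psf_flip)      # adjoint of the blur operator
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]            # normalized toy blur kernel
truth = [0.0] * 16
truth[5] = 1.0                     # a single point source
obs = conv_circ(truth, psf)        # blurred "observation"
est = richardson_lucy(obs, psf, 200)
```

The update is pure multiply-and-convolve, which is why the algorithm maps so naturally onto vector hardware such as the i860 board described above.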

  10. An accelerated forth data-acquisition system

    NASA Technical Reports Server (NTRS)

    Bowhill, S. A.; Rennier, A. D.

    1986-01-01

    A new data acquisition system was put into operation at Urbana in August 1984. It uses a standard Apple II microcomputer with 48 kB of RAM and a standard 5 1/4 inch floppy disk. Design criteria for the system are given. The system was implemented using fig-FORTH, a threaded interpretive language which permits easy interfacing to machine code. The throughput of this system is better by a factor of 6 than that of the PDP-15 minicomputer system previously used; it also adds a real-time display feature and provides the data in a much more convenient form. The features which contribute to this improved performance are listed.

  11. Implementation and Evaluation of Four Interoperable Open Standards for the Internet of Things

    PubMed Central

    Jazayeri, Mohammad Ali; Liang, Steve H. L.; Huang, Chih-Yuan

    2015-01-01

    Recently, researchers have been focusing on a new use of the Internet called the Internet of Things (IoT), in which enabled electronic devices can be remotely accessed over the Internet. As the realization of the IoT concept is still in its early stages, manufacturers of Internet-connected devices and IoT web service providers are defining their own proprietary protocols based on their targeted applications. Consequently, the IoT is heterogeneous in terms of hardware capabilities and communication protocols. Addressing these heterogeneities by following open standards is a necessary step toward communicating with various IoT devices. In this research, we assess the feasibility of applying existing open standards on resource-constrained IoT devices. The standard protocols developed in this research are OGC PUCK over Bluetooth, TinySOS, SOS over CoAP, and the OGC SensorThings API. We believe that by hosting open standard protocols on IoT devices, not only do the devices become self-describable, self-contained, and interoperable, but innovative applications can also be easily developed with standardized interfaces. In addition, we use memory consumption, request message size, response message size, and response latency to benchmark the efficiency of the implemented protocols. In all, this research presents and evaluates standards-based solutions to better understand the feasibility of applying existing standards to the IoT vision. PMID:26402683

  12. Open source data assimilation framework for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions. The basic principle is to incorporate measurement information into a model in order to reduce model error. Great strides have been made in assimilating traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated into hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (in time and space) and poorly determined system uncertainty. It is therefore useful to build on a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break DA down into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models capable of all these tasks already exists: OpenMI. OpenMI is an open-source standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data at runtime, thus facilitating interactions between models and data sources. The interface is flexible enough that models can interact even if they are coded in different languages, represent processes from different domains or have different spatial and temporal resolutions. An open-source framework that bridges OpenMI and OpenDA is presented. The framework provides a generic and easy means for any OpenMI-compliant model to assimilate observation measurements. An example test case is presented using MikeSHE, an OpenMI-compliant, fully coupled, integrated hydrological model that can accurately simulate the feedback dynamics of overland flow, the unsaturated zone and the saturated zone.
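    The interaction pattern described above, create and propagate a model instance, get its state, assimilate, and set the corrected state back, can be sketched with a toy scalar Kalman analysis step (the model dynamics, observation and error variances are invented; OpenDA and OpenMI define far richer interfaces):

```python
class ToyModel:
    """Minimal stand-in for an OpenMI-style model instance exposing
    propagate / get / set operations on its state."""
    def __init__(self, state):
        self.state = state
    def propagate(self, dt, decay=0.1, forcing=1.0):
        self.state += dt * (forcing - decay * self.state)
    def get_values(self):
        return self.state
    def set_values(self, value):
        self.state = value

def kalman_analysis(forecast, p_f, obs, r):
    """Scalar Kalman update: move the forecast toward the observation,
    weighted by forecast (p_f) and observation (r) error variances."""
    gain = p_f / (p_f + r)
    analysis = forecast + gain * (obs - forecast)
    p_a = (1.0 - gain) * p_f          # reduced posterior variance
    return analysis, p_a

model = ToyModel(state=0.0)
model.propagate(dt=1.0)               # forecast step
x_f = model.get_values()              # read the forecast state
x_a, p_a = kalman_analysis(x_f, p_f=1.0, obs=2.0, r=1.0)
model.set_values(x_a)                 # write the analysis back into the model
```

The framework's value is that the `get_values`/`set_values` handshake is the only contract the assimilation code needs, so the same update applies to any compliant model.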

  13. The domain interface method: a general-purpose non-intrusive technique for non-conforming domain decomposition problems

    NASA Astrophysics Data System (ADS)

    Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.

    2016-04-01

    A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
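    The dual assembly described above reduces, in the smallest possible case, to a saddle-point system in which the Lagrange multiplier nullifies the interface gap. A sketch with two single-DOF "subdomains" (springs) follows; stiffnesses and loads are arbitrary illustrative values.

```python
def solve(A, b):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Two subdomains with one DOF each (stiffnesses k1, k2, loads f1, f2);
# the multiplier lam enforces the interface constraint u1 - u2 = 0:
#   [ k1   0   1 ] [u1 ]   [f1]
#   [  0  k2  -1 ] [u2 ] = [f2]
#   [  1  -1   0 ] [lam]   [ 0]
k1, k2, f1, f2 = 2.0, 3.0, 4.0, 1.0
u1, u2, lam = solve([[k1, 0, 1], [0, k2, -1], [1, -1, 0]], [f1, f2, 0])
# with the gap nullified, u1 = u2 = (f1 + f2)/(k1 + k2) = 1.0
```

The same block structure, subdomain stiffness blocks bordered by a constraint block, carries over to the full non-conforming case, where the constraint rows come from the discretized fictitious interface.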

  14. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions

    PubMed Central

    Djurfeldt, Mikael; Davison, Andrew P.; Eppler, Jochen M.

    2014-01-01

    Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface, and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface. PMID:24795620
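    The division of labor in such an interface, a library-side generator that describes connectivity and a simulator side that merely iterates over it, can be sketched as follows (class and parameter names are invented, not the actual connection generator or NEST API):

```python
import random

class ConnectionGenerator:
    """Library side: describes connectivity without knowing the simulator.
    Yields (source, target, weight) tuples; a toy stand-in for the real
    connection generator interface, which also negotiates value sets."""
    def __init__(self, n_pre, n_post, p, weight, seed=42):
        self.n_pre, self.n_post = n_pre, n_post
        self.p, self.weight, self.seed = p, weight, seed
    def __iter__(self):
        rng = random.Random(self.seed)    # seeded for reproducibility
        for s in range(self.n_pre):
            for t in range(self.n_post):
                if rng.random() < self.p:
                    yield s, t, self.weight

def build_network(generator):
    """Simulator side: consumes any generator and builds its own structures."""
    adjacency = {}
    for s, t, w in generator:
        adjacency.setdefault(s, []).append((t, w))
    return adjacency

gen = ConnectionGenerator(n_pre=10, n_post=10, p=0.3, weight=0.5)
net = build_network(gen)
n_connections = sum(len(v) for v in net.values())
```

Because the simulator only depends on the iteration protocol, one connectivity library can be swapped for another without touching simulator code, which is exactly the decoupling the abstract describes.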

  15. Standard interface: Twin-coaxial converter

    NASA Technical Reports Server (NTRS)

    Lushbaugh, W. A.

    1976-01-01

    The network operations control center standard interface has been adopted as a standard computer interface for all future minicomputer based subsystem development for the Deep Space Network. Discussed is an intercomputer communications link using a pair of coaxial cables. This unit is capable of transmitting and receiving digital information at distances up to 600 m with complete ground isolation between the communicating devices. A converter is described that allows a computer equipped with the standard interface to use the twin coaxial link.

  16. A contact layer element for large deformations

    NASA Astrophysics Data System (ADS)

    Weißenfels, C.; Wriggers, P.

    2015-05-01

    In many contact situations the material behavior of one contact member strongly influences the force acting between the two bodies. Unfortunately, standard friction models cannot reproduce all of these material effects at the contact layer, and continuum interface elements are often used instead. These elements are intrinsically tied to the fixed grid and hence cannot be used in large-sliding simulations. Owing to the shortcomings of the standard contact formulations and of the interface elements, a new type of contact layer element is developed in this work. The advantages of this element are the direct incorporation of continuum models into the contact formulation and its applicability to arbitrarily large deformations. By establishing a relation between continuum and contact kinematics based on the solid-shell concept, the new contact element emerges as a natural extension of the standard contact formulations into 3D. Two examples show that, using this contact layer element, the continuum behavior can be exactly reproduced at the contact surface even in large-sliding situations. For the discretization of the new contact element the mortar method is chosen as an example, but the element can be combined with all kinds of contact formulations.

  17. Use of a Microprocessor to Implement an ADCCP Protocol (Federal Std-1003) Operating in the Unbalanced Normal Mode.

    DTIC Science & Technology

    1980-05-01

    [Garbled OCR; recoverable fragments:] ... programming for the unbalanced normal class of procedures in accordance with Federal Standard 1003, "Telecommunications: Synchronous Bit Oriented Data Link Control Procedures" ... and the higher-level user. The solution to the producer/consumer problem involves the use of PASS and SIGNAL primitives and event variables or semaphores. The event variables have been defined for the LS-microprocessor interface as part of the internal registers that are included in the F6856.

  18. NASA Docking System (NDS) Technical Integration Meeting

    NASA Technical Reports Server (NTRS)

    Lewis, James L.

    2010-01-01

    This slide presentation reviews the NASA Docking System (NDS), NASA's implementation of the International Docking System Standard (IDSS). The goal of the NDS is to build on proven technologies previously demonstrated in flight and to advance the state of the art of docking systems by incorporating Low Impact Docking System (LIDS) technology into the NDS. A hardware demonstration was included in the meeting, and there was discussion of software, the NDS major system interfaces, integration information, schedule, and future upgrades.

  19. Graphics processing unit accelerated phase field dislocation dynamics: Application to bi-metallic interfaces

    DOE PAGES

    Eghtesad, Adnan; Germaschewski, Kai; Beyerlein, Irene J.; ...

    2017-10-14

    We present the first high-performance computing implementation of the meso-scale phase field dislocation dynamics (PFDD) model on a graphics processing unit (GPU)-based platform. The implementation takes advantage of the portable OpenACC standard directive pragmas along with Nvidia's compute unified device architecture (CUDA) fast Fourier transform (FFT) library, CUFFT, to execute the FFT computations within the PFDD formulation on the same GPU platform. The overall implementation is termed ACCPFDD-CUFFT. The package is entirely performance portable due to the use of OpenACC-CUDA interoperability, in which calls to CUDA functions are replaced with OpenACC data regions for a host central processing unit (CPU) and device (GPU). A comprehensive benchmark study has been conducted, which compares a number of FFT routines: the Numerical Recipes FFT (FOURN), the Fastest Fourier Transform in the West (FFTW), and CUFFT. The last exploits the advantages of the GPU hardware for FFT calculations. The novel ACCPFDD-CUFFT implementation is verified using the analytical solutions for the stress field around an infinite edge dislocation and subsequently applied to simulate the interaction and motion of dislocations through a bi-phase copper-nickel (Cu–Ni) interface. It is demonstrated that the ACCPFDD-CUFFT implementation on a single TESLA K80 GPU offers a 27.6X speedup relative to the serial version and a 5X speedup relative to the 22-core Intel Xeon CPU E5-2699 v4 @ 2.20 GHz version of the code.
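    The verification logic used in such benchmark studies, checking a fast transform against a direct DFT, can be reproduced in miniature (pure Python, standing in for the FOURN/FFTW/CUFFT codes compared in the paper):

```python
import cmath

def dft(x):
    """Direct O(n^2) discrete Fourier transform, used as the reference."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

signal = [complex((i * 7) % 5, (i * 3) % 4) for i in range(64)]  # fixed test vector
err = max(abs(a - b) for a, b in zip(fft(signal), dft(signal)))
```

Agreement between the O(n log n) transform and the O(n^2) reference is the same correctness check that precedes any timing comparison across FFT back ends.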

  1. E-SMART system for in-situ detection of environmental contaminants. Quarterly technical progress report, July--September 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-10-01

    General Atomics (GA) leads a team of industrial, academic, and government organizations to develop the Environmental Systems Management, Analysis and Reporting neTwork (E-SMART) for the Defense Advanced Research Projects Agency (DARPA), by way of this Technology Reinvestment Project (TRP). E-SMART defines a standard by which networks of smart sensing, sampling, and control devices can interoperate. E-SMART is intended to be an open standard, available to any equipment manufacturer. The user will be provided a standard platform on which a site-specific monitoring plan can be implemented using sensors and actuators from various manufacturers and upgraded as new monitoring devices become commercially available. This project will further develop and advance the E-SMART standardized network protocol to include new sensors, sampling systems, and graphical user interfaces.
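The interoperability idea described above, devices from different manufacturers presenting one standard interface to the monitoring platform, can be sketched with an abstract base class. The vendor classes and readings below are invented for illustration; this is not the E-SMART protocol itself:

```python
from abc import ABC, abstractmethod

class SmartDevice(ABC):
    """Hypothetical standard device interface: any conforming sensor,
    regardless of manufacturer, must describe itself and produce a reading."""
    @abstractmethod
    def describe(self) -> dict: ...
    @abstractmethod
    def read(self) -> float: ...

class VendorAPHSensor(SmartDevice):
    def describe(self): return {"vendor": "A", "quantity": "pH"}
    def read(self): return 7.1

class VendorBVOCSensor(SmartDevice):
    def describe(self): return {"vendor": "B", "quantity": "VOC ppm"}
    def read(self): return 0.35

def poll(network):
    """The site monitoring plan sees one uniform interface for every device."""
    return [(d.describe()["quantity"], d.read()) for d in network]
```

Adding a new manufacturer's sensor then requires only a new subclass, with no change to the monitoring code.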

  2. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    NASA Technical Reports Server (NTRS)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

    The design and implementation of interface units for high-speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high-speed communications have emerged. The approach taken to the design of a high-speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.

  3. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
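The fault-masking scheme described above, majority voting over redundant results followed by removal of dissenting processors from service, can be sketched as follows. Modelling processors as plain callables is a simplification of SIFT's replicated tasks:

```python
from collections import Counter

def majority_vote(results):
    """Mask a faulty result by majority voting over identical computations."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many faults")
    return value

def run_replicated(processors, task):
    """Run the same task on every redundant processor, vote on the answer,
    and report which processors dissented so they can be removed from service."""
    results = [p(task) for p in processors]
    good = majority_vote(results)
    faulty = [i for i, r in enumerate(results) if r != good]
    return good, faulty
```

A subsequent round would simply call `run_replicated` with the faulty indices excluded, which is the reassignment step the abstract describes.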

  4. Mass storage system reference model, Version 4

    NASA Technical Reports Server (NTRS)

    Coleman, Sam (Editor); Miller, Steve (Editor)

    1993-01-01

    The high-level abstractions that underlie modern storage systems are identified. The information to generate the model was collected from major practitioners who have built and operated large storage facilities, and represents a distillation of the wisdom they have acquired over the years. The model provides a common terminology and set of concepts to allow existing systems to be examined and new systems to be discussed and built. It is intended that the model and the interfaces identified from it will allow and encourage vendors to develop mutually-compatible storage components that can be combined to form integrated storage systems and services. The reference model presents an abstract view of the concepts and organization of storage systems. From this abstraction will come the identification of the interfaces and modules that will be used in IEEE storage system standards. The model is not yet suitable as a standard; it does not contain implementation decisions, such as how abstract objects should be broken up into software modules or how software modules should be mapped to hosts; it does not give policy specifications, such as when files should be migrated; it does not describe how the abstract objects should be used or connected; and it does not refer to specific hardware components. In particular, it does not fully specify the interfaces.

  5. Oceanids command and control (C2) data system - Marine autonomous systems data for vehicle piloting, scientific data users, operational data assimilation, and big data

    NASA Astrophysics Data System (ADS)

    Buck, J. J. H.; Phillips, A.; Lorenzo, A.; Kokkinaki, A.; Hearn, M.; Gardner, T.; Thorne, K.

    2017-12-01

    The National Oceanography Centre (NOC) operates a fleet of approximately 36 autonomous marine platforms including submarine gliders, autonomous underwater vehicles, and autonomous surface vehicles. Each platform effectively has the capability to observe the ocean and collect data akin to a small research vessel. This is creating a growth in data volumes and complexity while the amount of resource available to manage data remains static. The Oceanids Command and Control (C2) project aims to solve these issues by fully automating data archival, processing, and dissemination. The data architecture being implemented jointly by NOC and the Scottish Association for Marine Science (SAMS) includes a single Application Programming Interface (API) gateway to handle authentication, forwarding, and delivery of both metadata and data. Technicians and principal investigators will enter expedition data prior to deployment of vehicles, enabling automated data processing once vehicles are deployed. The system will support automated metadata acquisition from platforms as this technology moves towards operational implementation. The metadata exposure to the web builds on a prototype developed by the European Commission supported SenseOCEAN project and uses open standards including World Wide Web Consortium (W3C) RDF/XML, the Semantic Sensor Network ontology, and the Open Geospatial Consortium (OGC) SensorML standard. Data will be delivered in the marine domain Everyone's Glider Observatory (EGO) format and OGC Observations and Measurements. Additional formats will be served by implementation of endpoints such as the NOAA ERDDAP tool. This standardised data delivery via the API gateway enables timely near-real-time data to be served to Oceanids users, BODC users, operational users, and big data systems. The use of open standards will also enable web interfaces to be rapidly built on the API gateway and delivery to European research infrastructures that include aligned reference models for data infrastructure.
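The single API-gateway pattern described above (authenticate, then forward to a metadata or data backend) might be sketched as below. The route scheme, token check, and backend names are hypothetical, not the actual Oceanids C2 interface:

```python
# Hypothetical miniature of an API gateway: one entry point that checks
# credentials and forwards to the appropriate backend service.
BACKENDS = {
    "metadata": lambda platform: {"platform": platform, "format": "SensorML"},
    "data":     lambda platform: {"platform": platform, "format": "EGO NetCDF"},
}
VALID_TOKENS = {"secret-token"}   # placeholder auth store

def gateway(path, token):
    """Handle a request like '/v1/metadata/glider-42'; returns (status, body)."""
    if token not in VALID_TOKENS:
        return 401, {"error": "unauthorized"}
    try:
        _version, service, platform = path.strip("/").split("/")
    except ValueError:
        return 404, {"error": "bad path"}
    handler = BACKENDS.get(service)
    if handler is None:
        return 404, {"error": "unknown service"}
    return 200, handler(platform)
```

The value of the pattern is that authentication and routing live in one place, so new backends (e.g. an ERDDAP endpoint) are added by extending the routing table only.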

  6. Cooperative Data Sharing: Simple Support for Clusters of SMP Nodes

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    Libraries like PVM and MPI send typed messages to allow for heterogeneous cluster computing. Lower-level libraries, such as GAM, provide more efficient access to communication by removing the need to copy messages between the interface and user space in some cases. Still lower-level interfaces, such as UNET, get right down to the hardware level to provide maximum performance. However, these are all still interfaces for passing messages from one process to another, and have limited utility in a shared-memory environment, due primarily to the fact that message passing is just another term for copying. This drawback is made more pertinent by today's hybrid architectures (e.g. clusters of SMPs), where it is difficult to know beforehand whether two communicating processes will share memory. As a result, even portable language tools (like HPF compilers) must either map all interprocess communication into message passing, with the accompanying performance degradation in shared-memory environments, or they must check each communication at run-time and implement the shared-memory case separately for efficiency. Cooperative Data Sharing (CDS) is a single user-level API which abstracts all communication between processes into the sharing and access coordination of memory regions, in a model which might be described as "distributed shared messages" or "large-grain distributed shared memory". As a result, the user programs to a simple latency-tolerant abstract communication specification which can be mapped efficiently to either a shared-memory or message-passing based run-time system, depending upon the available architecture. Unlike some distributed shared memory interfaces, the user still has complete control over the assignment of data to processors, the forwarding of data to its next likely destination, and the queuing of data until it is needed, so even the relatively high latency present in clusters can be accommodated. 
    CDS does not require special use of an MMU, which can add overhead to some DSM systems, and does not require an SPMD programming model. Unlike some message-passing interfaces, CDS allows the user to implement efficient demand-driven applications where processes must "fight" over data, and does not perform copying if processes share memory and do not attempt concurrent writes. CDS also supports heterogeneous computing, dynamic process creation, handlers, and a very simple thread-arbitration mechanism. Additional support for array subsections is currently being considered. The CDS1 API, which forms the kernel of CDS, is built primarily upon only two communication primitives, one process-initiation primitive, and some data translation (and marshalling) routines, memory allocation routines, and priority control routines. The entire current collection of 28 routines provides enough functionality to implement most (or all) of MPI 1 and 2, which has a much larger interface consisting of hundreds of routines. Still, the API is small enough to consider integrating into standard OS interfaces for handling inter-process communication in a network-independent way. This approach would also help to solve many of the problems plaguing other higher-level standards such as MPI and PVM which must, in some cases, "play OS" to adequately address progress and process control issues. The CDS2 API, a higher level of interface roughly equivalent in functionality to MPI and to be built entirely upon CDS1, is still being designed. It is intended to add support for the equivalent of communicators, reduction and other collective operations, process topologies, additional support for process creation, and some automatic memory management. CDS2 will not exactly match MPI, because the copy-free semantics of communication from CDS1 will be supported. CDS2 application programs will also be free to carefully use CDS1. 
    CDS1 has been implemented on networks of workstations running unmodified Unix-based operating systems, using UDP/IP and vendor-supplied high-performance locks. Although its inter-node performance is currently unimpressive due to rudimentary implementation technique, it even now outperforms highly-optimized MPI implementations on intra-node communication due to its support for non-copy communication. The similarity of the CDS1 architecture to that of other projects such as UNET and TRAP suggests that the inter-node performance can be increased significantly to surpass MPI or PVM, and it may be possible to migrate some of its functionality to communication controllers.
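The central abstraction described above, communication as the sharing and access coordination of memory regions rather than copying, might be sketched as follows. This is a hypothetical miniature, not the real CDS1 API; the point illustrated is that in a shared-memory setting `share` hands over a reference, so no copy is ever made:

```python
from collections import defaultdict, deque

class Region:
    """A shareable memory region holding user data."""
    def __init__(self, data):
        self.data = data

class CdsRuntime:
    """Hypothetical miniature of 'distributed shared messages' semantics:
    sharing a region enqueues a reference for the destination process, and
    the consumer acquires it when (and only when) it is needed."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def share(self, dest, region):
        # Zero-copy within shared memory: only a reference changes hands.
        self.queues[dest].append(region)

    def acquire(self, proc):
        # Returns the next queued region for this process, or None.
        q = self.queues[proc]
        return q.popleft() if q else None
```

A message-passing mapping of the same two calls would serialize `region.data` over a transport instead, leaving the application code unchanged.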

  7. User perception and experience of the introduction of a novel critical care patient viewer in the ICU setting.

    PubMed

    Dziadzko, Mikhail A; Herasevich, Vitaly; Sen, Ayan; Pickering, Brian W; Knight, Ann-Marie A; Moreno Franco, Pablo

    2016-04-01

    Failure to rapidly identify high-value information due to inappropriate output may alter user acceptance and satisfaction. The information needs of different intensive care unit (ICU) providers are not the same, which can obstruct successful implementation of electronic medical record (EMR) systems. We evaluated the implementation experience and satisfaction of providers using a novel EMR interface, based on the information needs of ICU providers, in the context of an existing EMR system. This before-after study was performed in the ICU setting at two tertiary care hospitals from October 2013 through November 2014. Surveys were delivered to ICU providers before and after implementation of the novel EMR interface. Overall satisfaction and acceptance were reported for both interfaces. A total of 246 before (existing EMR) and 115 after (existing EMR plus novel EMR interface) surveys were analyzed; 14% of respondents were prescribers and 86% were non-prescribers. Non-prescribers were more satisfied with the existing EMR, whereas prescribers were more satisfied with the novel EMR interface. Both groups reported easier data gathering, routine tasks and rounding, and fostering of teamwork with the novel EMR interface. This interface was the primary tool for 18% of respondents after implementation, and 73% of respondents intended to use it further. Non-prescribers reported an intention to use this novel interface as their primary tool for information gathering. Compliance with and acceptance of the new system is not related to previous duration of work in the ICU, but improves with the length of EMR interface usage. Task-specific and role-specific considerations are necessary for the design and successful implementation of an EMR interface. Differences in user workflows lead to disparities in how EMR data are used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena

    2013-06-01

    Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
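The decoder that the SNN approximates is a standard Kalman filter; a scalar floating-point version, of the kind such an SNN implementation would be compared against, looks like this (the noise constants are illustrative, not the paper's values):

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=1e-4, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and variance; z: new measurement.
    A: state transition, H: observation model, Q/R: process/measurement noise."""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update with measurement z
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

The BMI decoder in the paper is the vector/matrix version of exactly this recursion, with neural firing rates as `z` and cursor kinematics as `x`; the NEF maps those matrix operations onto spiking-neuron populations.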

  9. The tsunami service bus, an integration platform for heterogeneous sensor systems

    NASA Astrophysics Data System (ADS)

    Haener, R.; Waechter, J.; Kriegel, U.; Fleischer, J.; Mueller, S.

    2009-04-01

    1. INTRODUCTION Early warning systems are long-living and evolving: new sensor systems and types may be developed and deployed, sensors will be replaced or redeployed at other locations, and the functionality of analysis software will be improved. To ensure the continuous operability of such systems, their architecture must be evolution-enabled. From a computer science point of view, an evolution-enabled architecture must fulfill the following criteria: • Encapsulation of data, and of functionality on data, in standardized services; access to proprietary sensor data is only possible via these services. • Loose coupling of system constituents, which can easily be achieved by implementing standardized interfaces. • Location transparency of services, meaning that services can be provided anywhere. • Separation of concerns, that is, breaking a system into distinct features which overlap in functionality as little as possible. A Service Oriented Architecture (SOA), as realized for example in the German Indonesian Tsunami Early Warning System (GITEWS), together with the advantages of functional integration on the basis of services described below, adopts these criteria best. 2. SENSOR INTEGRATION Integration of data from (distributed) data sources is a standard task in computer science. Of the few well-known solution patterns, only functional integration should be considered once the performance and security requirements of early warning systems are taken into account. A precondition for this is that systems are realized in compliance with SOA patterns. Functionality is realized in the form of dedicated components communicating via a service infrastructure. These components provide their functionality in the form of services via standardized and published interfaces, which can be used to access data maintained in, and functionality provided by, dedicated components. Functional integration replaces tight coupling at the data level with a dependency on loosely coupled services. 
    If the interfaces of the service-providing components remain unchanged, components can be maintained and evolved independently of each other, and service functionality as a whole can be reused. In GITEWS the functional integration pattern was adopted by applying the principles of an Enterprise Service Bus (ESB) as a backbone. Four services provided by the so-called Tsunami Service Bus (TSB), which are essential for early warning systems, are realized compliant to services specified within the Sensor Web Enablement (SWE) initiative of the Open Geospatial Consortium (OGC). 3. ARCHITECTURE The integration platform was developed to access proprietary, heterogeneous sensor data and to provide them in a uniform manner for further use. Its core, the TSB, provides both a messaging backbone and messaging interfaces on the basis of a Java Messaging Service (JMS). The logical architecture of GITEWS consists of four independent layers: • A resource layer where physical or virtual sensors as well as data or model storages provide relevant measurement, event, and analysis data. The TSB can utilize any kind of data; in addition to sensors, databases, model data, and processing applications are adopted. SWE specifies encodings both to access and to describe these data in a comprehensive way: 1. Sensor Model Language (SensorML): standardized description of sensors and sensor data. 2. Observations and Measurements (O&M): model and encoding of sensor measurements. • A service layer to collect and conduct data from heterogeneous and proprietary resources and provide them via standardized interfaces. The TSB enables interaction with sensors via the following services: 1. Sensor Observation Service (SOS): standardized access to sensor data. 2. Sensor Planning Service (SPS): controlling of sensors and sensor networks. 3. Sensor Alert Service (SAS): active sending of data if defined events occur. 4. Web Notification Service (WNS): conduction of asynchronous dialogues between services. • An orchestration layer where atomic services are composed and arranged into high-level processes such as a decision support process. One of the outstanding features of service-oriented architectures is the possibility to compose new services from existing ones, which can be done programmatically or via declaration (workflow or process design). This allows, for example, the definition of new warning processes which can be adapted easily to new requirements. • An access layer which may contain graphical user interfaces for decision support, monitoring, or visualization systems. To visualize time series, for example, graphical user interfaces request sensor data simply via the SOS. 4. BENEFIT The integration platform is realized on top of well-known and widely used open source software implementing industrial standards. New sensors can be added easily to the infrastructure. Client components do not need to be adjusted if new sensor types or individuals are added to the system, because they access the sensors via standardized services. With SWE implemented fully compatible to the OGC specification, it is possible to establish the "detection" and integration of sensors via the Web. Thus realizing a system of systems that combines early warning system functionality at different levels of detail (distant early warning systems, monitoring systems, and any sensor system) is feasible.
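The SOS/SAS division of labour described above, pull access to stored observations versus push delivery when a defined event occurs, can be sketched on a toy bus. The class and method names are illustrative, not the OGC SWE interfaces:

```python
class TinyServiceBus:
    """Hypothetical miniature of two SWE-style services on one bus:
    an SOS-like pull interface and an SAS-like push interface."""
    def __init__(self):
        self.observations = {}   # sensor_id -> list of (time, value)
        self.subscribers = []    # callbacks receiving alert notifications

    def get_observation(self, sensor_id):
        """SOS-style: standardized pull access to stored sensor data."""
        return self.observations.get(sensor_id, [])

    def subscribe(self, callback):
        """SAS-style: register interest in alert events."""
        self.subscribers.append(callback)

    def publish(self, sensor_id, time, value, alert_if=lambda v: False):
        """Ingest a measurement; push it to subscribers if it triggers an alert."""
        self.observations.setdefault(sensor_id, []).append((time, value))
        if alert_if(value):
            for cb in self.subscribers:
                cb(sensor_id, time, value)
```

Clients written against `get_observation` never see which proprietary sensor produced the data, which is the loose coupling the abstract argues for.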

  10. SKITTER/implement mechanical interface

    NASA Technical Reports Server (NTRS)

    Cash, John Wilson, III; Cone, Alan E.; Garolera, Frank J.; German, David; Lindabury, David Peter; Luckado, Marshall Cleveland; Murphey, Craig; Rowell, John Bryan; Wilkinson, Brad

    1988-01-01

    SKITTER (Spatial Kinematic Inertial Translatory Tripod Extremity Robot) is a three-legged transport vehicle designed to perform under the unique environment of the moon. The objective of this project was to design a mechanical interface for SKITTER. This mechanical latching interface will allow SKITTER to use a series of implements such as drills, cranes, etc., and perform different tasks on the moon. The design emphasized versatility and detachability; that is, the interface design is the same for all implements, and connection and detachment are simple. After consideration of many alternatives, a system of three identical latches at each of the three interface points was chosen. The latching mechanism satisfies the design constraints because it facilitates connection and detachment. Also, the moving parts are protected from the dusty environment by housing plates.

  11. Diamond turning machine controller implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrard, K.P.; Taylor, L.W.; Knight, B.F.

    The standard controller for a Pneumo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.
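The closed-loop structure implied above, reading the axis position from the interferometer and commanding a correction through the motor controller, can be sketched as a simple proportional servo. The plant model and gain below are invented for illustration and are not the actual controller design:

```python
class Axis:
    """Toy plant: each motor command moves the axis by that amount."""
    def __init__(self):
        self.position = 0.0
    def read(self):          # stands in for the interferometer readout
        return self.position
    def move(self, delta):   # stands in for the analog motor command
        self.position += delta

def control_loop(target, read_position, command_motor, steps=50, gain=0.5):
    """Proportional position servo: measure error, command a scaled correction.
    With 0 < gain < 1 on this plant, the error shrinks geometrically."""
    for _ in range(steps):
        error = target - read_position()
        command_motor(gain * error)
    return read_position()
```

A real controller would add velocity/acceleration terms and run at a fixed servo rate, but the read-compare-command cycle is the same.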

  12. Applying representational state transfer (REST) architecture to archetype-based electronic health record systems

    PubMed Central

    2013-01-01

    Background The openEHR project and the closely related ISO 13606 standard have defined structures supporting the content of Electronic Health Records (EHRs). However, there is not yet any finalized openEHR specification of a service interface to aid application developers in creating, accessing, and storing the EHR content. The aim of this paper is to explore how the Representational State Transfer (REST) architectural style can be used as a basis for a platform-independent, HTTP-based openEHR service interface. Associated benefits and tradeoffs of such a design are also explored. Results The main contribution is the formalization of the openEHR storage, retrieval, and version-handling semantics and related services into an implementable HTTP-based service interface. The modular design makes it possible to prototype, test, replicate, distribute, cache, and load-balance the system using ordinary web technology. Other contributions are approaches to query and retrieval of the EHR content that take caching, logging, and distribution into account. Triggering on EHR change events is also explored. A final contribution is an open source openEHR implementation using the above-mentioned approaches to create LiU EEE, an educational EHR environment intended to help newcomers and developers experiment with and learn about the archetype-based EHR approach and enable rapid prototyping. Conclusions Using REST addressed many architectural concerns in a successful way, but an additional messaging component was needed to address some architectural aspects. Many of our approaches are likely of value to other archetype-based EHR implementations and may contribute to associated service model specifications. PMID:23656624

  13. Applying representational state transfer (REST) architecture to archetype-based electronic health record systems.

    PubMed

    Sundvall, Erik; Nyström, Mikael; Karlsson, Daniel; Eneling, Martin; Chen, Rong; Örman, Håkan

    2013-05-09

    The openEHR project and the closely related ISO 13606 standard have defined structures supporting the content of Electronic Health Records (EHRs). However, there is not yet any finalized openEHR specification of a service interface to aid application developers in creating, accessing, and storing the EHR content. The aim of this paper is to explore how the Representational State Transfer (REST) architectural style can be used as a basis for a platform-independent, HTTP-based openEHR service interface. Associated benefits and tradeoffs of such a design are also explored. The main contribution is the formalization of the openEHR storage, retrieval, and version-handling semantics and related services into an implementable HTTP-based service interface. The modular design makes it possible to prototype, test, replicate, distribute, cache, and load-balance the system using ordinary web technology. Other contributions are approaches to query and retrieval of the EHR content that take caching, logging, and distribution into account. Triggering on EHR change events is also explored. A final contribution is an open source openEHR implementation using the above-mentioned approaches to create LiU EEE, an educational EHR environment intended to help newcomers and developers experiment with and learn about the archetype-based EHR approach and enable rapid prototyping. Using REST addressed many architectural concerns in a successful way, but an additional messaging component was needed to address some architectural aspects. Many of our approaches are likely of value to other archetype-based EHR implementations and may contribute to associated service model specifications.
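The version-handling semantics formalized in the paper, where every write creates a new immutable version and reads can address either the latest or any past version (which is what makes responses cacheable), might be sketched as follows. The store and status codes are illustrative, not the openEHR service specification:

```python
class VersionedStore:
    """Hypothetical REST-style backing store: PUT appends an immutable
    version, GET addresses the latest or a specific version number."""
    def __init__(self):
        self.versions = {}   # ehr_id -> list of immutable compositions

    def put(self, ehr_id, composition):
        """Create a new version; returns the 1-based version number."""
        self.versions.setdefault(ehr_id, []).append(composition)
        return len(self.versions[ehr_id])

    def get(self, ehr_id, version=None):
        """Returns (status, body). A specific version never changes once
        written, so such responses can be cached indefinitely."""
        history = self.versions.get(ehr_id)
        if not history:
            return 404, None
        if version is None:
            return 200, history[-1]            # latest
        if 1 <= version <= len(history):
            return 200, history[version - 1]   # immutable -> cacheable
        return 404, None
```

Mapping this onto HTTP, a URI naming a specific version is a stable resource, while the "latest" URI is the only one whose representation changes over time.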

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welcome, Michael L.; Bell, Christian S.

    GASNet (Global-Address Space Networking) is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages such as UPC and Titanium. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet is designed specifically to support high-performance, portable implementations of global address space languages on modern high-end communication networks. The interface provides the flexibility and extensibility required to express a wide variety of communication patterns without sacrificing performance by imposing large computational overheads in the interface. The design of the GASNet interface is partitioned into two layers to maximize porting ease without sacrificing performance: the lower level is a narrow but very general interface called the GASNet core API; its design is based heavily on Active Messages, and it is implemented directly on top of each individual network architecture. The upper level is a wider and more expressive interface called the GASNet extended API, which provides high-level operations such as remote memory access and various collective operations. This release implements GASNet over MPI, the Quadrics "elan" API, the Myrinet "GM" API, and the "LAPI" interface to the IBM SP switch. A template is provided for adding support for additional network interfaces.
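The Active Message style on which the GASNet core API is based can be sketched as follows: a message names a pre-registered handler index plus arguments, and the receiver dispatches directly to that handler on arrival. The class and method names here are illustrative, not GASNet's actual signatures:

```python
class Endpoint:
    """Toy Active Message endpoint: handlers are registered by index,
    and incoming messages invoke them directly on poll."""
    def __init__(self):
        self.handlers = {}
        self.inbox = []

    def register(self, index, fn):
        self.handlers[index] = fn

    def am_request(self, dest, index, *args):
        # Stands in for network delivery: enqueue (handler index, args).
        dest.inbox.append((index, args))

    def poll(self):
        # Drain the inbox, dispatching each message to its handler.
        while self.inbox:
            index, args = self.inbox.pop(0)
            self.handlers[index](*args)
```

A remote-memory "put", the kind of high-level operation the extended API offers, then becomes just an Active Message whose handler writes a value at an address.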

  15. GIS Technologies For The New Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Docasal, R.; Barbarisi, I.; Rios, C.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; De Marchi, G.; Martinez, S.; Grotheer, E.; Lim, T.; Besse, S.; Heather, D.; Fraga, D.; Barthelemy, M.

    2015-12-01

    Geographical information systems (GIS) are becoming increasingly used for planetary science. GIS are computerised systems for the storage, retrieval, manipulation, analysis, and display of geographically referenced data. Some data stored in the Planetary Science Archive (PSA), for instance a set of Mars Express/Venus Express data, have spatial metadata associated with them. To facilitate users in handling and visualising spatial data in GIS applications, the new PSA should support interoperability with interfaces implementing the standards approved by the Open Geospatial Consortium (OGC). These standards are followed in order to develop open interfaces and encodings that allow data to be exchanged with GIS client applications, well-known examples of which are Google Earth and NASA World Wind, as well as open source tools such as OpenLayers. The technology already exists within PostgreSQL databases to store searchable geometrical data in the form of the PostGIS extension. GeoServer is an existing open source map server; an instance of it has been deployed for the new PSA, using the OGC standards to allow, among others, the sharing, processing, and editing of spatial data through the Web Feature Service (WFS) standard, as well as serving georeferenced map images through the Web Map Service (WMS). The final goal of the new PSA, being developed by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC), is to create an archive which enables science exploitation of ESA's planetary mission datasets. This can be facilitated through the GIS framework, offering interfaces (both web GUIs and scriptable APIs) that can be used more easily and scientifically by the community, and that will also enable the community to build added-value services on top of the PSA.
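A WMS GetMap request of the kind GeoServer answers is just a parameterized URL; the sketch below assembles one using the OGC WMS 1.3.0 parameter names, with a placeholder endpoint and layer name:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, width, height,
                   crs="EPSG:4326", fmt="image/png"):
    """Build a WMS 1.3.0 GetMap request URL.
    bbox is (min1, min2, max1, max2) in the axis order defined by the CRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)
```

Any OGC-compliant client (OpenLayers, NASA World Wind, etc.) issues essentially this request, which is why serving the standard buys interoperability with all of them at once.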

  16. ACR/NEMA Digital Image Interface Standard (An Illustrated Protocol Overview)

    NASA Astrophysics Data System (ADS)

    Lawrence, G. Robert

    1985-09-01

    The American College of Radiologists (ACR) and the National Electrical Manufacturers Association (NEMA) have sponsored a joint standards committee mandated to develop a universal interface standard for the transfer of radiology images among a variety of PACS imaging devices. The resulting standard interface conforms to the ISO/OSI reference model for network protocol layering. The standard interface specifies the lower layers of the reference model (Physical, Data Link, Transport and Session) and implies a requirement for the Network layer should a network be required. The message content has been considered and a flexible message and image format specified. The following imaging modalities are supported by the standard interface: CT (Computed Tomography), DS (Digital Subtraction), NM (Nuclear Medicine), US (Ultrasound), MR (Magnetic Resonance) and DR (Digital Radiology). The following data types are standardized over the transmission interface media: image data, digitized voice, header data, raw data, text reports, graphics and others. This paper consists of text supporting the illustrated protocol data flow. Each layer is treated individually, with particular emphasis on the Data Link layer (frames) and the Transport layer (packets). The discussion utilizes a finite-state sequential machine model for the protocol layers.

  17. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    NASA Astrophysics Data System (ADS)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.

  18. X-Windows Information Sharing Protocol Widget Class

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.

    2006-01-01

    The X-Windows Information Sharing Protocol (ISP) Widget Class ("Class" is used here in the object-oriented-programming sense of the word) was devised to simplify the task of implementing ISP graphical-user-interface (GUI) computer programs. ISP programming tasks require many method calls to identify, query, and interpret the connections and messages exchanged between a client and an ISP server. Most X-Windows GUI programs use widget sets or toolkits to facilitate management of complex objects. The widget standards facilitate construction of toolkits and application programs. The X-Windows ISP Widget Class encapsulates the client side of the ISP programming libraries within the framework of an X-Windows widget. Using the widget framework, X-Windows GUI programs can interact with ISP services in an abstract way and in the same manner as that of other graphical widgets, making it easier to write ISP GUI client programs. Wrapping ISP client services inside a widget framework enables a programmer to treat an ISP server interface as though it were a GUI. Moreover, an alternate subclass could implement another communication protocol in the same sort of widget.

  19. The MeSH translation maintenance system: structure, interface design, and implementation.

    PubMed

    Nelson, Stuart J; Schopen, Michael; Savage, Allan G; Schulman, Jacque-Lynne; Arluk, Natalie

    2004-01-01

    The National Library of Medicine (NLM) produces annual editions of the Medical Subject Headings (MeSH). Translations of MeSH are often done to make the vocabulary useful for non-English users. However, MeSH translators have encountered difficulties with entry vocabulary as they maintain and update their translation. Tracking MeSH changes and updating their translations in a reasonable time frame is cumbersome. NLM has developed and implemented a concept-centered vocabulary maintenance system for MeSH. This system has been extended to create an interlingual database of translations, the MeSH Translation Maintenance System (MTMS). This database allows continual updating of the translations, as well as facilitating tracking of the changes within MeSH from one year to another. The MTMS interface uses a Web-based design with multiple colors and fonts to indicate concepts needing translation or review. Concepts for which there is no exact English equivalent can be added. The system software encourages compliance with the Unicode standard in order to ensure that character sets with native alphabets and full orthography are used consistently.

  20. Hard real-time closed-loop electrophysiology with the Real-Time eXperiment Interface (RTXI)

    PubMed Central

    George, Ansel; Dorval, Alan D.; Christini, David J.

    2017-01-01

    The ability to experimentally perturb biological systems has traditionally been limited to static pre-programmed or operator-controlled protocols. In contrast, real-time control allows dynamic probing of biological systems with perturbations that are computed on-the-fly during experimentation. Real-time control applications for biological research are available; however, these systems are costly and often restrict the flexibility and customization of experimental protocols. The Real-Time eXperiment Interface (RTXI) is an open source software platform for achieving hard real-time data acquisition and closed-loop control in biological experiments while retaining the flexibility needed for experimental settings. RTXI has enabled users to implement complex custom closed-loop protocols in single cell, cell network, animal, and human electrophysiology studies. RTXI is also used as a free and open source, customizable electrophysiology platform in open-loop studies requiring online data acquisition, processing, and visualization. RTXI is easy to install, can be used with an extensive range of external experimentation and data acquisition hardware, and includes standard modules for implementing common electrophysiology protocols. PMID:28557998

  1. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
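The fault-tree-to-Markov-chain reduction that HARP performs can be illustrated on a toy model. The sketch below is not HARP's algorithm, just a hand-built three-state Markov chain for a two-component parallel system, integrated with forward Euler and checked against the closed-form reliability; the failure rate is an illustrative value:

```python
import math

# States of a two-component parallel system: 0 = both components up,
# 1 = one up, 2 = system failed (absorbing). Each component fails at
# rate lam (illustrative value, per hour).
lam = 0.001

def reliability(t, steps=20000):
    """Forward-Euler integration of the Kolmogorov equations dp/dt = p.Q;
    reliability is the probability of not being in the failed state."""
    p0, p1, p2 = 1.0, 0.0, 0.0
    dt = t / steps
    for _ in range(steps):
        d0 = -2.0 * lam * p0
        d1 = 2.0 * lam * p0 - lam * p1
        d2 = lam * p1
        p0, p1, p2 = p0 + dt * d0, p1 + dt * d1, p2 + dt * d2
    return p0 + p1

t = 1000.0
numeric = reliability(t)
# Closed form for two independent components in parallel.
analytic = 1.0 - (1.0 - math.exp(-lam * t)) ** 2
print(numeric, analytic)
```

The two values agree to several decimal places, which is the essential check: the Markov-chain solution reproduces the fault-tree result whenever no sequence dependencies are present.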

  2. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
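Belos's key design point, decoupling the iterative algorithm from the linear-algebra implementation, can be shown in miniature: a conjugate-gradient solver written against nothing but a user-supplied matrix-vector product. This is a plain-Python sketch of the pattern, not Trilinos code:

```python
def cg(matvec, b, tol=1e-10, maxiter=200):
    """Conjugate gradients for SPD systems. The algorithm only ever calls
    matvec(v); it never sees how the matrix is stored, mirroring Belos's
    decoupling of solvers from linear-algebra objects."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual b - A x for x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def tridiag_matvec(v):
    """Matrix-free 1D Laplacian: tridiagonal (-1, 2, -1)."""
    n = len(v)
    out = [2.0 * v[i] for i in range(n)]
    for i in range(n - 1):
        out[i] -= v[i + 1]
        out[i + 1] -= v[i]
    return out

x = cg(tridiag_matvec, [1.0] * 5)
print(x)  # exact solution is [2.5, 4.0, 4.5, 4.0, 2.5]
```

Because the solver depends only on the matvec callable, swapping in a GPU-backed or distributed matrix implementation requires no change to the algorithm, which is precisely the portability argument the abstract makes.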

  3. Higher order QCD predictions for associated Higgs production with anomalous couplings to gauge bosons

    NASA Astrophysics Data System (ADS)

    Mimasu, Ken; Sanz, Verónica; Williams, Ciaran

    2016-08-01

    We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks: one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which electroweak symmetry breaking is manifestly linear and the resulting operators arise through a dimension-six effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher-order pieces. Our results are implemented in the NLO Monte Carlo program MCFM and interfaced to shower Monte Carlos through the POWHEG BOX framework.

  4. Bridging the gap between Hydrologic and Atmospheric communities through a standard based framework

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Salas, F.; Maidment, D. R.; Mazzetti, P.; Santoro, M.; Nativi, S.; Domenico, B.

    2012-04-01

    Data interoperability in the study of Earth sciences is essential to performing interdisciplinary, multi-scale, multi-dimensional analyses (e.g. hydrologic impacts of global warming, regional urbanization, global population growth). This research aims to bridge the existing gap between the hydrologic and atmospheric communities at both the semantic and technological levels. Within the context of hydrology, scientists are usually concerned with data organized as time series: a time series can be seen as a variable measured at a particular point in space over a period of time (e.g. the stream-flow values periodically measured by a buoy sensor in a river). Atmospheric scientists instead usually organize their data as coverages: a coverage can be seen as a multidimensional data array (e.g. satellite images acquired through time). These differences make setting up a common framework for data discovery and access non-trivial. A set of web service specifications and implementations is already in place in both scientific communities to allow data discovery and access in the respective domains. The CUAHSI Hydrologic Information System (HIS) service stack comprises several service types and implementations: a metacatalog (implemented as a CSW) used to discover metadata services by distributing the query to a set of catalogs; time-series catalogs (implemented as CSW) used to discover datasets published by the feature services; feature services (implemented as WFS) containing features with data-access links; and sensor observation services (implemented as SOS) enabling access to the stream of acquisitions. Within the Unidata framework there lies a similar service stack for atmospheric data: a broker service (implemented as a CSW) that distributes a user query to a set of heterogeneous services (i.e. catalog services, but also inventory and access services); a catalog service (implemented as a CSW) able to harvest the metadata offered by THREDDS services and to execute complex queries against it; and an inventory service (implemented as THREDDS) able to hierarchically organize and publish a local collection of multi-dimensional arrays (e.g. NetCDF, GRIB files), as well as to publish auxiliary standard services for actual data access and visualization (e.g. WCS, OPeNDAP, WMS). The approach followed in this research is to build on top of the existing standards and implementations by setting up a standards-aware interoperable framework able to deal with the existing heterogeneity in an organic way. As a methodology, interoperability tests against real services were performed; existing problems were thus highlighted and, where possible, solved. The use of flexible tools able to deal intelligently with heterogeneity proved successful; in particular, experiments were carried out with both the GI-cat broker and the ESRI GeoPortal frameworks. The GI-cat discovery broker proved successful at implementing the CSW interface as well as at federating heterogeneous resources, such as the THREDDS and WCS services published by Unidata and the HydroServer, WFS and SOS services published by CUAHSI. Experiments with the ESRI GeoPortal were also successful: the GeoPortal was used to deploy a web interface able to distribute searches amongst catalog implementations from both the hydrologic and atmospheric communities, including HydroServers and GI-cat, combining results from both domains in a seamless way.

  5. The Unlock Project: a Python-based framework for practical brain-computer interface communication "app" development.

    PubMed

    Brumberg, Jonathan S; Lorenz, Sean D; Galbraith, Byron V; Guenther, Frank H

    2012-01-01

    In this paper we present a framework for reducing the development time needed for creating applications for use in non-invasive brain-computer interfaces (BCI). Our framework is primarily focused on facilitating rapid software "app" development akin to current efforts in consumer portable computing (e.g. smart phones and tablets). This is accomplished by handling intermodule communication without direct user or developer implementation, instead relying on a core subsystem for communication of standard, internal data formats. We also provide a library of hardware interfaces for common mobile EEG platforms for immediate use in BCI applications. A use-case example is described in which a user with amyotrophic lateral sclerosis participated in an electroencephalography-based BCI protocol developed using the proposed framework. We show that our software environment is capable of running in real-time with updates occurring 50-60 times per second with limited computational overhead (5 ms system lag) while providing accurate data acquisition and signal analysis.
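The core idea of routing standard internal data formats through a communication subsystem, rather than wiring modules to each other directly, is a publish/subscribe pattern. The sketch below is a hypothetical illustration of that pattern, not the Unlock Project's actual API; the topic name and sample format are invented:

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe core: producer and consumer modules
    exchange samples in a shared internal format and never import or
    call one another directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, sample):
        # Deliver the sample to every module listening on this topic.
        for handler in self._subscribers[topic]:
            handler(sample)

bus = MessageBus()
received = []
bus.subscribe("eeg/raw", received.append)  # e.g. a decoder or display module
bus.publish("eeg/raw", {"channel": 1, "uV": 12.5, "t": 0.02})
print(received)
```

With this shape, adding a new hardware interface or BCI "app" means registering one more publisher or subscriber; no existing module needs to change, which is what makes rapid app development possible.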

  6. EnergyPlus Graphical User Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-01-04

    LBNL, Infosys Technologies and Digital Alchemy are developing a free, comprehensive graphical user interface (GUI) that will enable EnergyPlus to be used more easily and effectively by building designers and other professionals, facilitating its widespread adoption. User requirements have been defined through a series of practitioner workshops. A new schematic editor for HVAC systems will be combined with different building-envelope geometry-generation tools and IFC-based BIM import and export. LBNL and Digital Alchemy have generated a detailed functional requirements specification, which is being implemented in software by Infosys, LBNL and Digital Alchemy. LBNL and practitioner subcontractors will develop a comprehensive set of templates and libraries and will perform extensive testing of the GUI before it is released in Q3 2011. It is planned to use an Open Platform approach, in which a comprehensive set of well-documented Application Programming Interfaces (APIs) would be provided to facilitate both the development of third-party contributions to the official, standard GUI and the development of derivative works.

  7. A low noise interface circuit design of micro-machined gyroscope

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Di, Xipeng; Yin, Liang; Liu, Xiaowei

    2017-07-01

    The analyses of a MEMS gyroscope interface circuit with respect to thermal noise, 1/f noise and phase noise are presented in this paper. A closed-loop differential driving circuit and a low-noise differential detecting circuit based on high-frequency modulation are designed to limit the noise. The interface chip is implemented in a standard 0.5 μm CMOS process. The test results show that the capacitance resolution reaches 6.47 × 10⁻²⁰ F at a bandwidth of 60 Hz. The measuring range is ±200°/s and the nonlinearity is 310 ppm. The output noise density is 5.8°/(h·√Hz). The angular random walk (from the Allan variance) is 0.092°/√h and the bias instability is 2.63°/h. Project supported by the National Natural Science Foundation of China (No. 61204121), the National Hi-Tech Research and Development Program of China (No. 2013AA041107), and the Fundamental Research Funds for the Central Universities (No. HIT.NSRIF.2013040).
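The angular-random-walk and bias-instability figures quoted above are read off an Allan-variance plot of the gyro output. A minimal non-overlapping Allan deviation can be sketched as follows, shown here on synthetic white noise, for which the deviation falls roughly as 1/√m (the −1/2 slope whose intercept gives the angular random walk):

```python
import math
import random

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation at cluster size m:
    sigma = sqrt(0.5 * mean((ybar_{k+1} - ybar_k)^2)) over cluster means."""
    n = len(samples) // m
    means = [sum(samples[k * m:(k + 1) * m]) / m for k in range(n)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(10000)]  # synthetic rate noise
ad_1 = allan_deviation(white, 1)
ad_100 = allan_deviation(white, 100)
print(ad_1, ad_100)  # white noise: deviation shrinks roughly as 1/sqrt(m)
```

In practice overlapping estimators are preferred for better confidence at large cluster sizes; this sketch only illustrates the computation behind the quoted figures.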

  8. Pairwise Force SPH Model for Real-Time Multi-Interaction Applications.

    PubMed

    Yang, Tao; Martin, Ralph R; Lin, Ming C; Chang, Jian; Hu, Shi-Min

    2017-10-01

    In this paper, we present a novel pairwise-force smoothed particle hydrodynamics (PF-SPH) model to enable simulation of various interactions at interfaces in real time. Realistic capture of interactions at interfaces is a challenging problem for SPH-based simulations, especially for scenarios involving multiple interactions at different interfaces. Our PF-SPH model can readily handle multiple types of interactions simultaneously in a single simulation; its basis is to use a larger support radius than that used in standard SPH. We adopt a novel anisotropic filtering term to further improve the performance of interaction forces. The proposed model is stable; furthermore, it avoids the particle clustering problem which commonly occurs at the free surface. We show how our model can be used to capture various interactions. We also consider the close connection between droplets and bubbles, and show how to animate bubbles rising in liquid as well as bubbles in air. Our method is versatile, physically plausible and easy-to-implement. Examples are provided to demonstrate the capabilities and effectiveness of our approach.
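The pairwise-force idea, a direct inter-particle force with short-range repulsion and longer-range attraction acting over an enlarged support radius, is often written as a cosine-shaped kernel in pairwise-force SPH models. The function below is a generic sketch of such a kernel, not the paper's exact PF-SPH force or its anisotropic filtering term:

```python
import math

def pairwise_force(r, s, h):
    """Cosine-shaped pairwise force magnitude between two particles a
    distance r apart: repulsive (positive) at short range, attractive
    (negative) at longer range, and zero at or beyond the support
    radius h. s scales the interaction strength."""
    if r <= 0.0 or r >= h:
        return 0.0
    return s * math.cos(1.5 * math.pi * r / h)

h = 1.0
print(pairwise_force(0.1, 1.0, h))   # short range: repulsion (> 0)
print(pairwise_force(0.6, 1.0, h))   # longer range: attraction (< 0)
print(pairwise_force(1.2, 1.0, h))   # outside the support radius: 0.0
```

Tuning s per particle-type pair is what lets a single simulation exhibit different interface behaviors (e.g. liquid-liquid versus liquid-solid) simultaneously.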

  9. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing the clogging of storage systems with 'orphan' files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.

  10. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    Abstract An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  11. Accessing near real-time Antarctic meteorological data through an OGC Sensor Observation Service (SOS)

    NASA Astrophysics Data System (ADS)

    Kirsch, Peter; Breen, Paul

    2013-04-01

    We wish to highlight the outputs of a project conceived from a science requirement to improve discovery of, and access to, Antarctic meteorological data in near real-time. Given that the data are distributed in both the spatial and temporal domains and are accessed across several science disciplines, the creation of an interoperable, OGC-compliant web service was deemed the most appropriate approach. We will demonstrate an implementation of the OGC SOS Interface Standard to discover, browse, and access Antarctic meteorological datasets. A selection of programmatic (R, Perl) and web client interfaces utilizing open technologies (e.g. jQuery, Flot, OpenLayers) will be demonstrated. In addition, we will show how high-level abstractions can be constructed to allow users flexible and straightforward access to SOS-retrieved data.
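An SOS GetObservation request in the key-value-pair binding is, like other OGC service calls, an ordinary HTTP query string. The endpoint, offering and observed-property identifiers below are hypothetical placeholders, and the temporal-filter syntax assumes the SOS 2.0 KVP binding:

```python
from urllib.parse import urlencode

def sos_getobservation_url(base_url, offering, observed_property, begin, end):
    """Build an SOS 2.0 GetObservation request URL (KVP binding)."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        # Temporal filter on phenomenon time as an ISO 8601 interval.
        "temporalFilter": "om:phenomenonTime,%s/%s" % (begin, end),
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and identifiers, for illustration only.
url = sos_getobservation_url(
    "https://example.org/sos/kvp",
    "met_station_offering",
    "air_temperature",
    "2013-01-01T00:00:00Z",
    "2013-01-02T00:00:00Z",
)
print(url)
```

A programmatic client (e.g. in R or Perl, as mentioned above) would issue this request and parse the returned O&M observations into a time series.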

  12. Pleiades and OCO-2: Using Supercomputing Resources to Process OCO-2 Science Data

    NASA Technical Reports Server (NTRS)

    LaHaye, Nick

    2012-01-01

    For a period of ten weeks I had the opportunity to assist with research for the OCO-2 project on the Science Data Operations System team. This research involved writing a prototype interface that would serve as a model for the system implemented for the project's operations, provided that the system, when tested, worked properly and met the team's standards. This paper gives the details of the research done and its results.

  13. Protocol Standards and Implementation within the Digital Engineering Laboratory Computer Network (DELNET) Using the Universal Network Interface Device (UNID). Part 1.

    DTIC Science & Technology

    1983-12-01

    Initializes the data tables shared by both the Local and Network Operating Systems. 3. Invint: Written in Assembly Language. Initializes the Input/Output ... connection with an appropriate type and grade of transport service and appropriate security authentication (Ref 6:38). Data transfer within a session ... V.; Kent, S. Security in Higher Level Protocols: Approaches, Alternatives and Recommendations, Draft Report ICST/HLNP-81-19, Washington, D.C.: Dept

  14. Integrating interface slicing into software engineering processes

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    Interface slicing is a tool which was developed to facilitate software engineering. As previously presented, it was described in terms of its techniques and mechanisms. The integration of interface slicing into specific software engineering activities is considered by discussing a number of potential applications of interface slicing. The applications discussed specifically address the problems, issues, or concerns raised in a previous project. Because a complete interface slicer is still under development, these applications must be phrased in the future tense. Nonetheless, the interface slicing techniques which were presented can be implemented using current compiler and static analysis technology. Whether implemented as a standalone tool or as a module in an integrated development or reverse engineering environment, they require analysis no more complex than that required for current system development environments. By contrast, conventional slicing is a methodology which, while showing much promise and intuitive appeal, has yet to be fully implemented in a production language environment despite 12 years of development.

  15. AIAA spacecraft GN&C interface standards initiative: Overview

    NASA Technical Reports Server (NTRS)

    Challoner, A. Dorian

    1995-01-01

    The American Institute of Aeronautics and Astronautics (AIAA) has undertaken an important standards initiative in the area of spacecraft guidance, navigation, and control (GN&C) subsystem interfaces. The objective of this effort is to establish standards that will promote interchangeability of major GN&C components, thus enabling substantially lower spacecraft development costs. Although initiated by developers of conventional spacecraft GN&C, it is anticipated that interface standards will also be of value in reducing the development costs of micro-engineered spacecraft. The standardization targets are specifically limited to interfaces only, including information (i.e. data and signal), power, mechanical, thermal, and environmental interfaces between various GN&C components and between GN&C subsystems and other subsystems. The current emphasis is on information interfaces between various hardware elements (e.g., between star trackers and flight computers). The poster presentation will briefly describe the program, including the mechanics and schedule, and will publicize the technical products as they exist at the time of the conference. In particular, the rationale for the adoption of the AS1773 fiber-optic serial data bus and the status of data interface standards at the application layer will be presented.

  16. The PEPR GeneChip data warehouse, and implementation of a dynamic time series query tool (SGQT) with graphical interface.

    PubMed

    Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B; Almon, Richard R; DuBois, Debra C; Jusko, William J; Hoffman, Eric P

    2004-01-01

    Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory, and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and its associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp).

  17. The PEPR GeneChip data warehouse, and implementation of a dynamic time series query tool (SGQT) with graphical interface

    PubMed Central

    Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B.; Almon, Richard R.; DuBois, Debra C.; Jusko, William J.; Hoffman, Eric P.

    2004-01-01

    Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory, and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and its associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp). PMID:14681485

  18. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost, but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight application do not support the industry standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  19. Standards for the user interface - Developing a user consensus. [for Space Station Information System]

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Perkins, Dorothy C.; Szczur, Martha R.

    1987-01-01

    The user support environment (USE) which is a set of software tools for a flexible standard interactive user interface to the Space Station systems, platforms, and payloads is described in detail. Included in the USE concept are a user interface language, a run time environment and user interface management system, support tools, and standards for human interaction methods. The goals and challenges of the USE are discussed as well as a methodology based on prototype demonstrations for involving users in the process of validating the USE concepts. By prototyping the key concepts and salient features of the proposed user interface standards, the user's ability to respond is greatly enhanced.

  20. Standard Spacecraft Interfaces and IP Network Architectures: Prototyping Activities at the GSFC

    NASA Technical Reports Server (NTRS)

    Schnurr, Richard; Marquart, Jane; Lin, Michael

    2003-01-01

    Advancements in flight semiconductor technology have opened the door for IP-based networking in spacecraft architectures. The GSFC believes the same significant cost savings gained using MIL-STD-1553/1773 as a standard low rate interface for spacecraft buses can be realized for high-speed network interfaces. To that end, GSFC is developing hardware and software to support a seamless, space mission IP network based on Ethernet and MIL-STD-1553. The Ethernet network shall connect all flight computers and communications systems using interface standards defined by the CCSDS Spacecraft Onboard Interface (SOIF) Panel. This paper shall discuss the prototyping effort underway at GSFC and expected results.

  1. User-Centered Design Practices to Redesign a Nursing e-Chart in Line with the Nursing Process.

    PubMed

    Schachner, María B; Recondo, Francisco J; González, Zulma A; Sommer, Janine A; Stanziola, Enrique; Gassino, Fernando D; Simón, Mariana; López, Gastón E; Benítez, Sonia E

    2016-01-01

    Using the user-centered design (UCD) practices carried out at Hospital Italiano of Buenos Aires, the nursing e-chart user interface was redesigned in order to improve the quality of nursing process records, based on an adapted Virginia Henderson theoretical model and on patient safety standards to fulfil Joint Commission accreditation requirements. UCD practices were applied as standardized and recommended for electronic medical records usability evaluation. Implementation of these practices yielded a series of prototypes in 5 iterative cycles of incremental improvements to achieve goals of usability which were used and perceived as satisfactory by general care nurses. Nurses' involvement allowed balance between their needs and institution requirements.

  2. U.S. experience in satellite servicing and linkage to the Space Station era

    NASA Technical Reports Server (NTRS)

    Browning, R. K.

    1986-01-01

    A history of on-orbit servicing and repair is given with emphasis placed on the Solar Maximum Repair Mission. The experience gained thus far in on-orbit servicing and the design of the Space Station's servicing capabilities impose the following requirements on users: (1) satellites must have a standard grapple for capture and a standard berthing interface, (2) Space Station safety requirements must be met to preclude damage to the Space Station or injury to the EVA crew, (3) sensitive instruments will need to implement remotely controlled protective devices to prevent damage, and (4) satellite thermal systems must be designed to maintain survival temperatures during transfer from orbit to the Space Station servicing facility.

  3. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems which use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication.
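In the spirit of the per-card HDL generation described above, a minimal sketch is a template expander that emits a wrapper module from card parameters. The template, module and signal names are invented for illustration, not the paper's actual generator:

```python
# Toy Verilog template; "{name}" and "{width}" are filled per measurement card.
VERILOG_TEMPLATE = """module {name}_wrapper (
    input  wire clk,
    input  wire [{width}:0] adc_data,
    output wire [{width}:0] bus_data
);
    assign bus_data = adc_data;  // pass-through placeholder logic
endmodule
"""

def generate_wrapper(card_name: str, data_width: int) -> str:
    """Emit a Verilog wrapper for one FMC measurement card from its parameters."""
    return VERILOG_TEMPLATE.format(name=card_name, width=data_width - 1)

# Hypothetical 14-bit ADC card.
hdl = generate_wrapper("fmc_adc100m", 14)
```

A real generator would also emit the register map and interface glue from the same card description, so that firmware for a new backplane configuration needs no hand-written HDL.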

  4. A standard library for modeling satellite orbits on a microcomputer

    NASA Astrophysics Data System (ADS)

    Beutel, Kenneth L.

    1988-03-01

    Introductory students of astrodynamics and the space environment are required to have a fundamental understanding of the kinematic behavior of satellite orbits. This thesis develops a standard library that contains the basic formulas for modeling earth orbiting satellites. This library is used as a basis for implementing a satellite motion simulator that can be used to demonstrate orbital phenomena in the classroom. Surveyed are the equations of orbital elements, coordinate systems and analytic formulas, which are made into a standard method for modeling earth orbiting satellites. The standard library is written in the C programming language and is designed to be highly portable between a variety of computer environments. The simulation draws heavily on the standards established by the library to produce a graphics-based orbit simulation program written for the Apple Macintosh computer. The simulation demonstrates the utility of the standard library functions but, because of its extensive use of the Macintosh user interface, is not portable to other operating systems.
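As a flavor of what such a standard library of orbital formulas contains, here is a hedged sketch (in Python rather than the thesis's C) of one canonical building block: solving Kepler's equation for the eccentric anomaly by Newton-Raphson iteration.

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton-Raphson iteration (elliptical orbits, 0 <= e < 1)."""
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def true_anomaly(E, e):
    """Convert eccentric anomaly to true anomaly."""
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

# Example: a quarter of the way (by mean anomaly) around an e = 0.1 orbit.
E = solve_kepler(math.pi / 2, 0.1)
nu = true_anomaly(E, 0.1)
```

Writing such routines against plain scalar inputs, as the thesis's C library does, is what keeps them portable across environments; only the graphics layer is platform-bound.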

  5. National Airspace System interface management plan

    DOT National Transportation Integrated Search

    1986-01-01

    This document is intended to implement Interface Management for interfacing subsystems of the National Airspace System (NAS) and for external NAS interfaces by establishing a process which assures that: Interface requirements are agreed to by interfa...

  6. Motion Imagery and Robotics Application (MIRA)

    NASA Technical Reports Server (NTRS)

    Martinez, Lindolfo; Rich, Thomas

    2011-01-01

    Objectives include: I. Prototype a camera service leveraging the CCSDS Integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (New). b) Spacecraft Monitor and Control (SM&C). c) Asynchronous Messaging Service (AMS). d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA Objectives: a) Demo of Camera Control through ISS using CCSDS protocol stack (Berlin, May 2011). b) Verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments. c) Test interoperability of various CCSDS protocol standards. d) Identify overlaps in the design and implementations of the CCSDS protocol standards. e) Identify software incompatibilities in the CCSDS stack interfaces. f) Provide redlines to the SM&C, AMS, and DTN working groups. g) Enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding. h) Assist in long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.

  7. E-SMART system for in-situ detection of environmental contaminants. Quarterly technical progress report, April--June 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-08-01

    General Atomics (GA) leads a team of industrial, academic, and government organizations in the development of the Environmental Systems Management, Analysis and Reporting neTwork (E-SMART) for the Defense Advanced Research Project Agency (DARPA), by way of this Technology Reinvestment Project (TRP). E-SMART defines a standard by which networks of smart sensing, sampling, and control devices can interoperate. E-SMART is intended to be an open standard, available to any equipment manufacturer. The user will be provided a standard platform on which a site-specific monitoring plan can be implemented using sensors and actuators from various manufacturers and upgraded as new monitoring devices become commercially available. This project will further develop and advance the E-SMART standardized network protocol to include new sensors, sampling systems, and graphical user interfaces.

  8. Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems

    PubMed Central

    Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.

    2011-01-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
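The steady-state approximation described above can be illustrated with a minimal sketch: iterate the discrete Riccati recursion offline until the gain converges, then decode in real time with that fixed gain. The 2-state model and noise covariances below are invented placeholders, not the decoder's actual parameters.

```python
import numpy as np

# Hypothetical 2-state (position, velocity) model -- illustrative only.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # observation matrix (position observed)
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # observation noise covariance

def steady_state_gain(A, H, Q, R, tol=1e-10, max_iter=10_000):
    """Iterate the discrete Riccati recursion offline until the Kalman gain converges."""
    P = np.eye(A.shape[0])
    K_prev = np.zeros((A.shape[0], H.shape[0]))
    for _ in range(max_iter):
        P_pred = A @ P @ A.T + Q                       # predicted covariance
        S = H @ P_pred @ H.T + R                       # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
        P = (np.eye(A.shape[0]) - K @ H) @ P_pred      # updated covariance
        if np.max(np.abs(K - K_prev)) < tol:
            return K
        K_prev = K
    return K

K_ss = steady_state_gain(A, H, Q, R)

# Real-time decoding then reduces to a fixed-gain update: no covariance
# propagation or matrix inversion per step, which is the source of the speedup.
def decode_step(x, z):
    x_pred = A @ x
    return x_pred + K_ss @ (z - H @ x_pred)
```

Because `K_ss` is precomputed, each decoding step is a handful of small matrix-vector products, consistent with the large reduction in per-step execution time reported above.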

  9. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  10. Wireless multipoint communication for optical sensors in the industrial environment using the new Bluetooth standard

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Lau, Wing Y.; Chu, Terry; Grothof, Markus

    2003-07-01

    Traditionally, the measuring or monitoring systems of manufacturing industries use sensors, computers and screens for quality control (Q.C.). The acquired information is fed back to the control room by wires, which - for obvious reasons - are not suitable in many environments. This paper describes a method to solve this problem by employing the new Bluetooth technology to set up a complete new system, where a total wireless solution is made feasible. This new Q.C. system allows several line scan cameras to be connected at once to a graphical user interface (GUI) that can monitor the production process. There are many Bluetooth devices available on the market such as cell-phones, headsets, printers, PDAs etc. However, the detailed application is a novel implementation in the industrial Q.C. area. This paper contains more details about the Bluetooth standard and why it is used (network topologies, host controller interface, data rates, etc.), the Bluetooth implementation in the microcontroller of the line scan camera, and the GUI and its features.

  11. An informatics model for tissue banks--lessons learned from the Cooperative Prostate Cancer Tissue Resource.

    PubMed

    Patel, Ashokkumar A; Gilbertson, John R; Parwani, Anil V; Dhir, Rajiv; Datta, Milton W; Gupta, Rajnish; Berman, Jules J; Melamed, Jonathan; Kajdacsy-Balla, Andre; Orenstein, Jan; Becich, Michael J

    2006-05-05

    Advances in molecular biology and growing requirements from biomarker validation studies have generated a need for tissue banks to provide quality-controlled tissue samples with standardized clinical annotation. The NCI Cooperative Prostate Cancer Tissue Resource (CPCTR) is a distributed tissue bank that comprises four academic centers and provides thousands of clinically annotated prostate cancer specimens to researchers. Here we describe the CPCTR information management system architecture, common data element (CDE) development, query interfaces, data curation, and quality control. Data managers review the medical records to collect and continuously update information for the 145 clinical, pathological and inventorial CDEs that the Resource maintains for each case. An Access-based data entry tool provides de-identification and a standard communication mechanism between each group and a central CPCTR database. Standardized automated quality control audits have been implemented. Centrally, an Oracle database has web interfaces allowing multiple user-types, including the general public, to mine de-identified information from all of the sites with three levels of specificity and granularity as well as to request tissues through a formal letter of intent. Since July 2003, CPCTR has offered over 6,000 cases (38,000 blocks) of highly characterized prostate cancer biospecimens, including several tissue microarrays (TMA). The Resource developed a website with interfaces for the general public as well as researchers and internal members. These user groups have utilized the web-tools for public query of summary data on the cases that were available, to prepare requests, and to receive tissues. As of December 2005, the Resource received over 130 tissue requests, of which 45 have been reviewed, approved and filled. Additionally, the Resource implemented the TMA Data Exchange Specification in its TMA program and created a computer program for calculating PSA recurrence. 
Building a biorepository infrastructure that meets today's research needs involves time and input of many individuals from diverse disciplines. The CPCTR can provide large volumes of carefully annotated prostate tissue for research initiatives such as Specialized Programs of Research Excellence (SPOREs) and for biomarker validation studies and its experience can help development of collaborative, large scale, virtual tissue banks in other organ systems.

  12. Standardized Modular Power Interfaces for Future Space Explorations Missions

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard

    2015-01-01

    Earlier studies show that future human exploration missions are composed of multi-vehicle assemblies with interconnected electric power systems. Some vehicles are often intended to serve as flexible multi-purpose or multi-mission platforms. This drives the need for power architectures that can be reconfigured to support this level of flexibility. Power system developmental costs can be reduced, program-wide, by utilizing a common set of modular building blocks. Further, there are mission operational and logistics cost benefits of using a common set of modular spares. These benefits are the goals of the Advanced Exploration Systems (AES) Modular Power System (AMPS) project. A common set of modular blocks requires a substantial level of standardization in terms of the Electrical, Data System, and Mechanical interfaces. The AMPS project is developing a set of proposed interface standards that will provide useful guidance for modular hardware developers but not needlessly constrain technology options, or limit future growth in capability. In 2015 the AMPS project focused on standardizing the interfaces between the elements of spacecraft power distribution and energy storage. The development of the modular power standard starts with establishing mission assumptions and ground rules to define the design application space. The standards are defined in terms of AMPS objectives including Commonality, Reliability-Availability, Flexibility-Configurability and Supportability-Reusability. The proposed standards are aimed at assembly and sub-assembly level building blocks. AMPS plans to adopt existing standards for spacecraft command and data, software, network interfaces, and electrical power interfaces where applicable. Other standards including structural encapsulation, heat transfer, and fluid transfer, are governed by launch and spacecraft environments and bound by practical limitations of weight and volume. 
Developing these mechanical interface standards is more difficult but an essential part of defining the physical building blocks of modular power. This presentation describes the AMPS project's progress toward standardized modular power interfaces.

  13. The reliable multicast protocol application programming interface

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Whetten, Brian

    1995-01-01

    The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.

  14. Spacelab payload accommodation handbook. Appendix B: Structure interface definition module

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The mechanical interfaces between Spacelab and its payload are defined. The envelopes available for mounting payload hardware are specified together with the standard structural attachment interfaces. Overall load capabilities and the local load capabilities for individual attachment interfaces are defined for the standard mounting locations. The mechanical environment is defined and the mechanical interfaces between the payload and the EPDS, CDMS and ECS are included.

  15. TAE Plus: Transportable Applications Environment Plus tools for building graphic-oriented applications

    NASA Technical Reports Server (NTRS)

    Szczur, Martha R.

    1989-01-01

    The Transportable Applications Environment Plus (TAE Plus), developed by NASA's Goddard Space Flight Center, is a portable User Interface Management System (UIMS), which provides an intuitive WYSIWYG WorkBench for prototyping and designing an application's user interface, integrated with tools for efficiently implementing the designed user interface and effective management of the user interface during an application's active domain. During the development of TAE Plus, many design and implementation decisions were based on the state-of-the-art within graphics workstations, windowing system and object-oriented programming languages. Some of the problems and issues experienced during implementation are discussed. A description of the next development steps planned for TAE Plus is also given.

  16. Applying Sensor Web Technology to Marine Sensor Data

    NASA Astrophysics Data System (ADS)

    Jirka, Simon; del Rio, Joaquin; Mihai Toma, Daniel; Nüst, Daniel; Stasch, Christoph; Delory, Eric

    2015-04-01

    In this contribution we present two activities illustrating how Sensor Web technology helps to enable flexible and interoperable sharing of marine observation data based on standards. An important foundation is the Sensor Web Architecture developed by the European FP7 project NeXOS (Next generation Low-Cost Multifunctional Web Enabled Ocean Sensor Systems Empowering Marine, Maritime and Fisheries Management). This architecture relies on the Open Geospatial Consortium's (OGC) Sensor Web Enablement (SWE) framework. It is an exemplary solution for facilitating the interoperable exchange of marine observation data within and between (research) organisations. The architecture addresses a series of functional and non-functional requirements which are fulfilled through different types of OGC SWE components. The diverse functionalities offered by the NeXOS Sensor Web architecture are shown in the following overview:
    - Pull-based observation data download: achieved through the OGC Sensor Observation Service (SOS) 2.0 interface standard.
    - Push-based delivery of observation data, allowing users to subscribe to new measurements relevant to them: several specification activities are currently under evaluation for this purpose (e.g. OGC Sensor Event Service, OGC Publish/Subscribe Standards Working Group).
    - (Web-based) visualisation of marine observation data: implemented through SOS client applications.
    - Configuration and controlling of sensor devices: ensured through the OGC Sensor Planning Service 2.0 interface.
    - Bridging between sensors/data loggers and Sensor Web components: several components such as the "Smart Electronic Interface for Sensor Interoperability" (SEISI) concept are developed for this purpose; this is complemented by a more lightweight SOS extension (e.g. based on the W3C Efficient XML Interchange (EXI) format).
To further advance this architecture, there is on-going work to develop dedicated profiles of selected OGC SWE specifications that provide stricter guidance on how these standards shall be applied to marine data (e.g. SensorML 2.0 profiles stating which metadata elements are mandatory, building upon the ESONET Sensor Registry developments, etc.). Within the NeXOS project the presented architecture is implemented as a set of open source components. These implementations can be re-used by all interested scientists and data providers needing tools for publishing or consuming oceanographic sensor data. In further projects such as the European project FixO3 (Fixed-point Open Ocean Observatories), these software development activities are complemented with additional efforts to provide guidance on how Sensor Web technology can be applied in an efficient manner. This way, not only software components are made available but also documentation and information resources that help to understand which types of Sensor Web deployments are best suited to fulfil different types of user requirements.
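A pull-based download from an SOS 2.0 endpoint can be sketched as a KVP-encoded GetObservation request. The parameter names follow the SOS 2.0 KVP binding, but the endpoint URL, offering and observed-property identifiers below are placeholders, not a real NeXOS service:

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute a real SOS 2.0 service URL.
endpoint = "https://example.org/sos/service"

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "ctd-platform-1",                       # hypothetical offering id
    "observedProperty": "sea_water_temperature",        # hypothetical property id
    # Restrict to one day of phenomenon time (ISO 8601 interval).
    "temporalFilter": "om:phenomenonTime,2015-01-01T00:00:00Z/2015-01-02T00:00:00Z",
    "responseFormat": "http://www.opengis.net/om/2.0",
}
url = endpoint + "?" + urlencode(params)
```

Fetching `url` with any HTTP client would return O&M-encoded observations; a push-based subscription would instead go through the publish/subscribe mechanisms mentioned above.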

  17. Describing different brain computer interface systems through a unique model: a UML implementation.

    PubMed

    Quitadamo, Lucia Rita; Marciani, Maria Grazia; Cardarilli, Gian Carlo; Bianchi, Luigi

    2008-01-01

    All the protocols currently implemented in brain computer interface (BCI) experiments are characterized by different structural and temporal entities. Moreover, due to the lack of a unique descriptive model for BCI systems, there is no standard way to define the structure and the timing of a BCI experimental session among different research groups, and there is also great discordance on the meaning of the most common terms dealing with BCI, such as trial, run and session. The aim of this paper is to provide a unified modeling language (UML) implementation of BCI systems through a unique dynamic model which is able to describe the main protocols defined in the literature (P300, mu-rhythms, SCP, SSVEP, fMRI) and demonstrates to be reasonable and adjustable according to different requirements. This model includes a set of definitions of the typical entities encountered in a BCI, diagrams which explain the structural correlations among them and a detailed description of the timing of a trial. The latter represents an innovation with respect to the models already proposed in the literature. The UML documentation and the possibility of adapting this model to the different BCI systems built to date make it a basis for the implementation of new systems and a means for the unification and dissemination of resources. The model with all the diagrams and definitions reported in the paper are the core of the body language framework, a free set of routines and tools for the implementation, optimization and delivery of cross-platform BCI systems.
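The session/run/trial hierarchy that such a model pins down might be sketched as follows; the class and field names here are illustrative stand-ins, not the paper's actual UML entities:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical entity names -- the paper's UML classes may differ.
@dataclass
class Trial:
    label: str          # e.g. the target stimulus of a P300 trial
    duration_s: float   # trial timing is part of the model, not an afterthought

@dataclass
class Run:
    trials: List[Trial] = field(default_factory=list)

@dataclass
class Session:
    protocol: str       # "P300", "SSVEP", "SCP", "mu-rhythms", "fMRI", ...
    runs: List[Run] = field(default_factory=list)

    def total_trials(self) -> int:
        return sum(len(run.trials) for run in self.runs)

# One session of one run with two trials.
session = Session(protocol="P300")
session.runs.append(Run(trials=[Trial("A", 1.0), Trial("B", 1.0)]))
```

Agreeing on such a containment hierarchy (session contains runs, runs contain timed trials) is precisely what lets different groups compare experiments and share tooling.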

  18. Application Program Interface for the Orion Aerodynamics Database

    NASA Technical Reports Server (NTRS)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have only included data tables and a document of the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities for built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to and verification of the table lookup routines. This may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. 
The input data files are in standard formatted ASCII, also for improved portability. The API contains its own implementation of multidimensional table reading and lookup routines. The same aerodynamics input file can be used without modification on all implementations. The turnaround time from aerodynamics model release to a working implementation is significantly reduced.
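The table-lookup core that such an API has to carry with it can be sketched as bilinear interpolation on a regular 2-D grid (in Python for brevity; the real API is ANSI C, and the breakpoints and coefficient values below are invented, not Orion data):

```python
from bisect import bisect_right

def interp2(xs, ys, table, x, y):
    """Bilinear interpolation: xs, ys are ascending breakpoint lists and
    table[i][j] = f(xs[i], ys[j]). Queries outside the grid are clamped
    to the edge cells."""
    def bracket(bps, v):
        i = min(max(bisect_right(bps, v) - 1, 0), len(bps) - 2)
        t = (v - bps[i]) / (bps[i + 1] - bps[i])
        return i, t
    i, tx = bracket(xs, x)
    j, ty = bracket(ys, y)
    f00, f01 = table[i][j], table[i][j + 1]
    f10, f11 = table[i + 1][j], table[i + 1][j + 1]
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# Invented example grid: drag coefficient vs. Mach and angle of attack (deg).
mach = [0.5, 1.0, 2.0]
aoa = [0.0, 5.0, 10.0]
cd = [[0.10, 0.12, 0.16],
      [0.30, 0.33, 0.40],
      [0.25, 0.28, 0.34]]
value = interp2(mach, aoa, cd, 0.75, 2.5)
```

Bundling the lookup routine with the data, as the API does, is what removes the dependence on each simulation tool's built-in table capabilities.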

  19. Active Low Intrusion Hybrid Monitor for Wireless Sensor Networks

    PubMed Central

    Navia, Marlon; Campelo, Jose C.; Bonastre, Alberto; Ors, Rafael; Capella, Juan V.; Serrano, Juan J.

    2015-01-01

    Several systems have been proposed to monitor wireless sensor networks (WSN). These systems may be active (causing a high degree of intrusion) or passive (low observability inside the nodes). This paper presents the implementation of an active hybrid (hardware and software) monitor with low intrusion. It is based on the addition to the sensor node of a monitor node (hardware part) which, through a standard interface, is able to receive the monitoring information sent by a piece of software executed in the sensor node. The intrusion on time, code, and energy caused in the sensor nodes by the monitor is evaluated as a function of data size and the interface used. Then different interfaces, commonly available in sensor nodes, are evaluated: serial transmission (USART), serial peripheral interface (SPI), and parallel. The proposed hybrid monitor provides highly detailed information, barely disturbed by the measurement tool (interference), about the behavior of the WSN that may be used to evaluate many properties such as performance, dependability, security, etc. Monitor nodes are self-powered and may be removed after the monitoring campaign to be reused in other campaigns and/or WSNs. No other hardware-independent monitoring platforms with such low interference have been found in the literature. PMID:26393604

  20. Orbiter middeck/payload standard interfaces control document

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The interfaces which shall be provided by the baseline shuttle mid-deck for payload use within the mid-deck area are defined, as well as all constraints which shall be observed by all the users of the defined interfaces. Commonality was established with respect to analytical approaches, analytical models, technical data and definitions for integrated analyses by all the interfacing parties. Any payload interfaces that are out of scope with the standard interfaces defined shall be defined in a Payload Unique Interface Control Document (ICD) for a given payload. Each Payload Unique ICD will have comparable paragraphs to this ICD and will have a corresponding notation of A, for applicable; N/A, for not applicable; N, for note added for explanation; and E, for exception. On any flight, the STS reserves the right to assign locations to both payloads mounted on an adapter plate(s) and payloads stored within standard lockers. Specific location requests and/or requirements exceeding standard mid-deck payload requirements may result in a reduction in manifesting opportunities.

  1. Fast data transmission in dynamic data acquisition system for plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Byszuk, Adrian; Poźniak, Krzysztof; Zabołotny, Wojciech M.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Cieszewski, Radosław; Juszczyk, Bartłomiej; Kolasiński, Piotr; Zienkiewicz, Paweł; Chernyshova, Maryna; Czarski, Tomasz

    2014-11-01

    This paper describes the architecture of a new data acquisition system (DAQ) targeted mainly at plasma diagnostic experiments. Modular architecture, in combination with selected hardware components, allows for straightforward reconfiguration of the whole system, both offline and online. The main emphasis is on the implementation of the data transmission subsystem in said system. One of the biggest advantages of the described system is its modular architecture with well defined boundaries between the main components: analog frontend (AFE), digital backplane and acquisition/control software. Usage of FPGA chips allows for high flexibility in the design of analog frontends, including the ADC <--> FPGA interface. Data transmission between backplane boards and user software was accomplished with the use of industry-standard PCI Express (PCIe) technology. The PCIe implementation includes both FPGA firmware and a Linux device driver. High flexibility of PCIe connections was accomplished through the use of a configurable PCIe switch. Wherever possible, the described DAQ system makes use of standard off-the-shelf (OTS) components, including a typical x86 CPU & motherboard (acting as PCIe controller) and cabling.

  2. Simultaneous control of multiple instruments at the Advanced Technology Solar Telescope

    NASA Astrophysics Data System (ADS)

    Johansson, Erik M.; Goodrich, Bret

    2012-09-01

    The Advanced Technology Solar Telescope (ATST) is a 4-meter solar observatory under construction at Haleakala, Hawaii. The simultaneous use of multiple instruments is one of the unique capabilities that makes the ATST a premier ground based solar observatory. Control of the instrument suite is accomplished by the Instrument Control System (ICS), a layer of software between the Observatory Control System (OCS) and the instruments. The ICS presents a single narrow interface to the OCS and provides a standard interface for the instruments to be controlled. It is built upon the ATST Common Services Framework (CSF), an infrastructure for the implementation of a distributed control system. The ICS responds to OCS commands and events, coordinating and distributing them to the various instruments while monitoring their progress and reporting the status back to the OCS. The ICS requires no specific knowledge about the instruments. All information about the instruments used in an experiment is passed by the OCS to the ICS, which extracts and forwards the parameters to the appropriate instrument controllers. The instruments participating in an experiment define the active instrument set. A subset of those instruments must complete their observing activities in order for the experiment to be considered complete and are referred to as the must-complete instrument set. In addition, instruments may participate in eavesdrop mode, outside of the control of the ICS. All instrument controllers use the same standard narrow interface, which allows new instruments to be added without having to modify the interface or any existing instrument controllers.
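The active/must-complete distinction described above reduces to a simple set check; a minimal sketch (instrument names and function signature are illustrative, not the actual ICS API):

```python
def experiment_complete(finished: set, must_complete: set, active: set) -> bool:
    """An experiment is complete once every must-complete instrument (a
    subset of the active instrument set) has finished; other active or
    eavesdropping instruments do not gate completion."""
    assert must_complete <= active, "must-complete instruments must be active"
    return must_complete <= finished
```

For example, if only one of two active instruments is must-complete, the experiment finishes as soon as that one reports done, while the second may keep observing.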

  3. PCIE interface design for high-speed image storage system based on SSD

    NASA Astrophysics Data System (ADS)

    Wang, Shiming

    2015-02-01

This paper proposes and implements a standard interface for a miniaturized high-speed image storage system, which combines a PowerPC with an FPGA and utilizes the PCIE bus as the high-speed switching channel. Attached to the PowerPC, an mSATA-interface SSD (Solid State Drive) realizes RAID3 array storage. At the same time, a high-speed real-time image compression patent IP core, with domestically leading compression rate and image quality, can be embedded in the FPGA, enabling the system to record a higher image data rate or achieve a longer recording time. A notebook-memory-card buckle design is used for the mSATA-interface SSD, which makes it possible to complete a replacement in 5 seconds using a single hand, thus increasing the total length of repeated recordings. MSI (Message Signaled Interrupts) guarantees the stability and reliability of continuous DMA transmission. Furthermore, remote display, control, and upload-to-backup functions can be realized through the gigabit network alone. At an optional 25 frames/s or 30 frames/s, upload speeds can exceed 84 MB/s. Compared with existing FLASH-array high-speed memory systems, it has a higher degree of modularity, better stability, and higher efficiency in development, maintenance, and upgrading. Its data access rate is up to 300 MB/s, realizing miniaturization, standardization, and modularization of the high-speed image storage system, making it fit for image acquisition, storage, and real-time transmission to a server on mobile equipment.
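RAID3 byte-stripes data across the data drives and keeps XOR parity on a dedicated drive, so any single failed drive can be rebuilt. A minimal sketch of the scheme (illustrative, not the system's actual firmware):

```python
def raid3_write(data: bytes, n_data: int):
    """Byte-stripe `data` across n_data drives plus one XOR-parity drive."""
    if len(data) % n_data:
        data += b"\x00" * (n_data - len(data) % n_data)  # pad last stripe
    drives = [bytearray() for _ in range(n_data)]
    parity = bytearray()
    for i in range(0, len(data), n_data):
        stripe = data[i:i + n_data]
        for drive, byte in zip(drives, stripe):
            drive.append(byte)
        p = 0
        for byte in stripe:   # parity byte = XOR of the stripe
            p ^= byte
        parity.append(p)
    return [bytes(d) for d in drives], bytes(parity)

def raid3_recover(drives, parity, lost: int) -> bytes:
    """Rebuild the lost data drive by XOR-ing the survivors with parity."""
    rebuilt = bytearray(parity)
    for idx, drive in enumerate(drives):
        if idx != lost:
            for j, byte in enumerate(drive):
                rebuilt[j] ^= byte
    return bytes(rebuilt)
```

Because parity is a pure XOR, writes to all drives proceed in lockstep, which suits the sequential streaming workload of an image recorder.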

  4. An Analysis for an Internet Grid to Support Space Based Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert; McNair, Ann R. (Technical Monitor)

    2002-01-01

Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements have been used to support manned and unmanned mission critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate other approaches to providing mission services for space ground and flight operations. In various scientific disciplines, effort is under way to develop network/computing grids. These grids, consisting of networks and computing equipment, are enabling lower cost science. Specifically, earthquake research is headed in this direction. With a standard for network and computing interfaces using a grid, a researcher would not be required to develop and engineer NASA/DoD specific interfaces with the attendant increased cost. Use of the Internet Protocol (IP), the CCSDS packet spec, and Reed-Solomon coding for satellite error correction, etc., can be adopted/standardized to provide these interfaces. Generally most interfaces are developed at least to some degree end to end. This study would investigate the feasibility of using existing standards and protocols necessary to implement a SpaceOps Grid. New interface definitions, or adoption/modification of existing ones, are required for the various space operational services: voice (both space based and ground), video, telemetry, commanding, and planning may play a role to some undefined level. Security will be a separate focus in the study since security is such a large issue in using public networks. This SpaceOps Grid would be transparent to users. It would be analogous to the Ethernet protocol's ease of use in that a researcher would plug in their experiment or instrument at one end and would be connected to the appropriate host or server without further intervention. Free flyers would be in this category as well. 
They would be launched and would transmit without any further intervention by the researcher or ground ops personnel. The payback in developing these new approaches in support of manned and unmanned operations is lower cost, and it will enable direct participation by more people in organizations and educational institutions in space based science. By lowering the high cost of space based operations and networking, more resources will be available to the science community for science. With a specific grid in place, experiment development and operations would be much less costly by using standardized network interfaces. Because of the extensive connectivity on a global basis, significant numbers of people would participate in science who otherwise would not be able to participate.
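The CCSDS packet standard mentioned above defines a fixed 6-byte primary header for every space packet; a sketch of building it per the Space Packet Protocol (field widths are as specified by CCSDS; the default flag values are common choices, not mandated ones):

```python
import struct

def ccsds_primary_header(apid, seq_count, data_len, *, version=0,
                         pkt_type=0, sec_hdr=0, seq_flags=3):
    """Build the 6-byte CCSDS Space Packet primary header (big-endian).

    data_len is the number of octets in the packet data field; the header
    encodes data_len - 1, per the Space Packet Protocol.
    """
    # word 1: version (3 bits) | type (1) | secondary-header flag (1) | APID (11)
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    # word 2: sequence flags (2 bits) | sequence count (14 bits)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    return struct.pack(">HHH", word1, word2, data_len - 1)
```

Standardizing on this header is exactly what lets ground equipment route any experiment's telemetry by APID without mission-specific interfaces.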

  5. Digital hand atlas for web-based bone age assessment: system design and implementation

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    2000-04-01

A frequently used assessment method of skeletal age is atlas matching by a radiological examination of a hand image against a small set of Greulich-Pyle patterns of normal standards. The method however can lead to significant deviation in age assessment, due to a variety of observers with different levels of training. The Greulich-Pyle atlas, based on middle- and upper-class white populations in the 1950s, is also not fully applicable for children of today, especially regarding the standard development in other racial groups. In this paper, we present our system design and initial implementation of a digital hand atlas and computer-aided diagnostic (CAD) system for Web-based bone age assessment. The digital atlas will remove the disadvantages of the currently out-of-date one and allow the bone age assessment to be computerized and done conveniently via the Web. The system consists of a hand atlas database, a CAD module and a Java-based Web user interface. The atlas database is based on a large set of clinically normal hand images of diverse ethnic groups. The Java-based Web user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, are then extracted and compared with patterns from the atlas database to assess the bone age.

  6. Implementing Ethernet Services on the Payload Executive Processor (PEP)

    NASA Technical Reports Server (NTRS)

    Pruett, David; Guyette, Greg

    2016-01-01

The Ethernet interface is a more common and easier interface to implement for payload developers already familiar with the Ethernet protocol in their labs. The Ethernet interface allows for a more distributed payload architecture: connections can be placed in locations not serviced by the PEP 1553 bus. The Ethernet interface provides a new access port into the PEP so as to use the already existing services. The initial capability will include a subset of services, with a plan to expand services later.

  7. Standardized Solution for Management Controller for MTCA.4

    NASA Astrophysics Data System (ADS)

    Makowski, D.; Fenner, M.; Ludwig, F.; Mavrič, U.; Mielczarek, A.; Napieralski, A.; Perek, P.; Schlarb, H.

    2015-06-01

The Micro Telecommunications Computing Architecture (MTCA) standard is a modern platform that is gaining popularity in the area of High Energy Physics (HEP) experiments. The standard provides extensive management, monitoring and diagnostics functionalities. The hardware control and monitoring is based on the Intelligent Platform Management Interface (IPMI), which was initially developed for supervising the operation of complex computers. The original IPMI specification was extended to support functions required by the MTCA specification. A Module Management Controller (MMC) is required on each Advanced Mezzanine Card (AMC) installed in an MTCA chassis. The Rear Transition Modules (RTMs) have to be equipped with RTM Management Controllers (RMCs), as required by the MTCA.4 subsidiary specification. The commercially available implementations of MMC and RMC are expensive and do not provide the complete functionality required by specific HEP applications. Therefore, many research centers and commercial companies work on their own implementations of AMC and RTM controllers. The available implementations suffer from the lack of a common approach and from interoperability problems. Since both Lodz University of Technology (TUL) and Deutsches Elektronen-Synchrotron (DESY) have long-term experience in developing ATCA and MTCA hardware, the authors decided to develop a unified management controller solution fully compliant with the AMC and MTCA.4 standards. The MMC v1.00 solution is dedicated to the management of AMC and RTM modules. The MMC v1.00 is based on Atmel ATxmega MCUs and can be fully customized by the user or used as a drop-in module without any modifications. The paper discusses the functionality of the MMC v1.00 solution. The implementation was verified with the developed evaluation kits for AMC and RTM cards.
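IPMI messages such as those exchanged by an MMC are protected by a simple two's-complement checksum, and IPMB requests carry two of them (one over the connection header, one over the payload). A sketch of the arithmetic (addresses and command bytes are illustrative; LUN fields are assumed zero for brevity):

```python
def ipmi_checksum(data: bytes) -> int:
    """IPMI two's-complement checksum: chosen so that the modulo-256 sum
    of the covered bytes plus the checksum itself is zero."""
    return (-sum(data)) & 0xFF

def build_ipmb_request(rs_addr, netfn, rq_addr, rq_seq, cmd, data=b""):
    """Frame an IPMB request: connection header + checksum 1, then the
    requester/command payload + checksum 2."""
    head = bytes([rs_addr, netfn << 2])
    body = bytes([rq_addr, rq_seq << 2, cmd]) + data
    return head + bytes([ipmi_checksum(head)]) + \
           body + bytes([ipmi_checksum(body)])
```

A receiver validates each section by summing it together with its checksum and checking for zero modulo 256, which is what makes the scheme cheap on a small MCU.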

  8. Kernel-based Linux emulation for Plan 9.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, Ronald G.

    2010-09-01

CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
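The core of any such emulator is a dispatch table mapping foreign syscall numbers to native handlers, with unknown numbers answered by an error code. A toy sketch of the idea (the numbers and handlers are illustrative, not the actual CNK or Plan 9 tables):

```python
# Registry of native handlers keyed by foreign (Linux-style) syscall number.
NATIVE = {}

def handles(number):
    """Decorator registering a handler for a foreign syscall number."""
    def register(fn):
        NATIVE[number] = fn
        return fn
    return register

@handles(4)              # hypothetical 'write' syscall number
def sys_write(fd, buf):
    return len(buf)      # pretend the native write consumed everything

def emulate(number, *args):
    """Dispatch a trapped foreign syscall to its native implementation."""
    try:
        return NATIVE[number](*args)
    except KeyError:
        return -38       # -ENOSYS: no emulation for this syscall
```

In a kernel implementation the same table lookup happens in the trap handler; returning -ENOSYS for unmapped numbers is what lets Plan 9 and Linux system calls be intermixed safely.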

  9. A risk management approach to CAIS development

    NASA Technical Reports Server (NTRS)

    Hart, Hal; Kerner, Judy; Alden, Tony; Belz, Frank; Tadman, Frank

    1986-01-01

The proposed DoD standard Common APSE Interface Set (CAIS) was developed as a framework set of interfaces that will support the transportability and interoperability of tools in the support environments of the future. While the current CAIS version is a promising start toward fulfilling those goals and current prototypes provide adequate testbeds for investigations in support of completing specifications for a full CAIS, there are many reasons why the proposed CAIS might fail to become a usable product and the foundation of next-generation (1990s) project support environments such as NASA's Space Station software support environment. The most critical threats to the viability and acceptance of the CAIS include performance issues (especially in piggybacked implementations), transportability, and security requirements. To make the situation worse, the solution to some of these threats appears to be in conflict with the solutions to others.

  10. GlycoRDF: an ontology to standardize glycomics data in RDF

    PubMed Central

    Ranzinger, Rene; Aoki-Kinoshita, Kiyoko F.; Campbell, Matthew P.; Kawano, Shin; Lütteke, Thomas; Okuda, Shujiro; Shinmachi, Daisuke; Shikanai, Toshihide; Sawaki, Hiromichi; Toukach, Philip; Matsubara, Masaaki; Yamada, Issaku; Narimatsu, Hisashi

    2015-01-01

    Motivation: Over the last decades several glycomics-based bioinformatics resources and databases have been created and released to the public. Unfortunately, there is no common standard in the representation of the stored information or a common machine-readable interface allowing bioinformatics groups to easily extract and cross-reference the stored information. Results: An international group of bioinformatics experts in the field of glycomics have worked together to create a standard Resource Description Framework (RDF) representation for glycomics data, focused on glycan sequences and related biological source, publications and experimental data. This RDF standard is defined by the GlycoRDF ontology and will be used by database providers to generate common machine-readable exports of the data stored in their databases. Availability and implementation: The ontology, supporting documentation and source code used by database providers to generate standardized RDF are available online (http://www.glycoinfo.org/GlycoRDF/). Contact: rene@ccrc.uga.edu or kkiyoko@soka.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25388145
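As an illustration of the machine-readable export idea, a database provider's dump can be serialized as N-Triples, one line per (subject, predicate, object) statement. The URIs and glycan naming below are hypothetical placeholders, not the GlycoRDF vocabulary:

```python
def ntriples(triples) -> str:
    """Serialize (subject, predicate, object) tuples as N-Triples lines.
    Objects given as ('literal', text) become quoted literals; plain
    strings are treated as IRIs."""
    lines = []
    for s, p, o in triples:
        if isinstance(o, tuple):          # ('literal', 'Lewis x')
            obj = '"%s"' % o[1]
        else:
            obj = "<%s>" % o
        lines.append("<%s> <%s> %s ." % (s, p, obj))
    return "\n".join(lines)
```

Because every statement is self-contained, exports from different glycomics databases can be concatenated and cross-referenced by shared IRIs, which is the interoperability the ontology is after.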

  11. A Distributed Laboratory for Event-Driven Coastal Prediction and Hazard Planning

    NASA Astrophysics Data System (ADS)

    Bogden, P.; Allen, G.; MacLaren, J.; Creager, G. J.; Flournoy, L.; Sheng, Y. P.; Graber, H.; Graves, S.; Conover, H.; Luettich, R.; Perrie, W.; Ramakrishnan, L.; Reed, D. A.; Wang, H. V.

    2006-12-01

    The 2005 Atlantic hurricane season was the most active in recorded history. Collectively, 2005 hurricanes caused more than 2,280 deaths and record damages of over 100 billion dollars. Of the storms that made landfall, Dennis, Emily, Katrina, Rita, and Wilma caused most of the destruction. Accurate predictions of storm-driven surge, wave height, and inundation can save lives and help keep recovery costs down, provided the information gets to emergency response managers in time. The information must be available well in advance of landfall so that responders can weigh the costs of unnecessary evacuation against the costs of inadequate preparation. The SURA Coastal Ocean Observing and Prediction (SCOOP) Program is a multi-institution collaboration implementing a modular, distributed service-oriented architecture for real time prediction and visualization of the impacts of extreme atmospheric events. The modular infrastructure enables real-time prediction of multi- scale, multi-model, dynamic, data-driven applications. SURA institutions are working together to create a virtual and distributed laboratory integrating coastal models, simulation data, and observations with computational resources and high speed networks. The loosely coupled architecture allows teams of computer and coastal scientists at multiple institutions to innovate complex system components that are interconnected with relatively stable interfaces. The operational system standardizes at the interface level to enable substantial innovation by complementary communities of coastal and computer scientists. This architectural philosophy solves a long-standing problem associated with the transition from research to operations. The SCOOP Program thereby implements a prototype laboratory consistent with the vision of a national, multi-agency initiative called the Integrated Ocean Observing System (IOOS). 
Several service-oriented components of the SCOOP enterprise architecture have already been designed and implemented, including data archive and transport services, metadata registry and retrieval (catalog), resource management, and portal interfaces. SCOOP partners are integrating these at the service level and implementing reconfigurable workflows for several kinds of user scenarios, and are working with resource providers to prototype new policies and technologies for on-demand computing.

  12. Enabling Cross-Platform Clinical Decision Support through Web-Based Decision Support in Commercial Electronic Health Record Systems: Proposal and Evaluation of Initial Prototype Implementations

    PubMed Central

    Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku

    2013-01-01

    Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
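The staged approach described can be pictured as CDS logic written once against an abstract interface, with per-vendor adapters (custom APIs first, standardized APIs later) supplying the patient data. A sketch under that reading; all class, field, and threshold names below are illustrative, not any vendor's API:

```python
from abc import ABC, abstractmethod

class EHRAdapter(ABC):
    """Vendor-neutral facade the CDS logic depends on."""
    @abstractmethod
    def get_observation(self, patient_id: str, code: str) -> float: ...

class VendorXAdapter(EHRAdapter):
    """Stand-in for one vendor's custom API; a standards-based adapter
    (e.g. over a standardized EHR API) would slot in identically."""
    def __init__(self, store):
        self.store = store
    def get_observation(self, patient_id, code):
        return self.store[(patient_id, code)]

def ldl_alert(ehr: EHRAdapter, patient_id: str) -> bool:
    """CDS rule written once: alert when LDL exceeds a chosen threshold."""
    return ehr.get_observation(patient_id, "ldl") > 190
```

Migrating from custom to standardized APIs then means replacing adapters, not rewriting rules, which is what makes the approach scale across EHR platforms.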

  13. Interface Provides Standard-Bus Communication

    NASA Technical Reports Server (NTRS)

    Culliton, William G.

    1995-01-01

Microprocessor-controlled interface (IEEE-488/LVABI) incorporates service-request and direct-memory-access features. It is a circuit card enabling digital communication between the system called "laser auto-covariance buffer interface" (LVABI) and a compatible personal computer via a general-purpose interface bus (GPIB) conforming to Institute of Electrical and Electronics Engineers (IEEE) Standard 488. The interface serves as a second interface enabling the first interface to exploit the advantages of GPIB, via utility software written specifically for GPIB. Advantages include compatibility with multitasking and support of communication among multiple computers. The basic concept is also applied in designing interfaces for circuits other than the LVABI for unidirectional or bidirectional handling of parallel data up to 16 bits wide.

  14. Using Standardized Lexicons for Report Template Validation with LexMap, a Web-based Application.

    PubMed

    Hostetter, Jason; Wang, Kenneth; Siegel, Eliot; Durack, Jeremy; Morrison, James J

    2015-06-01

    An enormous amount of data exists in unstructured diagnostic and interventional radiology reports. Free text or non-standardized terminologies limit the ability to parse, extract, and analyze these report data elements. Medical lexicons and ontologies contain standardized terms for relevant concepts including disease entities, radiographic technique, and findings. The use of standardized terms offers the potential to improve reporting consistency and facilitate computer analysis. The purpose of this project was to implement an interface to aid in the creation of standards-compliant reporting templates for use in interventional radiology. Non-standardized procedure report text was analyzed and referenced to RadLex, SNOMED-CT, and LOINC. Using JavaScript, a web application was developed which determined whether exact terms or synonyms in reports existed within these three reference resources. The NCBO BioPortal Annotator web service was used to map terms, and output from this application was used to create an interactive annotated version of the original report. The application was successfully used to analyze and modify five distinct reports for the Society of Interventional Radiology's standardized reporting project.
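The exact-or-synonym matching step can be sketched as a lookup over an inverted index built from the lexicon. The entries below are a toy lexicon for illustration, not actual RadLex, SNOMED-CT, or LOINC content, and the real application maps terms via the NCBO BioPortal Annotator service rather than locally:

```python
# Toy lexicon: canonical term -> concept id and synonyms (illustrative).
LEXICON = {
    "embolization": {"id": "RID_X1", "synonyms": {"embolisation", "embolotherapy"}},
    "stenosis": {"id": "RID_X2", "synonyms": {"narrowing"}},
}

def annotate(report: str) -> dict:
    """Map report words to lexicon concept ids by exact or synonym match."""
    index = {}
    for term, entry in LEXICON.items():
        index[term] = entry["id"]
        for syn in entry["synonyms"]:
            index[syn] = entry["id"]
    hits = {}
    for word in report.lower().replace(".", " ").replace(",", " ").split():
        if word in index:
            hits[word] = index[word]
    return hits
```

Mapping synonyms to one concept id is what lets a template validator flag non-standard wording while still recognizing it.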

  15. Radiation-Hard SpaceWire/Gigabit Ethernet-Compatible Transponder

    NASA Technical Reports Server (NTRS)

    Katzman, Vladimir

    2012-01-01

A radiation-hard transponder was developed utilizing submicron/nanotechnology from IBM. The device consumes low power and has a low fabrication cost. This device utilizes a Plug-and-Play concept, and can be integrated into intra-satellite networks, supporting SpaceWire and Gigabit Ethernet I/O. A space-qualified, 100-pin package also was developed, allowing space-qualified (class K) transponders to be delivered within a six-month time frame. The novel, optical, radiation-tolerant transponder was implemented as a standalone board, containing the transponder ASIC (application specific integrated circuit) and optical module, with an FPGA (field-programmable gate array) friendly parallel interface. It features improved radiation tolerance; high data rate; low power consumption; and advanced functionality. The transponder utilizes a patented current mode logic library of radiation-hardened-by-architecture cells. The transponder was developed, fabricated, and radiation-hardness tested up to 1 MRad. It was fabricated using the 90-nm CMOS (complementary metal oxide semiconductor) 9SF process from IBM, and incorporates full BIT circuitry, allowing a loop-back test. The low-speed parallel LVCMOS (low-voltage complementary metal oxide semiconductor) bus is compatible with Actel FPGAs. The output LVDS (low-voltage differential signaling) interface operates up to 1.5 Gb/s. Built-in CDR (clock-data recovery) circuitry provides robust synchronization and incorporates two alarm signals, sync loss and signal loss. The ultra-linear peak detector scheme allows on-line control of the amplitude of the input signal. Power consumption is less than 300 mW. The developed transponder with a 1.25 Gb/s serial data rate incorporates a 10-to-1 serializer with an internal clock multiplication unit and a 1-to-10 deserializer with an internal clock and data recovery block, which can operate with 8B10B encoded signals. Three loop-back test modes are provided to facilitate the built-in-test functionality. 
The design is based on a proprietary library of differential current switching logic cells implemented in the standard 90-nm CMOS 9SF technology from IBM. The proprietary low-power LVDS physical interface is fully compatible with the SpaceWire standard, and can be directly connected to the SFP MSA (small form factor pluggable Multiple Source Agreement) optical transponder. The low-speed parallel interfaces are fully compatible with the standard 1.8 V CMOS input/output devices. The utilized proprietary annular CMOS layout structures provide TID tolerance above 1.2 MRad. The complete chip consumes less than 150 mW of power from a single 1.8-V positive supply source.

  16. A methodology for Manufacturing Execution Systems (MES) implementation

    NASA Astrophysics Data System (ADS)

    Govindaraju, Rajesri; Putra, Krisna

    2016-02-01

A manufacturing execution system (MES) is an information systems (IS) application that bridges the gap between IS at the top level, namely enterprise resource planning (ERP), and IS at the lower levels, namely the automation systems. MES provides a medium for optimizing the manufacturing process as a whole on a real-time basis. By using MES in combination with ERP and other automation systems, a manufacturing company is expected to achieve high competitiveness. In implementing MES, functional integration (making all the components of the manufacturing system work well together) is the most difficult challenge. For this, there is an industry standard that specifies the sub-systems of a manufacturing execution system and defines the boundaries between ERP systems, MES, and other automation systems. The standard is known as ISA-95. Although the advantages of using MES have been stated in several studies, not much research has been done on how to implement MES effectively. The purpose of this study is to develop a methodology describing how an MES implementation project should be managed, utilising the support of the ISA-95 reference model in the system development process. A proposed methodology was developed based on a general IS development methodology and was then revisited based on an understanding of the specific characteristics of MES implementation projects, gained from an implementation case at an Indonesian steel manufacturing company. The case study highlighted the importance of applying an effective requirement elicitation method during the initial system assessment process, managing system interfaces and the division of labor in the design process, and performing a pilot deployment before putting the whole system into operation.

  17. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation is based on the existing rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area or data within a particular range of pixel values can be selected. 
Queries on multiple surfaces can be constructed to calculate, for example, the thickness between two surfaces in a 3D model or the depth from ground surface to the top of a particular geologic unit. In the first version of the service a simple interface showing some example queries has been implemented in order to show the potential of the technologies. The project aims to develop the services available in light of user feedback, both in terms of the data available, the functionality and the interface. User feedback on the services guides the software and standards development aspects of the project, leading to enhanced versions of the software which will be implemented in upgraded versions of the services during the lifetime of the project.
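A thickness query of the kind described is just a string in the WCPS language; a sketch of composing one in WCPS 1.0-style syntax (the coverage and axis names below are illustrative, not the actual BGS service's identifiers):

```python
def wcps_thickness_query(top: str, base: str, lat, lon, fmt="csv") -> str:
    """Compose a WCPS query returning the thickness between two surface
    coverages over a geographic bounding box."""
    bbox = "Lat(%s:%s), Long(%s:%s)" % (lat[0], lat[1], lon[0], lon[1])
    return ('for t in (%s), b in (%s) '
            'return encode(t[%s] - b[%s], "%s")'
            % (top, base, bbox, bbox, fmt))

query = wcps_thickness_query("glasgow_top", "glasgow_base",
                             (55.8, 55.9), (-4.3, -4.2))
```

The subtraction happens server-side over the subset only, which is the point of WCPS: the client never downloads the full coverages.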

  18. Modeling development of converter topologies and control for BTB voltage source converters. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, L.

    1998-08-01

This report presents the results of an investigation into the merits of using a back-to-back voltage source converter (BTB-VSC) as an alternative to a conventional back-to-back high voltage DC link (HVDC). The report presents the basic benefits of the new technology along with the basic control blocks needed to implement the design. The report also describes a model of the BTB-VSC implemented in EMTDC™ and discusses the use of the model. Simulation results, showing how the model responds to various control actions and system disturbances, are presented. This modeling work developed a detailed EMTDC™ model using the appropriate converter technology and magnetic interface configuration. Various possible converter and magnetic interface configurations were examined and the most promising configuration was used for the model. The chosen configuration minimizes the number of high voltage transformers needed and minimizes the complexity of non-standard interfacing transformers. There is no need for transformers with phase shifts other than zero or thirty degrees (wye-wye or wye-delta). The only non-standard feature is the necessity of bringing the neutral side of the high voltage winding on the wye-wye unit out through bushings and to insulate the wye-wye transformer for the system voltage, which is twice the transformer winding voltage. The developed EMTDC™ model was used to demonstrate the possibility of achieving independent control of the real power transmitted and the voltages at the AC terminals. The model also demonstrates the ability to interconnect weak AC systems without the necessity of additional voltage support equipment as is the case with the conventional back-to-back DC interconnection. The model has been shown to work with short circuit ratios less than 2 based on the total rating of the high voltage transformers.

  19. Implementation of a patient-facing genomic test report in the electronic health record using a web-application interface.

    PubMed

    Williams, Marc S; Kern, Melissa S; Lerch, Virginia R; Billet, Jonathan; Williams, Janet L; Moore, Gregory J

    2018-05-30

    Genomic medicine is emerging into clinical care. Communication of genetic laboratory results to patients and providers is hampered by the complex technical nature of the laboratory reports. This can lead to confusion and misinterpretation of the results resulting in inappropriate care. Patients usually do not receive a copy of the report leading to further opportunities for miscommunication. To address these problems, interpretive reports were created using input from the intended end users, patients and providers. This paper describes the technical development and deployment of the first patient-facing genomic test report (PGR) within an electronic health record (EHR) ecosystem using a locally developed standards-based web-application interface. A patient-facing genomic test report with a companion provider report was configured for implementation within the EHR using a locally developed software platform, COMPASS™. COMPASS™ is designed to manage secure data exchange, as well as patient and provider access to patient reported data capture and clinical display tools. COMPASS™ is built using a Software as a Service (SaaS) approach which exposes an API that apps can interact with. An authoring tool was developed that allowed creation of patient-specific PGRs and the accompanying provider reports. These were converted to a format that allowed them to be presented in the patient portal and EHR respectively using the existing COMPASS™ interface thus allowing patients, caregivers and providers access to individual reports designed for the intended end user. The PGR as developed was shown to enhance patient and provider communication around genomic results. It is built on current standards but is designed to support integration with other tools and be compatible with emerging opportunities such as SMART on FHIR. This approach could be used to support genomic return of results as the tool is scalable and generalizable.

  20. Economic analysis of standard interface modules for use with the multi-mission spacecraft, volume 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A preliminary technical and economic feasibility study was made of the use of Standard Interface Modules (SIM) to perform electrical interfacing functions that were historically incorporated into sensors. Sensor interface functions capable of standardization across the set of missions planned for the NASA Multi-Mission Spacecraft (MMS) in the 1981 to 1985 time period were identified. The cost savings that could be achieved by replacing the nonstandard sensor interface flight hardware that might be used in these missions with SIMs were examined.

  1. Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays

    NASA Astrophysics Data System (ADS)

    Alexander, Jon; Keller, Tim

    2007-04-01

    ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support requirements for military video systems including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.

  2. Monolithic integration of GMR sensors for standard CMOS-IC current sensing

    NASA Astrophysics Data System (ADS)

    De Marcellis, A.; Reig, C.; Cubells-Beltrán, M.-D.; Madrenas, J.; Santos, J. D.; Cardoso, S.; Freitas, P. P.

    2017-09-01

    In this work we report on the development of Giant Magnetoresistive (GMR) sensors for off-line current measurements in standard integrated circuits. An ASIC has been specifically designed and fabricated in the well-known AMS 0.35 μm CMOS technology, including the electronic circuitry for sensor interfacing. It implements an oscillating circuit performing a voltage-to-frequency conversion. Subsequently, a fully CMOS-compatible low-temperature post-process has been applied to deposit the GMR sensing devices in a full-bridge configuration onto the buried current straps. The sensitivity and resolution of these sensors have been investigated; experimental results show a detection sensitivity of about 100 Hz/mA and a resolution of about 5 μA.
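
Since the readout is a voltage-to-frequency conversion, the measured current follows directly from the frequency shift divided by the reported sensitivity. The sketch below illustrates that inversion; the function name, baseline frequency, and numbers other than the ~100 Hz/mA figure are illustrative assumptions, not values from the paper.

```python
# Hypothetical readout conversion for a V-to-F GMR current sensor.
# Only the ~100 Hz/mA sensitivity comes from the abstract; the rest is assumed.

def frequency_to_current_ma(f_measured_hz: float, f_baseline_hz: float,
                            sensitivity_hz_per_ma: float = 100.0) -> float:
    """Invert the voltage-to-frequency readout: delta-f / sensitivity."""
    return (f_measured_hz - f_baseline_hz) / sensitivity_hz_per_ma

# A 100 Hz shift from an assumed 10 kHz baseline corresponds to about 1 mA.
current_ma = frequency_to_current_ma(10_100.0, 10_000.0)
```

At the stated ~5 μA resolution, the smallest resolvable frequency shift would be on the order of 0.5 Hz.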

  3. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, increasing the grid resolution yields increasingly accurate solutions.
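
To make the flux-scaling idea concrete, here is a deliberately simplified 1D sketch: an explicit finite-difference diffusion update in which each inter-cell flux is multiplied by a per-interface factor. In the paper's conformal technique those factors are derived from the local surface normal at material boundaries; here they are simply given, so this is an illustration of the update structure, not the authors' scheme.

```python
# Illustrative 1D flux-scaled explicit diffusion step (not the paper's scheme).
# flux_scale[i] multiplies the flux across the interface between cells i and i+1.

def diffusion_step(T, alpha, dt, dx, flux_scale):
    """One explicit Euler step of dT/dt = alpha * d2T/dx2 with scaled fluxes."""
    n = len(T)
    flux = [flux_scale[i] * alpha * (T[i + 1] - T[i]) / dx for i in range(n - 1)]
    T_new = list(T)
    for i in range(n):
        left = flux[i - 1] if i > 0 else 0.0    # insulated outer boundaries
        right = flux[i] if i < n - 1 else 0.0
        T_new[i] = T[i] + dt / dx * (right - left)
    return T_new

# A uniform temperature field is a steady state whatever the interface factors.
T0 = [37.0] * 5
T1 = diffusion_step(T0, alpha=0.1, dt=0.01, dx=0.1, flux_scale=[1.0, 0.8, 0.8, 1.0])
```

Because the scaling enters only through the fluxes, an existing update loop can adopt it without changing its stencil, which is the integration advantage the abstract highlights.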

  4. FELIX: The new detector readout system for the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Ryu, Soo; ATLAS TDAQ Collaboration

    2017-10-01

    After the Phase-I upgrades (2019) of the ATLAS experiment, the Front-End Link eXchange (FELIX) system will be the interface between the data acquisition system and the detector front-end and trigger electronics. FELIX will function as a router between custom serial links and a commodity switch network using standard technologies (Ethernet or Infiniband) to communicate with commercial data collecting and processing components. The system architecture of FELIX will be described and the status of the firmware implementation and hardware development currently in progress will be presented.

  5. Porting of an FPGA Based High Data Rate DVB-S2 Modulator

    DTIC Science & Technology

    2011-06-13

    broadcast satellite market. The physical layer is detailed in the ETSI EN 302 307 V1.1.2 (2006-06) standard. The waveform has seen broad adoption and... [figure residue omitted: block diagram with DDS, DAC interface (optional), and RRC filter] ...one vendor-independent implementation, and one from Xilinx, which is... at 37-38 is shown in Figure 6. Additionally, the HDR DVB-S2 waveform running on the BDR-1 was tested for interoperability at the physical layer

  6. The Effect of Prosthetic Socket Interface Design on Socket Comfort, Residual Limb Health, and Function for the Transfemoral Amputee

    DTIC Science & Technology

    2017-10-01

    significantly lower trim lines, without ischial containment, compared with a traditional interface. However, these alternative designs could compromise... overall function compared to the standard of care interface design. Therefore the focus of this clinical trial is to determine if the DS and Sub-I... alternative interface designs will improve socket comfort, residual limb health, and function compared to the standard of care IRC interface design.

  7. Advanced aerosense display interfaces

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Meyer, Frederick M.

    1998-09-01

    High-resolution display technologies are being developed to meet the ever-increasing demand for realistic detail. The requirement for ever more visual information exceeds the capacity of fielded aerospace display interfaces. In this paper we begin an exploration of display interfaces and evolving aerospace requirements. Current and evolving standards for avionics, commercial, and flat panel displays are summarized and compared to near-term goals for military and aerospace applications. Aerospace and military requirements prior to 2005, up to UXGA and digital HDTV resolution, can be met by using commercial interface standard developments. Advanced aerospace applications require yet higher resolutions (2560 x 2048 color pixels, 5120 x 4096 color pixels at 85 Hz, etc.) and necessitate the discussion initiated herein of an 'ultra digital interface standard (UDIS)', which includes 'smart interface' features such as large memory and a blazingly fast resizing microcomputer. Interface capacity, I_T, increased about 10^5-fold from 1973 to 1998; another factor of 10^2 is needed for UDIS.

  8. Revisiting Training and Verification Process Implementation for Risk Reduction on New Missions at NASA Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Bryant, Larry W.; Fragoso, Ruth S.

    2007-01-01

    In 2003 we proposed an effort to develop a core program of standardized training and verification practices, along with standards against which the implementation of these practices could be measured. The purpose was to provide another means of risk reduction for deep space missions, to preclude a repeat of the tragedies of the 1998 Mars missions. We identified six areas where the application of standards and standardization would benefit the overall readiness process for flight projects at JPL. These are Individual Training, Team Training, Interface and Procedure Development, Personnel Certification, Interface and Procedure Verification, and Operations Readiness Testing. In this paper we discuss the progress that has been made in developing the proposed infrastructure in each of these areas. Specifically, we address the Position Training and Certification Standards that are now available for each operational position found on our Flight Operations Teams (FOT). We also discuss the MGSS Baseline Flight Operations Team Training Plan, which can be tailored for each new flight project at JPL. As these tasks have been progressing, the climate and emphasis for Training and for V and V at JPL have changed, and we have learned about the expansion, growth, and limitations in the roles of traditional positions at JPL such as the Project's Training Engineer, V and V Engineer, and Operations Engineer. The need to keep a tight rein on budgets has led to a merging and/or reduction of these positions, which poses challenges to individual capacities and capabilities. We examine the evolution of these processes and the roles involved while taking a look at the impact, or potential impact, of our proposed training-related infrastructure tasks.
As we conclude our examination of the changes taking place for new flight projects, we see that proceeding with our proposed tasks, and adapting them to the changing climate, remains an important element in reducing the risk in the challenging business of space exploration.

  9. Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems

    NASA Technical Reports Server (NTRS)

    Ponyik, Joseph G.; York, David W.

    2002-01-01

    Embedded systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost in terms of development time and maintenance effort. World Wide Web standards have been developed over the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing the details of the system at the other end of the interface. Embedded Web Technology is the merging of embedded systems with the World Wide Web. It decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an embedded system's internal network.

  10. High density, multi-range analog output Versa Module Europa board for control system applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kundan, E-mail: kundan@iuac.res.in; Das, Ajit Lal

    2014-01-15

    A new VMEDAC64, a 12-bit, 64-channel digital-to-analog converter Versa Module Europa (VME) module featuring 64 analog voltage outputs with user-selectable multiple ranges, has been developed for control system applications at the Inter University Accelerator Centre. An FPGA (Field Programmable Gate Array) is the module's core: it implements the DAC control logic and the VMEbus slave interface logic. Both are designed and implemented on a single FPGA chip to achieve a high density of 64 channels in a single-width VME module, which reduces the module count in control system applications and hence the power consumption and cost of the overall system. One of our early design goals was to develop the VME interface such that it can be easily integrated with peripheral devices and satisfy the timing specifications of the VME standard. The modular design reduces the time required to develop other custom modules for the control system. The VME slave interface is written as a single component inside the FPGA, which can be used as a basic building block for any VMEbus interface project. The module offers multiple output voltage ranges depending upon the requirement; the output voltage range can be reduced or expanded by writing range-selection bits in the control register. The module has a programmable refresh rate; by default, the hold capacitors in the sample-and-hold circuit for each channel are recharged periodically every 7.040 ms (i.e., an update frequency of 284 Hz). Each channel has a software-controlled output switch which disconnects the analog output from the field. The modularity of the firmware design on the FPGA makes debugging very easy. On-board DC/DC converters provide an isolated power supply for the analog section of the board.
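
The abstract says the output range is selected by writing range-selection bits into a control register. As a sketch of what such a register write might look like, the bit layout, range codes, and field positions below are pure assumptions for illustration; the actual VMEDAC64 register map is not described in the abstract.

```python
# Hypothetical control-word packing for a multi-range, 64-channel DAC module.
# The bit layout (range in bits 0-1, channel in bits 2-7, output switch in
# bit 8) is an assumption, not the real VMEDAC64 register map.

RANGES = {"0-5V": 0b00, "0-10V": 0b01, "+/-5V": 0b10, "+/-10V": 0b11}

def control_word(channel: int, range_name: str, output_enabled: bool) -> int:
    """Pack channel, range-selection bits, and the per-channel output switch."""
    assert 0 <= channel < 64
    return (int(output_enabled) << 8) | (channel << 2) | RANGES[range_name]

word = control_word(channel=3, range_name="+/-10V", output_enabled=True)
```

In the real module this word would be written over the VMEbus to the FPGA's control register; here it just demonstrates how a handful of register bits can select among multiple output ranges per channel.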

  11. Parallel Execution of Functional Mock-up Units in Buildings Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; New, Joshua Ryan

    2016-06-30

    A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when a large number of states and state events are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up the parallel execution of multiple FMUs.
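
The co-simulation pattern described above can be sketched compactly: several FMU-like units are each advanced one Euler step per global step, and the per-step work is farmed out to a worker pool (standing in for the report's OpenMP solver). The `StubFMU` class is a toy stand-in, not the real FMI C API, and the rate constants are invented.

```python
# Minimal parallel co-simulation sketch with stub FMUs (illustrative only).
from concurrent.futures import ThreadPoolExecutor

class StubFMU:
    """Toy unit with one state obeying dx/dt = -k * x."""
    def __init__(self, x0: float, k: float):
        self.x, self.k = x0, k

    def do_step(self, dt: float) -> None:
        self.x += dt * (-self.k * self.x)   # explicit Euler update

fmus = [StubFMU(1.0, k) for k in (0.5, 1.0, 2.0)]
dt, steps = 0.001, 1000
with ThreadPoolExecutor(max_workers=3) as pool:
    for _ in range(steps):
        # All units advance one step before anyone proceeds (a global sync
        # point per step), mirroring the report's lockstep Euler solver.
        list(pool.map(lambda f: f.do_step(dt), fmus))
```

The load-balancing point in the report shows up even in this sketch: if one unit's `do_step` is much slower than the others, every global step waits for it, capping the speedup.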

  12. A unified approach for composite cost reporting and prediction in the ACT program

    NASA Technical Reports Server (NTRS)

    Freeman, W. Tom; Vosteen, Louis F.; Siddiqi, Shahid

    1991-01-01

    The Structures Technology Program Office (STPO) at NASA Langley Research Center has held two workshops with representatives from the commercial airframe companies to establish a plan for development of a standard cost reporting format and a cost prediction tool for conceptual and preliminary designers. This paper reviews the findings of the workshop representatives with a plan for implementation of their recommendations. The recommendations of the cost tracking and reporting committee will be implemented by reinstituting the collection of composite part fabrication data in a format similar to the DoD/NASA Structural Composites Fabrication Guide. The process of data collection will be automated by taking advantage of current technology with user friendly computer interfaces and electronic data transmission. Development of a conceptual and preliminary designers' cost prediction model will be initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state of the art preliminary design tools and computer aided design (CAD) programs is assessed.

  13. Fractional-N phase-locked loop for split and direct automatic frequency control in A-GPS

    NASA Astrophysics Data System (ADS)

    Park, Chester Sungchung; Park, Sungkyung

    2018-07-01

    A low-power mixed-signal phase-locked loop (PLL) is modelled and designed for the DigRF interface between the RF chip and the modem chip. An assisted-GPS or A-GPS multi-standard system includes the DigRF interface and uses the split automatic frequency control (AFC) technique. The PLL circuitry uses the direct AFC technique and is based on the fractional-N architecture using a digital delta-sigma modulator along with a digital counter, fulfilling simple ultra-high-resolution AFC with robust digital circuitry and its timing. Relative to the output frequency, the measured AFC resolution or accuracy is <5 parts per billion (ppb) or on the order of a Hertz. The cycle-to-cycle rms jitter is <6 ps and the typical settling time is <30 μs. A spur reduction technique is adopted and implemented as well, demonstrating spur reduction without employing dithering. The proposed PLL includes a low-leakage phase-frequency detector, a low-drop-out regulator, power-on-reset circuitry and precharge circuitry. The PLL is implemented in a 90-nm CMOS process technology with 1.2 V single supply. The overall PLL draws about 1.1 mA from the supply.
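
The fractional-N idea behind this PLL is that a delta-sigma modulator dithers the integer divide ratio so that its long-run average equals the desired fractional value. The sketch below uses a first-order modulator (a plain accumulator) to show the principle; the paper's digital delta-sigma modulator and counter arrangement is more elaborate, and the numbers here are illustrative.

```python
# First-order delta-sigma dithering of a divide ratio (illustrative sketch).

def delta_sigma_divide_ratios(n_int: int, frac: float, cycles: int):
    """Return per-cycle divide ratios averaging to n_int + frac."""
    acc, ratios = 0.0, []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:               # accumulator carry: divide by N+1 this cycle
            acc -= 1.0
            ratios.append(n_int + 1)
        else:                        # otherwise divide by N
            ratios.append(n_int)
    return ratios

ratios = delta_sigma_divide_ratios(n_int=100, frac=0.25, cycles=8)
average = sum(ratios) / len(ratios)   # converges to 100.25 over full periods
```

Because the instantaneous ratio only toggles between N and N+1, the fractional resolution is set by the modulator's word length rather than the divider hardware, which is what enables the ultra-high-resolution AFC the abstract describes.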

  14. Low-cost and high-speed optical mark reader based on an intelligent line camera

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin

    2003-08-01

    Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient, provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR, and that data integrity is checked and the data validated before processing. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods have been developed and implemented in an intelligent line scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a micro-controller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper the proposed OMR system is compared with commercially available systems on the market.
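
The core decision step in any such system reduces each answer bubble to a fill ratio (fraction of dark pixels) and declares a cell marked when the ratio exceeds a threshold, rejecting blank or double-marked rows. The sketch below illustrates that logic; the threshold, layout, and rejection policy are assumptions, not details of the described prototype.

```python
# Illustrative mark-detection step for an OMR answer sheet (assumed layout).

def detect_marks(fill_ratios, threshold=0.5):
    """fill_ratios: rows of per-bubble darkness in [0, 1].
    Returns the chosen option index per row, or None if blank/ambiguous."""
    answers = []
    for row in fill_ratios:
        marked = [i for i, r in enumerate(row) if r > threshold]
        answers.append(marked[0] if len(marked) == 1 else None)
    return answers

sheet = [[0.05, 0.92, 0.08, 0.10],   # question 1: option B marked
         [0.04, 0.06, 0.07, 0.03],   # question 2: blank
         [0.80, 0.75, 0.05, 0.02]]   # question 3: double mark -> rejected
result = detect_marks(sheet)
```

Flagging ambiguous rows rather than guessing is exactly the kind of data-integrity check the opening sentences call for, pushing doubtful sheets to the verification stage instead of silently mis-scoring them.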

  15. European Multidisciplinary seafloor and the Observatory of the water column for Development; The setup of an interoperable Generic Sensor Module

    NASA Astrophysics Data System (ADS)

    Danobeitia, J.; Oscar, G.; Bartolomé, R.; Sorribas, J.; Del Rio, J.; Cadena, J.; Toma, D. M.; Bghiel, I.; Martinez, E.; Bardaji, R.; Piera, J.; Favali, P.; Beranzoli, L.; Rolin, J. F.; Moreau, B.; Andriani, P.; Lykousis, V.; Hernandez Brito, J.; Ruhl, H.; Gillooly, M.; Terrinha, P.; Radulescu, V.; O'Neill, N.; Best, M.; Marinaro, G.

    2016-12-01

    European Multidisciplinary seafloor and the Observatory of the water column for Development (EMSODEV) is a Horizon 2020 EU project whose overall objective is the operationalization of eleven marine observatories and four test sites distributed throughout Europe, from the Arctic to the Atlantic and from the Mediterranean to the Black Sea. The whole infrastructure is managed by the European consortium EMSO-ERIC (European Research Infrastructure Consortium) with the participation of 8 European countries and other partner countries. We are now implementing a Generic Sensor Module (EGIM) within the EMSO ERIC distributed marine research infrastructure. Our involvement is mainly in developing standards-compliant generic software for Sensor Web Enablement (SWE) on the EGIM device. The main goal of this development is to support sensor data acquisition on a new interoperable EGIM system. The EGIM software structure is made up of one acquisition layer located between the recorded data at the EGIM module and the data management services. Two main interfaces are therefore implemented: the first assures EGIM hardware acquisition, and the second allows pushing and pulling data from the data management layer (compliant with the Sensor Web Enablement standard). All software components used are open-source licensed and have been configured to manage different roles on the whole system (52°North SOS Server, Zabbix Monitoring System). The acquisition data module has been implemented with the aim of joining all components for EGIM data acquisition and a server fulfilling the SOS standard interface. The system is complete and awaiting the first laboratory bench test and a shallow-water test connection to the OBSEA node, offshore Vilanova i la Geltrú (Barcelona, Spain).
The EGIM module will record a wide range of ocean parameters in a long-term consistent, accurate, and comparable manner, spanning disciplines such as biology, geology, chemistry, physics, engineering, and computer science, from polar to subtropical environments, through the water column down to the deep sea. The measurements recorded across the EMSO nodes are critical for responding accurately to social and scientific challenges such as climate change, changes in marine ecosystems, and marine hazards.

  16. Update Of The ACR-NEMA Standard Committee

    NASA Astrophysics Data System (ADS)

    Wang, Yen; Best, D. E.; Morse, R. R.; Horii, S. C.; Lehr, J. L.; Lodwick, G. S.; Fuscoe, C.; Nelson, O. L.; Perry, J. R.; Thompson, B. G.; Wessell, W. R.

    1988-06-01

    In January 1984, the American College of Radiology (ACR), representing the users of imaging equipment, and the National Electrical Manufacturers Association (NEMA), representing the manufacturers of imaging equipment, joined forces to create a committee that could solve the compatibility issues surrounding the exchange of digital medical images. This committee, the ACR-NEMA Digital Imaging and Communication Standards Committee, was composed of radiologists and experts from industry who addressed the problems involved in interfacing different digital imaging modalities. In just two years, the committee and three of its working groups created an industry standard interface, ACR-NEMA Digital Imaging and Communications Standard, Publication No. 300-1985. The ACR-NEMA interface allows digital medical images and related information to be communicated between different imaging devices, regardless of manufacturer or use of differing image formats. The interface is modeled on the International Standards Organization's Open Systems Interconnection seven-layer reference model. It is believed that the development of the interface was the first step in the development of standards for Medical Picture Archiving and Communications Systems (PACS). Developing the interface standard has required intensive technical analysis and examination of the future trends for digital imaging in order to design a model which would not be quickly outmoded. To continue the enhancement and future development of image management systems, various working groups have been created under the direction of the ACR-NEMA Committee.

  17. Interface standards for computer equipment

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The ability to configure data systems using modules provided by independent manufacturers is complicated by the wide range of electrical, mechanical, and functional characteristics exhibited within the equipment provided by different manufacturers of computers, peripherals, and terminal devices. A number of international organizations were, and still are, involved in the creation of standards that enable devices to be interconnected with minimal difficulty, usually involving only a cable or data bus connection that is defined by the standard. The elements covered by an interface standard are described, and the most prominent interface standards presently in use are identified and described.

  18. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  19. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    NASA Astrophysics Data System (ADS)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.
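
Approach (iv), embedding the simulation specification in an existing language, means the user composes the problem out of ordinary objects instead of editing configuration files. The toy sketch below conveys the flavor of such an embedded DSL; the class and attribute names are invented for illustration and do not reproduce the paper's actual Python interface for OOMMF.

```python
# Toy micromagnetics-flavored embedded DSL (invented names, illustrative only).

class Mesh:
    def __init__(self, cells, cellsize):
        self.cells, self.cellsize = cells, cellsize

class Exchange:
    def __init__(self, A):
        self.A = A   # exchange constant (J/m)

class System:
    def __init__(self, name, mesh, energies):
        self.name, self.mesh, self.energies = name, mesh, energies

    def describe(self):
        return f"{self.name}: {self.mesh.cells} cells, {len(self.energies)} energy terms"

# The simulation is specified in plain Python, so loops, functions, and
# libraries are available for free when building parameter sweeps.
system = System("stdprob4", Mesh((100, 25, 1), 5e-9), [Exchange(A=1.3e-11)])
summary = system.describe()
```

Because the specification is ordinary Python, reproducibility improves: the exact problem definition is a runnable script rather than a GUI state or a recompiled source tree, which is the trade-off the paper's comparison of approaches (i)-(iv) turns on.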

  20. Using IHE and HL7 conformance to specify consistent PACS interoperability for a large multi-center enterprise.

    PubMed

    Henderson, Michael L; Dayhoff, Ruth E; Titton, Csaba P; Casertano, Andrew

    2006-01-01

    As part of its patient care mission, the U.S. Veterans Health Administration (VHA) performs diagnostic imaging procedures at 141 medical centers and 850 outpatient clinics. VHA's VistA Imaging package provides a full archival, display, and communications infrastructure and interfaces to radiology and other HIS modules, as well as to modalities and a worklist provider. In addition, various medical center entities within VHA have elected to install commercial picture archiving and communications systems (PACS) to enable image organization and interpretation. To evaluate interfaces between commercial PACS, the VistA hospital information system, and imaging modalities, VHA has built a fully constrained specification based on the Radiology Technical Framework (Rad-TF) of Integrating the Healthcare Enterprise (IHE). The Health Level Seven (HL7) normative conformance mechanism was applied to the IHE Rad-TF and agency requirements to arrive at a baseline set of message specifications. VHA provides a thorough implementation and testing process to promote the adoption of standards-based interoperability by all PACS vendors that want to interface with VistA Imaging.

  1. A WBAN System for Ambulatory Monitoring of Physical Activity and Health Status: Applications and Challenges.

    PubMed

    Jovanov, E; Milenkovic, A; Otto, C; De Groen, P; Johnson, B; Warren, S; Taibi, G

    2005-01-01

    Recent technological advances in sensors, low-power integrated circuits, and wireless communications have enabled the design of low-cost, miniature, lightweight, intelligent physiological sensor platforms that can be seamlessly integrated into a body area network for health monitoring. Wireless body area networks (WBANs) promise unobtrusive ambulatory health monitoring for extended periods of time and near real-time updates of patients' medical records through the Internet. A number of innovative systems for health monitoring have recently been proposed. However, they typically rely on custom communication protocols and hardware designs, lacking generality and flexibility. The lack of standard platforms, system software support, and standards makes these systems expensive. Bulky sensors, high price, and frequent battery changes are all likely to limit user compliance. To address some of these challenges, we prototyped a WBAN utilizing a common off-the-shelf wireless sensor platform with a ZigBee-compliant radio interface and an ultra low-power microcontroller. The standard platform interfaces to custom sensor boards that are equipped with accelerometers for motion monitoring and a bioamplifier for electrocardiogram or electromyogram monitoring. Software modules for on-board processing, communication, and network synchronization have been developed using the TinyOS operating system. Although the initial WBAN prototype targets ambulatory monitoring of user activity, the developed sensors can easily be adapted to monitor other physiological parameters. In this paper, we discuss initial results, implementation challenges, and the need for standardization in this dynamic and promising research field.

  2. Validation results of specifications for motion control interoperability

    NASA Astrophysics Data System (ADS)

    Szabo, Sandor; Proctor, Frederick M.

    1997-01-01

    The National Institute of Standards and Technology (NIST) is participating in the Department of Energy Technologies Enabling Agile Manufacturing (TEAM) program to establish interface standards for machine tool, robot, and coordinate measuring machine controllers. At NIST, the focus is to validate potential application programming interfaces (APIs) that make it possible to exchange machine controller components with a minimal impact on the rest of the system. This validation is taking place in the enhanced machine controller (EMC) consortium and is in cooperation with users and vendors of motion control equipment. An area of interest is motion control, including closed-loop control of individual axes and coordinated path planning. Initial tests of the motion control APIs are complete. The APIs were implemented on two commercial motion control boards that run on two different machine tools. The results for a baseline set of APIs look promising, but several issues were raised. These include resolving differing approaches in how motions are programmed and defining a standard measurement of performance for motion control. This paper starts with a summary of the process used in developing a set of specifications for motion control interoperability. Next, the EMC architecture and its classification of motion control APIs into two classes, Servo Control and Trajectory Planning, are reviewed. Selected APIs are presented to explain the basic functionality and some of the major issues involved in porting the APIs to other motion controllers. The paper concludes with a summary of the main issues and ways to continue the standards process.

  3. Reliable Transport over SpaceWire for James Webb Space Telescope (JWST) Focal Plane Electronics (FPE) Network

    NASA Technical Reports Server (NTRS)

    Rakow, Glenn; Schnurr, Richard; Dailey, Christopher; Shakoorzadeh, Kamdin

    2003-01-01

    NASA's James Webb Space Telescope (JWST) faces difficult technical and budgetary challenges to overcome before it is scheduled launch in 2010. The Integrated Science Instrument Module (ISIM), shares these challenges. The major challenge addressed in this paper is the data network used to collect, process, compresses and store Infrared data. A total of 114 Mbps of raw information must be collected from 19 sources and delivered to the two redundant data processing units across a twenty meter deployed thermally restricted interface. Further data must be transferred to the solid-state recorder and the spacecraft. The JWST detectors are kept at cryogenic temperatures to obtain the sensitivity necessary to measure faint energy sources. The Focal Plane Electronics (FPE) that sample the detector, generate packets from the samples, and transmit these packets to the processing electronics must dissipate little power in order to help keep the detectors at these cold temperatures. Separating the low powered front-end electronics from the higher-powered processing electronics, and using a simple high-speed protocol to transmit the detector data minimize the power dissipation near the detectors. Low Voltage Differential Signaling (LVDS) drivers were considered an obvious choice for physical layer because of their high speed and low power. The mechanical restriction on the number cables across the thermal interface force the Image packets to be concentrated upon two high-speed links. These links connect the many image packet sources, Focal Plane Electronics (FPE), located near the cryogenic detectors to the processing electronics on the spacecraft structure. From 12 to 10,000 seconds of raw data are processed to make up an image, various algorithms integrate the pixel data Loss of commands to configure the detectors as well as the loss of science data itself may cause inefficiency in the use of the telescope that are unacceptable given the high cost of the observatory. 
This combination of requirements necessitates a redundant, fault-tolerant, high-speed, low-mass, low-power network with a low bit error rate (1E-9 to 1E-12). The ISIM systems team performed many studies of the various network architectures that meet these requirements. The architecture selected uses the SpaceWire protocol, with a new transport and network layer added to implement end-to-end reliable transport. The network and reliable transport mechanism must be implemented in hardware because of the high average information rate and the limited ability of the detectors to buffer data, due to power and size restrictions. This network and transport mechanism was designed to be compatible with existing SpaceWire links and routers so that existing equipment and designs may be leveraged. The transport layer specification is being coordinated with the European Space Agency (ESA) SpaceWire Working Group and the Consultative Committee for Space Data Systems (CCSDS) Standard Onboard Interface (SOIF) panel, with the intent of developing a standard for reliable transport over SpaceWire. Changes to the protocol presented are likely, since negotiations with these groups are ongoing. A block of RTL VHDL that implements a multi-port SpaceWire router with an external user interface will be developed and integrated with an existing SpaceWire link design. The external user interface is the local interface that sources and sinks packets onto and off of the network (Figure 3). It implements the network and transport layers and handles acknowledgements and retries of packets for reliable transport over the network. Because the design is written in RTL, it may be ported to any technology, but it will initially be targeted to the new Actel Accelerator series (AX) part. Each link will run at 160 Mbps, and worst-case power will be about 0.165 W per link in the Actel AX.
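
The acknowledge-and-retry behaviour described above can be illustrated with a minimal stop-and-wait sketch. This is not the actual SpaceWire transport-layer specification (which is hardware RTL and still under negotiation); the packet handling, retry count, and channel interface below are illustrative assumptions only.

```python
# Hypothetical sketch of an acknowledge/retry (stop-and-wait) reliable
# transport; the channel is modelled as a callable that transmits a packet
# and returns True when an acknowledgement arrives.
def send_reliable(packet, channel, max_retries=3):
    """Send one packet, retrying until an ACK is seen or retries are exhausted.

    Returns the number of retries that were needed.
    """
    for attempt in range(max_retries + 1):
        ack = channel(packet)          # transmit and wait for acknowledgement
        if ack:
            return attempt
    raise IOError("packet lost after %d retries" % max_retries)
```

A lossy link can be simulated by a channel that drops the first transmissions and acknowledges a later one; the sender recovers transparently, which is the property the hardware implementation must provide at a 160 Mbps line rate.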

  4. Spatial Brain Control Interface using Optical and Electrophysiological Measures

    DTIC Science & Technology

    2013-08-27

    ...Machine (LSVM) was the most appropriate for implementing a reliable brain-computer interface (BCI). The LSVM method was applied to the imaging data... local field potentials proved to be fast and strongly tuned for the spatial parameters of the task. Thus, a reliable BCI that can predict upcoming

  5. A SOA broker solution for standard discovery and access services: the GI-cat framework

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico

    2010-05-01

    GI-cat's ideal users are data providers or service providers within the geoscience community. The former already have their data available through an access service (e.g. an OGC Web Service) and want to publish it through a standard catalog service in a seamless way. The latter want to deploy a catalog broker and let users query and access different geospatial resources through one or more standard interfaces and Application Profiles (AP) (e.g. OGC CSW ISO AP, CSW ebRIM/EO AP, etc.). GI-cat implements a broker component (i.e. a middleware service) which carries out distribution and mediation functionalities among "well-adopted" catalog interfaces and data access protocols. GI-cat also publishes different discovery interfaces: the OGC CSW ISO and ebRIM Application Profiles (the latter with support for the EO and CIM extension packages) and two different OpenSearch interfaces developed to explore Web 2.0 possibilities. An extended interface is also available to exploit all GI-cat features, such as interruptible incremental queries and query feedback. Interoperability tests performed in the context of different projects have also pointed out the importance of ensuring compatibility with existing and widespread tools of the open source community (e.g. the GeoNetwork and Deegree catalogs), which was then achieved. Based on a service-oriented framework of modular components, GI-cat can effectively be customized and tailored to support different deployment scenarios. In addition to the distribution functionality, a harvesting approach has lately been tried, allowing the user to switch between a distributed and a local search and thus further supporting different deployment scenarios. A configurator tool enables effective high-level configuration of the broker service. A dedicated geobrowser was also developed to demonstrate the advanced GI-cat functionalities.
This client, called GI-go, is an example of the applications that may be built on top of the GI-cat broker component. GI-go allows discovering and browsing the available datasets, retrieving and evaluating their descriptions, and performing distributed queries according to any combination of the following criteria: geographic area, temporal interval, topic of interest (free-text and/or keyword selection are allowed) and data source (i.e. where, when, what, who). The result set of a query (e.g. dataset metadata) is then displayed incrementally, leveraging the asynchronous interaction approach implemented by GI-cat. This feature allows the user to access intermediate query results. Query interruption and feedback features are also provided. Alternatively, the user may perform a browsing task by selecting a catalog resource from the current configuration and navigating through its aggregated and/or leaf datasets. In both cases dataset metadata, expressed according to ISO 19139 (and also Dublin Core and ebRIM if available), are displayed for download, along with a resource portrayal and actual data access (when this is meaningful and possible). The GI-cat distributed catalog service has been successfully deployed and tested in the framework of different projects and initiatives, including the SeaDataNet FP6 project, GEOSS IP3 (Interoperability Process Pilot Project), GEOSS AIP-2 (Architectural Implementation Project - Phase 2), FP7 GENESI-DR, CNR GIIDA, FP7 EUROGEOSS and the ESA HMA project.
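
The broker pattern described above — one front-end query mediated out to heterogeneous catalogs, with results returned incrementally — can be sketched as follows. The adapter names and record shapes are hypothetical, not GI-cat's actual internal API.

```python
# Illustrative sketch of a catalog broker: each adapter translates the
# common query into its catalog's native protocol, and results are yielded
# incrementally as each back-end answers (the basis for GI-cat-style
# asynchronous, interruptible queries).
def broker_query(query, adapters):
    """Yield (catalog_name, record) pairs as each mediated catalog responds.

    adapters: dict mapping a catalog name to a search callable.
    """
    for name, search in adapters.items():
        for record in search(query):   # adapter hides the native protocol
            yield name, record
```

Because the function is a generator, a client can render the first records while slower catalogs are still being queried, or abandon the iteration early to interrupt the query.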

  6. PC-based Multiple Information System Interface (PC/MISI) detailed design and implementation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1985-01-01

    The design plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the implementation of the system. Each component is described in the detail necessary to allow programmers to implement the system. A description of the system data flow and system file structures is given.

  7. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high-reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though the API's strict interprocess communication policy also brings implicit benefits for partial-order reduction.
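
The time-partitioning idea behind ARINC-653 can be sketched as a fixed schedule of partition windows in a repeating major frame. This is a toy simulation for illustration only: the class and function names are assumptions and do not follow the ARINC-653 Part 1 API.

```python
# Hypothetical sketch of ARINC-653-style time partitioning: each partition
# owns a fixed window in the major frame, and its processes run only inside
# that window, giving temporal isolation between applications.
from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str
    window_ms: int                                  # fixed window in the major frame
    processes: list = field(default_factory=list)   # callables run in this partition

def run_major_frame(partitions):
    """Run one major frame and return a trace of (partition, result) pairs."""
    trace = []
    for p in partitions:                 # partitions execute in schedule order
        for proc in p.processes:         # processes never run outside their partition
            trace.append((p.name, proc()))
    return trace
```

The deterministic schedule, and the fact that a process can only communicate across partition boundaries at defined points, is what makes partial-order reduction attractive for a model checker targeting this API.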

  8. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds-Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the Message Passing Interface (MPI) standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  9. Automated subsystems control development. [for life support systems of space station

    NASA Technical Reports Server (NTRS)

    Block, R. F.; Heppner, D. B.; Samonski, F. H., Jr.; Lance, N., Jr.

    1985-01-01

    NASA has the objective to launch a Space Station in the 1990s. It has been found that the success of the Space Station engineering development, the achievement of initial operational capability (IOC), and the operation of a productive Space Station will depend heavily on the implementation of an effective automation and control approach. For the development of technology needed to implement the required automation and control function, a contract entitled 'Automated Subsystems Control for Life Support Systems' (ASCLSS) was awarded to two American companies. The present paper provides a description of the ASCLSS program. Attention is given to an automation and control architecture study, a generic automation and control approach for hardware demonstration, a standard software approach, application of Air Revitalization Group (ARG) process simulators, and a generic man-machine interface.

  10. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    NASA Astrophysics Data System (ADS)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
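
The two benchmark metrics mentioned above have standard definitions, which can be written down directly. These are the generic formulas for parallel speedup and per-processor efficiency, not code taken from MPI_XSTAR itself.

```python
# Standard parallel-scaling metrics: speedup relative to a serial run, and
# efficiency as the fraction of ideal linear scaling actually achieved.
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Speedup divided by processor count; 1.0 means perfect linear scaling."""
    return speedup(t_serial, t_parallel) / n_procs
```

For example, a run that takes 100 hours serially and 25 hours on 8 processors has a speedup of 4 but an efficiency of only 0.5, matching the paper's observation that resource efficiency drops as processors are added.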

  11. Experimental research control software system

    NASA Astrophysics Data System (ADS)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system, intended for the automation of small-scale research, has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, significantly reducing the effort of automating an experimental setup. In particular, minimal programming skills are required, and the scripts are easy for supervisors to review. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data-acquisition algorithms. A modular interface library performs interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework allows new software and hardware interfaces to be implemented quickly. While the software is in continuous development, with new features being implemented, it is already used in our laboratory for control and data acquisition on a helium-3 cryostat. The software is open source and distributed under the GNU General Public License.
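
The modular-interface idea above — scripts address instruments only through a registry, so supporting new hardware means adding one driver — can be sketched as follows. The class, method, and instrument names are hypothetical, not the system's actual API.

```python
# Hypothetical sketch of a modular instrument-interface library: drivers
# register under a name, and user scripts reach hardware only through the
# registry, so a new interface requires only a new driver class.
class InstrumentRegistry:
    def __init__(self):
        self._drivers = {}

    def register(self, name, driver):
        """Make a driver available to scripts under a symbolic name."""
        self._drivers[name] = driver

    def get(self, name):
        """Look up the driver a script asked for."""
        return self._drivers[name]

class MockThermometer:
    """Stand-in for a real cryostat temperature sensor driver."""
    def read(self):
        return 4.2  # kelvin (illustrative value)

registry = InstrumentRegistry()
registry.register("cryostat_temp", MockThermometer())
```

A user script then needs only `registry.get("cryostat_temp").read()`, with no knowledge of the underlying bus or protocol; this indirection is what lets multiple scripts share equipment safely.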

  12. Marine Profiles for OGC Sensor Web Enablement Standards

    NASA Astrophysics Data System (ADS)

    Jirka, Simon

    2016-04-01

    The use of OGC Sensor Web Enablement (SWE) standards in oceanology is increasing. Several projects are developing SWE-based infrastructures to ease the sharing of marine sensor data. This work ranges from developments at the sensor level to efforts addressing the interoperability of data flows between observatories and organisations. The broad range of activities using SWE standards carries a risk of diverging approaches to how the SWE specifications are applied. Because the SWE standards are designed in a domain-independent manner, they intentionally offer a high degree of flexibility, enabling implementation across different domains and usage scenarios. At the same time, this flexibility allows similar goals to be achieved in different ways. To avoid interoperability issues, an agreement is needed on how to apply SWE concepts and how to use vocabularies in a common way that will be shared by different projects, implementations, and users. To address this need, partners from several projects and initiatives (AODN, BRIDGES, envri+, EUROFLEETS/EUROFLEETS2, FixO3, FRAM, IOOS, Jerico/Jerico-Next, NeXOS, ODIP/ODIP II, RITMARE, SeaDataNet, SenseOcean, X-DOMES) have teamed up to develop marine profiles of the OGC SWE standards that can serve as a common basis for developments in multiple projects and organisations. The following aspects will be especially considered: 1.) Provision of metadata: For discovering sensors/instruments as well as observation data, facilitating the interpretation of observations, and integrating instruments in sensor platforms, the provision of metadata is crucial. Thus, a marine profile of the OGC Sensor Model Language 2.0 (SensorML 2.0) will be developed, allowing metadata to be provided at different levels (e.g. observatory, instrument, and detector) and for different sensor types. The latter will enable metadata of a specific type to be automatically inherited by all devices/sensors of the same type.
The application of further standards such as OGC PUCK will also benefit from this encoding by facilitating communication with instruments. 2.) Encoding and modelling of observation data: For delivering observation data, the ISO/OGC Observations and Measurements 2.0 (O&M 2.0) standard serves as a good basis. Within an O&M profile, recommendations will be given on the observation types needed to cover different aspects of marine sensing (trajectory, stationary, or profile measurements, etc.). Besides XML, further O&M encodings (e.g. JSON-based) will be considered. 3.) Data access: A profile of the OGC Sensor Observation Service 2.0 (SOS 2.0) standard will be specified to offer a common way of using this web service interface to request marine observations and metadata. At the same time, this will offer a common interface to cross-domain applications based upon tools such as the GEOSS DAB. Lightweight approaches such as REST will be considered as further bindings for the SOS interface. 4.) Backward compatibility: The profile will take existing observation systems into account so that migration paths towards the specified profiles can be offered. We will present the current state of the profile development. In particular, a comparative analysis of SWE usage in different projects, an outline of the requirements, and fundamental aspects of the profiles of SWE standards will be shown.
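
To make the O&M modelling concrete, here is a rough, JSON-style sketch of a single marine observation following O&M 2.0's core roles (procedure, observed property, feature of interest, phenomenon time, result). The field values and the sensor URI are illustrative assumptions, not part of any agreed marine profile.

```python
# Rough sketch of one O&M 2.0-style observation as a JSON-serializable dict;
# property names follow the standard's core roles, values are invented.
observation = {
    "type": "Measurement",
    "procedure": "http://example.org/sensors/ctd-42",   # hypothetical sensor URI
    "observedProperty": "sea_water_temperature",
    "featureOfInterest": {"lat": 54.18, "lon": 7.90},   # fixed station position
    "phenomenonTime": "2016-04-01T12:00:00Z",
    "result": {"value": 8.4, "uom": "degC"},
}
```

A marine profile would pin down exactly which observation type (stationary, trajectory, profile), vocabulary URIs, and units-of-measure codes are allowed in each slot, which is what makes records from different projects interoperable.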

  13. Integrated System Health Management: Pilot Operational Implementation in a Rocket Engine Test Stand

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Schmalzel, John L.; Morris, Jonathan A.; Turowski, Mark P.; Franzl, Richard

    2010-01-01

    This paper describes a credible implementation of integrated system health management (ISHM) capability as a pilot operational system. Important core elements that make possible the fielding and evolution of ISHM capability have been validated in a rocket engine test stand, encompassing all phases of operation: stand-by, pre-test, test, and post-test. The core elements include an architecture (hardware/software) for ISHM, gateways for streaming real-time data from the data acquisition system into the ISHM system, automated configuration management employing transducer electronic data sheets (TEDSs) adhering to the IEEE 1451.4 Standard for Smart Sensors and Actuators, broadcasting and capture of sensor measurements and health information adhering to the IEEE 1451.1 Standard for Smart Sensors and Actuators, user interfaces for management of redlines/bluelines, and establishment of a health assessment database system (HADS) and browser for extensive post-test analysis. The ISHM system was installed in the Test Control Room, where test operators were exposed to the capability. All functionalities of the pilot implementation were validated during testing and in post-test data streaming through the ISHM system. The implementation enabled significant improvements in awareness about the status of the test stand, and about events and their causes/consequences. The architecture and software elements embody a knowledge-based systems-engineering approach in conjunction with object-oriented environments. These qualities permit systematic augmentation of the capability and scaling to encompass other subsystems.

  14. eQuilibrator--the biochemical thermodynamics calculator.

    PubMed

    Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron

    2012-01-01

    The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and cumbersome to perform calculations with manually. Even simple thermodynamic questions like 'how much Gibbs energy is released by ATP hydrolysis at pH 5?' are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use.
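
The concentration adjustment such a calculator applies on top of its database values is the standard relation ΔG = ΔG'° + RT·ln(Q). The sketch below shows only that generic formula; the ΔG'° value in the test is a placeholder, not a number from the eQuilibrator database.

```python
# Generic concentration correction for a transformed Gibbs energy:
#   ΔG = ΔG'° + R·T·ln(Q), with Q the reaction quotient.
import math

R = 8.314462618e-3   # gas constant in kJ/(mol*K)

def delta_g(delta_g_standard, q, temperature=298.15):
    """Adjust a standard transformed Gibbs energy (kJ/mol) for quotient q."""
    return delta_g_standard + R * temperature * math.log(q)
```

At Q = 1 the correction vanishes, and for Q < 1 (products below standard concentration) the reaction becomes more favourable, which is why physiological ΔG values can differ substantially from tabulated standard values.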

  15. What will the future of cloud-based astronomical data processing look like?

    NASA Astrophysics Data System (ADS)

    Green, Andrew W.; Mannering, Elizabeth; Harischandra, Lloyd; Vuong, Minh; O'Toole, Simon; Sealey, Katrina; Hopkins, Andrew M.

    2017-06-01

    Astronomy is rapidly approaching an impasse: very large datasets require remote or cloud-based parallel processing, yet many astronomers still try to download the data and develop serial code locally. Astronomers understand the need for change, but the hurdles remain high. We are developing a data archive designed from the ground up to simplify and encourage cloud-based parallel processing. While the volume of data we host remains modest by some standards, it is still large enough that download and processing times are measured in days and even weeks. We plan to implement a Python-based, notebook-like interface that automatically parallelises execution. Our goal is to provide an interface sufficiently familiar and user-friendly that it encourages the astronomer to run their analysis on our system in the cloud: astroinformatics as a service. We describe how our system addresses the approaching impasse in astronomy using the SAMI Galaxy Survey as an example.

  16. X-Windows Socket Widget Class

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.

    2006-01-01

    The X-Windows Socket Widget Class ("Class" is used here in the object-oriented-programming sense of the word) was devised to simplify the task of implementing network connections for graphical-user-interface (GUI) computer programs. UNIX Transmission Control Protocol/Internet Protocol (TCP/IP) socket programming libraries require many method calls to configure, operate, and destroy sockets. Most X Windows GUI programs use widget sets or toolkits to facilitate management of complex objects. The widget standards facilitate construction of toolkits and application programs. The X-Windows Socket Widget Class encapsulates UNIX TCP/IP socket-management tasks within the framework of an X Windows widget. Using the widget framework, X Windows GUI programs can treat one or more network socket instances in the same manner as that of other graphical widgets, making it easier to program sockets. Wrapping TCP/IP socket programming libraries inside a widget framework enables a programmer to treat a network interface as though it were a GUI.
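
The wrapping idea can be sketched in Python rather than Xt/C: hide the socket behind a widget-like object that dispatches received data to a callback, so application code handles network traffic like any other GUI event. The class and callback names are hypothetical, not the actual widget class's API.

```python
# Sketch of the socket-as-widget pattern: the socket is wrapped in an object
# with an event callback, mimicking how a GUI toolkit delivers widget events.
import socket

class SocketWidget:
    def __init__(self, sock, on_data):
        self.sock = sock
        self.on_data = on_data      # callback, invoked like a GUI event handler

    def poll(self, nbytes=4096):
        """Read once and dispatch to the callback, like one event-loop tick."""
        data = self.sock.recv(nbytes)
        if data:
            self.on_data(data)
```

In a real toolkit the `poll` step would be driven by the event loop's file-descriptor watcher instead of being called manually, which is exactly the integration the widget class provides for X Windows programs.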

  17. eQuilibrator—the biochemical thermodynamics calculator

    PubMed Central

    Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron

    2012-01-01

    The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like ‘how much Gibbs energy is released by ATP hydrolysis at pH 5?’ are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use. PMID:22064852

  18. A CMOS Humidity Sensor for Passive RFID Sensing Applications

    PubMed Central

    Deng, Fangming; He, Yigang; Zhang, Chaolong; Feng, Wei

    2014-01-01

    This paper presents a low-cost, low-power CMOS humidity sensor for passive RFID sensing applications. The humidity sensing element is implemented in standard CMOS technology without any further post-processing, which results in low fabrication costs. The interface of this humidity sensor employs a PLL-based architecture that transfers sensor signal processing from the voltage domain to the frequency domain. This architecture therefore allows the use of a fully digital circuit, which can operate on an ultra-low supply voltage and thus achieves low power consumption. The proposed humidity sensor has been fabricated in the TSMC 0.18 μm CMOS process. The measurements show that this humidity sensor exhibits excellent linearity and stability within the relative humidity range. The sensor interface circuit consumes only 1.05 μW at a 0.5 V supply voltage, at least an order of magnitude less than previous designs. PMID:24841250

  19. A CMOS humidity sensor for passive RFID sensing applications.

    PubMed

    Deng, Fangming; He, Yigang; Zhang, Chaolong; Feng, Wei

    2014-05-16

    This paper presents a low-cost, low-power CMOS humidity sensor for passive RFID sensing applications. The humidity sensing element is implemented in standard CMOS technology without any further post-processing, which results in low fabrication costs. The interface of this humidity sensor employs a PLL-based architecture that transfers sensor signal processing from the voltage domain to the frequency domain. This architecture therefore allows the use of a fully digital circuit, which can operate on an ultra-low supply voltage and thus achieves low power consumption. The proposed humidity sensor has been fabricated in the TSMC 0.18 μm CMOS process. The measurements show that this humidity sensor exhibits excellent linearity and stability within the relative humidity range. The sensor interface circuit consumes only 1.05 µW at a 0.5 V supply voltage, at least an order of magnitude less than previous designs.

  20. FreeSASA: An open source C library for solvent accessible surface area calculations.

    PubMed

    Mitternacht, Simon

    2016-01-01

    Calculating solvent-accessible surface areas (SASA) is a routine task in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable, allowing the user to control molecular parameters, accuracy and output granularity. It depends only on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed-source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
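
The Shrake-Rupley approximation the library implements can be illustrated with a toy version: sample points on each atom's solvent-expanded sphere and count those not buried inside any neighbour. This is a didactic sketch, not FreeSASA's optimized C implementation; the point count and probe radius are illustrative choices.

```python
# Toy Shrake-Rupley SASA: for each atom, place ~uniform points on the sphere
# of radius (atom_radius + probe_radius) and keep those outside all neighbours.
import math

def sphere_points(n):
    """Roughly uniform points on the unit sphere (golden-spiral construction)."""
    pts = []
    phi = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - z * z)
        pts.append((math.cos(phi * i) * r, math.sin(phi * i) * r, z))
    return pts

def sasa(atoms, probe=1.4, n_points=200):
    """atoms: list of (x, y, z, radius) tuples. Returns approximate total SASA."""
    total = 0.0
    for i, (x, y, z, rad) in enumerate(atoms):
        rs = rad + probe                       # solvent-expanded radius
        exposed = 0
        for dx, dy, dz in sphere_points(n_points):
            px, py, pz = x + rs * dx, y + rs * dy, z + rs * dz
            buried = any(
                (px - ox) ** 2 + (py - oy) ** 2 + (pz - oz) ** 2
                < (orad + probe) ** 2
                for j, (ox, oy, oz, orad) in enumerate(atoms) if j != i
            )
            if not buried:
                exposed += 1
        total += 4.0 * math.pi * rs * rs * exposed / n_points
    return total
```

For a single isolated atom every sample point is exposed, so the result reduces to the exact expanded-sphere area 4π(r + r_probe)², which makes a convenient sanity check for any SASA implementation.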

  1. GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.

    2010-01-01

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
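
The parameter-sharing core of such a system can be sketched as a store keyed by discipline and parameter name, which an HTTP API would then expose to clients like the Excel add-in. The class and method names below are hypothetical, not GLIDE's actual API.

```python
# Hypothetical sketch of a concurrent-engineering parameter store: each
# discipline publishes named values that other disciplines read, replacing
# manual hand-offs between study participants.
class ParameterStore:
    def __init__(self):
        self._values = {}

    def put(self, discipline, name, value):
        """Publish a parameter so other disciplines see the latest value."""
        self._values[(discipline, name)] = value

    def get(self, discipline, name):
        """Read the current value of another discipline's parameter."""
        return self._values[(discipline, name)]
```

Keeping a single authoritative copy of each parameter is what removes the transcription errors that arise when values are exchanged by hand; the HTTP layer only adds remote access to these two operations.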

  2. Hardware Architecture Study for NASA's Space Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John

    2008-01-01

    This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies, including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), in addition to flexible and tunable radio frequency (RF) front-ends, to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of the common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module, as well as the module interfaces. Trade-offs between a component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does it mandate the standards or ratings of the hardware used to construct the radios.

  3. A Flexible Electronic Commerce Recommendation System

    NASA Astrophysics Data System (ADS)

    Gong, Songjie

    Recommendation systems have become very popular in E-commerce websites. Many of the largest commerce websites are already using recommender technologies to help their customers find products to purchase. An electronic commerce recommendation system learns from a customer and recommends products that the customer will find most valuable from among the available products. But most recommendation methods are hard-wired into the system, and they support only fixed recommendations. This paper presents a framework for a flexible electronic commerce recommendation system. The framework is composed of a user model interface, a recommendation engine, a recommendation strategy model, a recommendation technology group, a user interest model and a database interface. In the recommendation strategy model, the method can be collaborative filtering, content-based filtering, association-rule mining, knowledge-based filtering, or a hybrid of these. The system maps demands to implementations through the strategy model, and the whole system is designed from standard parts so that it can adapt to changes in the recommendation strategy.
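
One strategy such a framework could plug in is user-based collaborative filtering, sketched below with cosine similarity over rating dictionaries. The data, function names, and single-neighbour selection are invented for illustration and are not taken from the paper's framework.

```python
# Minimal user-based collaborative filtering: find the user most similar to
# the target (cosine similarity over shared item ratings) and suggest the
# items that user rated which the target has not seen.
import math

def cosine(u, v):
    """Cosine similarity between two {item: rating} dicts."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(target, ratings):
    """Return items the most similar user rated that the target has not."""
    others = [(cosine(ratings[target], ratings[u]), u)
              for u in ratings if u != target]
    best = max(others)[1]
    return [item for item in ratings[best] if item not in ratings[target]]
```

In the paper's architecture this whole function would sit behind the strategy model, so it could be swapped for content-based filtering or association-rule mining without touching the rest of the system.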

  4. Sensory feedback in prosthetics: a standardized test bench for closed-loop control.

    PubMed

    Dosen, Strahinja; Markovic, Marko; Hartmann, Cornelia; Farina, Dario

    2015-03-01

    Closing the control loop by providing sensory feedback to the user of a prosthesis is an important challenge, with major impact on the future of prosthetics. Developing and comparing closed-loop systems is a difficult task, since there are many different methods and technologies that can be used to implement each component of the system. Here, we present a test bench developed in Matlab Simulink for configuring and testing closed-loop human control systems in standardized settings. The framework comprises a set of connected generic blocks with normalized inputs and outputs, which can be customized by selecting specific implementations from a library of predefined components. The framework is modular and extensible, and it can be used to configure, compare and test different closed-loop system prototypes, thereby guiding development towards an optimal system configuration. The use of the test bench was demonstrated by investigating two important aspects of closed-loop control: the performance of different electrotactile feedback interfaces (spatial versus intensity coding) during a pendulum stabilization task, and feedforward methods (joystick versus myocontrol) for force control. The first experiment demonstrated that for trained subjects, intensity coding might be superior to spatial coding. In the second experiment, the control of force was rather poor even with a stable and precise control interface (joystick), demonstrating that inherent characteristics of the prosthesis can be an important limiting factor when considering the overall effectiveness of closed-loop control. The presented test bench is an important instrument for investigating different aspects of human manual control with sensory feedback.
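
The generic-block idea can be sketched outside Simulink as a loop that wires swappable controller and plant callables together. The simple proportional controller and integrator plant below are illustrative stand-ins, not the test bench's actual component library.

```python
# Sketch of a modular closed loop: generic blocks (controller, plant) with
# normalized inputs/outputs, so either block can be replaced independently.
def run_loop(setpoint, controller, plant, steps=50):
    """Iterate the closed loop and return the final plant output."""
    y = 0.0
    for _ in range(steps):
        u = controller(setpoint - y)   # feedback error drives the controller
        y = plant(y, u)                # plant updates its state from the command
    return y

p_controller = lambda err: 0.5 * err   # swappable control block (proportional)
integrator = lambda y, u: y + u        # swappable plant block (pure integrator)
```

With these stand-ins the output converges to the setpoint; in the test bench the same wiring would instead connect, say, an electrotactile feedback block and a myocontrol feedforward block, which is exactly the kind of substitution the normalized interfaces allow.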

  5. Conceptual design of the X-IFU Instrument Control Unit on board the ESA Athena mission

    NASA Astrophysics Data System (ADS)

    Corcione, L.; Ligori, S.; Capobianco, V.; Bonino, D.; Valenziano, L.; Guizzo, G. P.

    2016-07-01

    Athena is one of the L-class missions selected in the ESA Cosmic Vision 2015-2025 programme for the science theme of the Hot and Energetic Universe. The Athena model payload includes the X-ray Integral Field Unit (X-IFU), an advanced, actively shielded X-ray microcalorimeter spectrometer for high-spectral-resolution imaging, based on cooled Transition Edge Sensors. This paper describes the preliminary architecture of the Instrument Control Unit (ICU), which operates all of the X-IFU's subsystems and implements the instrument's main functional interfaces with the spacecraft (S/C) control unit. The ICU functions include TC/TM management with the S/C, science data formatting and transmission to the S/C Mass Memory, housekeeping data handling, time distribution for synchronous operations, and management of the X-IFU components (i.e. CryoCoolers, Filter Wheel, Detector Readout Electronics/Event Processor, Power Distribution Unit). The baseline ICU implementation for the phase-A study foresees the use of standard, space-qualified components inherited from past and current space missions (e.g. Gaia, Euclid), currently encompassing a Leon2/Leon3-based CPU board and standard space-qualified interfaces for the exchange of commands and data between the ICU and the X-IFU subsystems. An alternative architecture, arranged around a more powerful PowerPC-based CPU, is also briefly presented, with the aim of endowing the system with the enhanced hardware resources and processing power needed for control and science data processing tasks not yet defined at this stage of the mission study.

  6. Fabrication and characterization of resonant SOI micromechanical silicon sensors based on DRIE micromachining, freestanding release process and silicon direct bonding

    NASA Astrophysics Data System (ADS)

    Gigan, Olivier; Chen, Hua; Robert, Olivier; Renard, Stephane; Marty, Frederic

    2002-11-01

    This paper is dedicated to the fabrication and technological aspects of a silicon microresonator sensor. The overall project includes the fabrication processes, system modelling/simulation, and the electronic interface. The mechanical model of such a resonator is presented, including a description of the frequency stability and hysteresis behaviour of the electrostatically driven resonator. A numerical model and FEM simulations are used to simulate the system's dynamic behaviour. The complete fabrication process is based on standard microelectronics technology with specific MEMS technological steps. The key steps are described: micromachining on SOI by Deep Reactive Ion Etching (DRIE), specific release processes to prevent sticking (resist and HF-vapour release processes), and collective vacuum encapsulation by Silicon Direct Bonding (SDB). The complete process has been validated and prototypes have been fabricated. An ASIC was designed to interface the sensor and to control the vibration amplitude; this electronics was simulated and designed to work up to 200°C and implemented in a standard 0.6 μm CMOS technology. The sensor prototypes were characterized both mechanically and electrostatically, and the measurements showed good agreement with theory and FEM simulations.

  7. A multilevel Lab on chip platform for DNA analysis.

    PubMed

    Marasso, Simone Luigi; Giuri, Eros; Canavese, Giancarlo; Castagna, Riccardo; Quaglio, Marzia; Ferrante, Ivan; Perrone, Denis; Cocuzza, Matteo

    2011-02-01

    Labs-on-chip (LOCs) have been introduced to speed up and reduce the cost of traditional, laborious and extensive analyses in the biological and biomedical fields. These ambitious and challenging goals call for multi-disciplinary competences ranging from engineering to biology. Starting from the aim of integrating microarray technology and microfluidic devices, a complex multilevel analysis platform has been designed, fabricated and tested (all rights reserved; IT patent number TO2009A000915). This LOC successfully interfaces microfluidic channels with standard DNA microarray glass slides in order to implement a complete biological protocol. Typical Micro Electro Mechanical Systems (MEMS) materials and process technologies were employed: a silicon/glass microfluidic chip and a polydimethylsiloxane (PDMS) reaction chamber were fabricated and interfaced with a standard microarray glass slide. To keep the system highly disposable, all micro-elements were passive, with fluidic driving and thermal control provided by an external apparatus. The major microfluidic and handling problems were investigated and innovative solutions were found. Finally, an entirely automated DNA hybridization protocol was successfully tested, with a significant reduction in analysis time and reagent consumption with respect to a conventional protocol.

  8. NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions

    NASA Technical Reports Server (NTRS)

    He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.

    2011-01-01

    A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet-switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight, with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) whose reconfigurable back-end provides interfaces to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet the performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundaries. Early results from an in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.

  9. Human Factors Guidance for Control Room and Digital Human-System Interface Design and Modification, Guidelines for Planning, Specification, Design, Licensing, Implementation, Training, Operation and Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Fink, D. Hill, J. O'Hara

    2004-11-30

    Nuclear plant operators face a significant challenge designing and modifying control rooms. This report provides guidance on planning, designing, implementing and operating modernized control rooms and digital human-system interfaces.

  10. Modelling of plug and play interface for energy router based on IEC61850

    NASA Astrophysics Data System (ADS)

    Shi, Y. F.; Yang, F.; Gan, L.; He, H. L.

    2017-11-01

    Against the background of "Internet Plus", energy routers, as fundamental infrastructure equipment of the energy internet, will see wide development. The IEC 61850 standard is the only universal standard in the field of power-system automation, and it has standardized the engineering operation of intelligent substations. To address the lack of an internationally unified communication standard for energy routers, this paper proposes applying IEC 61850 to the plug-and-play interface and establishes the corresponding information model and information transfer services. The paper provides a research approach for establishing energy-router communication standards and promotes the development of the energy router.
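
    The kind of information model proposed can be sketched in Python (a hypothetical, simplified illustration; IEC 61850 itself defines far richer semantics, and while MMXU, PhV and TotW follow the standard's naming style, this particular arrangement is invented for the example):

```python
# Simplified, illustrative IEC 61850-style hierarchy: a logical device
# contains logical nodes, which group named data objects that generic
# services can read uniformly. Not a real IEC 61850 implementation.

class DataObject:
    def __init__(self, name, value):
        self.name, self.value = name, value

class LogicalNode:
    """Groups data objects, e.g. an MMXU-style measurement node."""
    def __init__(self, name):
        self.name = name
        self.objects = {}
    def add(self, obj):
        self.objects[obj.name] = obj
    def read(self, name):
        return self.objects[name].value

class LogicalDevice:
    def __init__(self, name):
        self.name = name
        self.nodes = {}
    def add(self, node):
        self.nodes[node.name] = node

# Model a plug-and-play energy-router port reporting voltage and power.
port = LogicalDevice("EnergyRouterPort1")
mmxu = LogicalNode("MMXU1")
mmxu.add(DataObject("PhV", 230.0))    # phase voltage, volts
mmxu.add(DataObject("TotW", 1500.0))  # total active power, watts
port.add(mmxu)

reading = port.nodes["MMXU1"].read("TotW")
```

    Because every device exposes the same node/object structure, a transfer service only needs one generic read path, which is the essence of the plug-and-play interface the paper models.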

  11. adLIMS: a customized open source software that allows bridging clinical and basic molecular research studies.

    PubMed

    Calabria, Andrea; Spinozzi, Giulio; Benedicenti, Fabrizio; Tenderini, Erika; Montini, Eugenio

    2015-01-01

    Many biological laboratories that deal with genomic samples face the problem of sample tracking, both for pure laboratory management and for efficiency. Our laboratory exploits PCR techniques and Next Generation Sequencing (NGS) methods to perform high-throughput integration site monitoring in different clinical trials and scientific projects. Because of the huge number of samples that we process every year, which result in hundreds of millions of sequencing reads, we need to standardize data management and tracking systems, building up a scalable and flexible structure with web-based interfaces, usually called a Laboratory Information Management System (LIMS). We started by collecting end-users' requirements, comprising the desired functionalities of the system and Graphical User Interfaces (GUIs), and then evaluated available tools that could address our requirements, spanning from pure LIMS to Content Management Systems (CMS) up to enterprise information systems. Our analysis identified ADempiere ERP, an open source Enterprise Resource Planning system written in Java J2EE, as the best software, which also natively implements some highly desirable technological features, such as the high usability and modularity that grant use-case flexibility and software scalability for custom solutions. We extended and customized ADempiere ERP to fulfil LIMS requirements and developed adLIMS. It has been validated by our end-users, who verified functionalities and GUIs through test cases for PCR samples and pre-sequencing data, and it is currently in use in our laboratories. adLIMS implements authorization and authentication policies, allowing management of multiple users and definition of roles that grant specific permissions, operations and data views to each user. For example, adLIMS allows creating sample sheets from stored data using the available export operations. This simplicity and process standardization may avoid manual errors and information backtracking, features that are not guaranteed when tracking records in files or spreadsheets. adLIMS aims to combine sample tracking and data reporting features with highly accessible and usable GUIs, thus saving time on repetitive laboratory tasks and reducing errors with respect to manual data collection methods. Moreover, adLIMS implements automated data entry, exploiting sample data multiplexing and parallel/transactional processing. adLIMS is natively extensible to cope with laboratory automation through platform-dependent API interfaces, and could be extended to genomic facilities thanks to its ERP functionalities.

  12. An Open Source Extensible Smart Energy Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rankin, Linda

    Aggregated distributed energy resources are the subject of much interest in the energy industry and are expected to play an important role in meeting our future energy needs by changing how we use, distribute and generate electricity. This energy future includes an increased amount of energy from renewable resources, load management techniques to improve resiliency and reliability, and distributed energy storage and generation capabilities that can be managed to meet the needs of the grid as well as individual customers. These energy assets are commonly referred to as Distributed Energy Resources (DER). DERs rely on a means to communicate information between an energy provider and multitudes of devices. Today, DER control systems are typically vendor-specific, using custom hardware and software solutions. As a result, customers are locked into communication transport protocols, applications, tools, and data formats. Today's systems are often difficult to extend to meet new application requirements, resulting in stranded assets when business requirements or energy management models evolve. By partnering with industry advisors and researchers, a DER research platform called the Smart Energy Framework (SEF) was developed and implemented. The hypothesis of this research was that an open source Internet of Things (IoT) framework could play a role in creating a commodity-based ecosystem for DER assets that would reduce costs and provide interoperable products. SEF is based on the AllJoyn™ open source IoT framework. The demonstration system incorporated DER assets, specifically batteries and smart water heaters. To verify the behavior of the distributed system, models of water heaters and batteries were also developed. An IoT interface for communicating between the assets and a control server was defined. This interface supports a series of "events" and telemetry reporting, similar to those defined by current smart grid communication standards.
    The results of this effort demonstrated the feasibility and application potential of using IoT frameworks for the creation of commodity-based DER systems. All of the identified commodity-based system requirements were met by the AllJoyn framework. With commodity solutions, small vendors can enter the market and the cost of implementation for all parties is reduced. Utilities and aggregators can choose from multiple interoperable products, reducing the risk of stranded assets. Based on this research, it is recommended that interfaces based on existing smart grid communication protocol standards be created for these emerging IoT frameworks. These interfaces should be standardized as part of the IoT framework, allowing for interoperability testing and certification. Similarly, IoT frameworks are introducing application-level security. This type of security is needed for protecting applications and platforms and will be important moving forward. It is also recommended that, along with DER-based data model interfaces, platform and application security requirements be prescribed when IoT devices support DER applications.
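
    The event/telemetry interface pattern described above can be sketched in Python (a simplified stand-in, not AllJoyn or SEF code; the class, event and field names are invented for the illustration):

```python
# Illustrative DER asset interface: assets emit serialized "events" and
# telemetry messages that a control server subscribes to. All names here
# are hypothetical; real deployments would use an IoT bus like AllJoyn.
import json

class DERAsset:
    """Common interface shared by all assets on the bus."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.listeners = []
    def subscribe(self, callback):
        self.listeners.append(callback)
    def emit(self, event, **data):
        msg = {"asset": self.asset_id, "event": event, **data}
        for cb in self.listeners:
            cb(json.dumps(msg))  # serialized as a wire message would be

class Battery(DERAsset):
    def __init__(self, asset_id, soc=0.5):
        super().__init__(asset_id)
        self.soc = soc  # state of charge, 0..1
    def report_telemetry(self):
        self.emit("telemetry", soc=self.soc)
    def charge_event(self):
        # Respond to a grid "charge" style event by absorbing energy.
        self.soc = min(1.0, self.soc + 0.1)
        self.emit("state_change", soc=self.soc)

received = []                      # stands in for the control server
battery = Battery("bat-01", soc=0.5)
battery.subscribe(received.append)
battery.report_telemetry()
battery.charge_event()
```

    The point of the commodity interface is that the server-side subscription code never changes as new vendors' assets join; only new asset subclasses are added.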

  13. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics IRIX, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool.
The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
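
    The component pattern described (reusable modules behind a standard API, assembled into application pipelines) can be sketched as follows. This is a schematic illustration in plain Python, not the authors' VTK-based components, and all class names and the data layout are hypothetical:

```python
# Schematic of the reusable-component pattern: every task implements one
# standard interface, so components can be chained into new applications.

class Component:
    """Standard API shared by all components, regardless of platform."""
    def process(self, data):
        raise NotImplementedError

class ImageReader(Component):
    def process(self, data):
        # Stand-in for reading a medical image volume from disk.
        return {"voxels": data, "spacing": 1.0}

class Resampler(Component):
    def __init__(self, factor):
        self.factor = factor
    def process(self, image):
        # Stand-in for re-sampling: only the voxel spacing is adjusted.
        image["spacing"] *= self.factor
        return image

def run_pipeline(components, data):
    """Assemble components into an application by chaining process()."""
    for component in components:
        data = component.process(data)
    return data

result = run_pipeline([ImageReader(), Resampler(2.0)], [0, 1, 2])
```

    Because each stage only depends on the shared `process` interface, a tracked-ultrasound viewer and a stereotaxy planner can reuse the same reader and resampler, which is the reuse argument the paper makes.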

  14. MyOcean Information System : achievements and perspectives

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Dorandeu, J.; Claverie, V.; Cordier, K.; Barzic, Y.; Lauret, O.; Jolibois, T.; Blower, J.

    2012-04-01

    The objective of the MyOcean system (http://www.myocean.eu) is to provide a Core Service for the Ocean: an operational service for forecasts, analysis and expertise on ocean currents, temperature, salinity, sea level, primary ecosystems and ice coverage. The production of observation and forecasting data is distributed across 12 production centres. The interface with external users (including the web portal) and the coordination of the overall service are managed by a component called the service desk. In addition, a transverse component called the MyOcean Information System (MIS) connects the production centres and the service desk, manages the information shared across the overall system, and implements a standard INSPIRE interface for the external world. 2012 is a key year for the system: the three-year MyOcean project, which set up the first versions of the system, is ending, and the two-year MyOcean II project, which will upgrade and consolidate the system, is starting. Both projects are funded by the European Commission within the GMES programme (7th Framework Programme). At the end of the MyOcean project, the system has been designed and the first two versions have been implemented. The system now offers an integrated service comprising 237 ocean products. The ocean products are homogeneously described in a catalogue; they can be visualized and downloaded by users (identified with a unique login) through a seamless web interface. The discovery and viewing interfaces are INSPIRE compliant. Data production, subsystem availability and audience are continuously monitored. The presentation will detail the implemented information system architecture and the chosen software solutions. Regarding the information system, MyOcean II mainly aims at consolidating the existing functions and promoting cost-effective operations.
    In addition, a specific effort will be made so that the less common data features of the system (in-situ ocean observations, along-track remote-sensing observations) reach the same level of interoperability for the view and download functions as the gridded features. The presentation will detail the envisioned plans.

  15. Information Architecture for Interactive Archives at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Wiegand, C.; Kuznetsova, M.; Mullinix, R.; Boblitt, J. M.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) is upgrading its metadata system for model simulations to be compliant with the SPASE metadata standard. This work is helping to enhance the SPASE standards for simulations to better describe the wide variety of models and their output. It will enable much more sophisticated and automated metrics and validation efforts at the CCMC, as well as much more robust searches for specific types of output. The new metadata will also allow much more tailored run submissions, as it will allow some code options to be selected for Run-On-Request models. We will also demonstrate access, through an implementation of the Heliophysics Application Programmer's Interface (HAPI) protocol, to data otherwise available through the integrated Space Weather Analysis system (iSWA).

  16. GAMBIT: the global and modular beyond-the-standard-model inference tool

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian

    2017-11-01

    We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.

  17. Open environments to support systems engineering tool integration: A study using the Portable Common Tool Environment (PCTE)

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.

    1993-01-01

    A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of Emeraude environment over the project time frame is summarized, and several related areas for future research are summarized.

  18. Description of the U.S. Geological Survey Geo Data Portal data integration framework

    USGS Publications Warehouse

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Lucido, Jessica M.

    2012-01-01

    The U.S. Geological Survey has developed an open-standard data integration framework for working efficiently and effectively with large collections of climate and other geoscience data. A web interface accesses catalogs of datasets to find data services. Data resources can then be rendered for mapping, and dataset metadata are derived directly from these web services. Algorithm configuration and the information needed to retrieve data for processing are passed to a server where all large-volume data access and manipulation take place. The data integration strategy described here was implemented by leveraging existing free and open source software. Details of the software used are omitted; rather, emphasis is placed on how open-standard web services and data encodings can be used in an architecture that integrates common geographic and atmospheric data.

  19. Status report: Implementation of gas measurements at the MAMS 14C AMS facility in Mannheim, Germany

    NASA Astrophysics Data System (ADS)

    Hoffmann, Helene; Friedrich, Ronny; Kromer, Bernd; Fahrni, Simon

    2017-11-01

    By implementing a Gas Interface System (GIS), CO2 gas measurements for radiocarbon dating of small environmental samples (<100 μgC) have been established at the MICADAS (Mini Carbon Dating System) AMS instrument in Mannheim, Germany. The system performance has been optimized and tested with respect to stability and ion yield by repeated blank and standard measurements for sample sizes down to 3 μgC. The highest ¹²C⁻ low-energy (LE) ion currents, typically reaching 8-15 μA, could be achieved for a mixing ratio of 4% CO2 in helium, resulting in relative counting errors of 1-2% for samples larger than 10 μgC and 3-7% for sample sizes below 10 μgC. The average count rate was ca. 500 counts per microgram of carbon for OxII standard material. The blank is on the order of 35,000-40,000 radiocarbon years, which is comparable to similar systems. The complete setup thus enables reliable dating for most environmental samples (>3 μgC).
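
    As a rough plausibility check (my own arithmetic, not taken from the paper): Poisson counting statistics alone, at the quoted ~500 counts per μgC, give a relative error of 1/√N, which is a lower bound on the total measurement error:

```python
# Counting-statistics check: for N detected counts, the relative Poisson
# error is 1/sqrt(N). Rate below is the approximate value quoted for OxII.
from math import sqrt

COUNTS_PER_UG = 500  # approximate count rate per microgram of carbon

def relative_counting_error(mass_ug):
    n_counts = COUNTS_PER_UG * mass_ug
    return 1.0 / sqrt(n_counts)

err_10ug = relative_counting_error(10)  # ~0.014, i.e. ~1.4%
err_3ug = relative_counting_error(3)    # ~0.026, i.e. ~2.6%
```

    For 10 μgC this gives about 1.4%, consistent with the quoted 1-2% range; for 3 μgC counting alone gives about 2.6%, so the quoted 3-7% at the smallest sizes presumably includes additional non-statistical contributions.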

  20. Implementing the space shuttle data processing system with the space generic open avionics architecture

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This paper presents an overview of the application of the Space Generic Open Avionics Architecture (SGOAA) to the Space Shuttle Data Processing System (DPS) architecture design. This application has been performed to validate the SGOAA and its potential use in flight-critical systems. The paper summarizes key elements of the Space Shuttle avionics architecture, data processing system requirements and software architecture as currently implemented. It then summarizes the SGOAA architecture and describes a tailoring of the SGOAA to the Space Shuttle. The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, a six-class model of interfaces, and functional subsystem architectures for data services and operations control capabilities. It has been proposed as an avionics architecture standard to the National Aeronautics and Space Administration (NASA), through its Strategic Avionics Technology Working Group, and is being considered by the Society of Automotive Engineers (SAE) as an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division of JSC by the Lockheed Engineering and Sciences Company, Houston, Texas.

  1. Economics of automation for the design-to-mask interface

    NASA Astrophysics Data System (ADS)

    Erck, Wesley

    2009-04-01

    Mask order automation has increased steadily over the years through a variety of individual mask-customer implementations, supported by customer-specific software at the mask suppliers to handle the variety of customer output formats. Some customers use the SEMI P10 standard, some use supplier-specific formats, and some use customer-specific formats. Some customers use little automation and depend instead on close customer-supplier relationships. Implementations vary in quality and effectiveness. A major factor that has delayed the adoption of more advanced and effective solutions has been a lack of understanding of the economic benefits. Some customers think standardized automation mainly benefits the mask supplier through order-entry automation, but this ignores a number of other significant benefits that differ dramatically for each party in the supply chain. This paper discusses the nature of those differing advantages and presents simple models suited to four business cases: integrated device manufacturers (IDMs), fabless companies, foundries and mask suppliers. Examples and estimates of the financial advantages for these business types are shown.

  2. Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.

  3. DC Voltage Interface Standards for Naval Applications

    DTIC Science & Technology

    2015-06-24

    Doerry, Norbert; Amy, John (Naval Sea Systems Command, United States Navy, Washington DC, USA)

    MIL-STD-1399... standards have been established for DC interfaces on U.S. naval surface ships. This paper provides recommendations for specific standard DC

  4. Technologies and practices for maintaining and publishing earth science vocabularies

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Yu, Jonathan; Williams, Megan; Giabardo, Fabrizio; Lowe, Dominic

    2015-04-01

    Shared vocabularies are a key element in geoscience data interoperability. Many organizations curate vocabularies, with most Geologic Surveys having a long history of development of lexicons and authority tables. However, their mode of publication is heterogeneous, ranging from PDFs and HTML web pages, spreadsheets and CSV, through various user-interfaces, and public and private APIs. Content maintenance ranges from tightly-governed and externally opaque, through various community processes, all the way to crowd-sourcing ('folksonomies'). Meanwhile, there is an increasing expectation of greater harmonization and vocabulary re-use, which create requirements for standardized content formalization and APIs, along with transparent content maintenance and versioning. We have been trialling a combination of processes and software dealing with vocabulary formalization, registration, search and linking. We use the Simplified Knowledge Organization System (SKOS) to provide a generic interface to content. SKOS is an RDF technology for multi-lingual, hierarchical vocabularies, oriented around 'concepts' denoted by URIs, and thus consistent with Linked Open Data. SKOS may be mixed in with classes and properties from specialized ontologies which provide a more specific interface when required. We have developed a suite of practices and techniques for conversion of content from the source technologies and styles into SKOS, largely based on spreadsheet manipulation before RDF conversion, and SPARQL afterwards. The workflow for each vocabulary must be adapted to match the specific inputs. In linked data applications, two requirements are paramount for user confidence: (i) the URI that denotes a vocabulary item is persistent, and should be dereferenceable indefinitely; (ii) the history and status of the resource denoted by a URI must be available. 
    This is implemented by the Linked Data Registry (LDR), originally developed for the World Meteorological Organization and the UK Environment Agency, and now adapted and enhanced for deployment by CSIRO and the Australian Bureau of Meteorology. The LDR applies a standard content-registration paradigm to RDF data, including a delegation mode that enables a system to register (endorse) externally managed content. The locally managed RDF is exposed on a SPARQL endpoint. The registry implementation enables a flexible interaction pattern to support various specific content-publication workflows, with the key feature of making the content externally accessible through a standard interface alongside its history, previous versions, and status. SPARQL is the standard low-level API for RDF, including SKOS. On top of this we have developed SISSvoc, a SKOS-based RESTful API, which has been used to deploy a number of vocabularies on behalf of the IUGS, ICS, NERC, OGC, the Australian Government, and CSIRO projects. Applications like SISSvoc Search provide a simple search UI on top of one or more SISSvoc sources. Together, these components provide a powerful and flexible system for providing earth science vocabularies for the community, consistent with semantic web and linked-data principles.
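
    The core SKOS idea, concepts denoted by persistent URIs and linked by broader/narrower relations so that any vocabulary can be traversed through one generic interface, can be sketched in a few lines of Python (the URIs and labels are invented for the example; a real deployment would query a SPARQL endpoint rather than an in-memory table):

```python
# Minimal illustration of SKOS-style vocabulary structure: each concept
# is a URI with a prefLabel and an optional skos:broader link, so one
# generic traversal works for any vocabulary encoded this way.

# (concept URI, skos:prefLabel, skos:broader URI or None) -- invented data
CONCEPTS = [
    ("http://example.org/rock", "rock material", None),
    ("http://example.org/igneous", "igneous rock", "http://example.org/rock"),
    ("http://example.org/basalt", "basalt", "http://example.org/igneous"),
]

broader = {uri: b for uri, _, b in CONCEPTS}
labels = {uri: label for uri, label, _ in CONCEPTS}

def broader_path(uri):
    """Follow skos:broader links from a concept up to its top concept."""
    path = [uri]
    while broader[path[-1]] is not None:
        path.append(broader[path[-1]])
    return [labels[u] for u in path]

path = broader_path("http://example.org/basalt")
```

    Because clients only depend on the concept-URI-plus-broader contract, the same code serves a geologic lexicon, an observation-type list, or any other registered vocabulary.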

  5. Using the ACR/NEMA standard with TCP/IP and Ethernet

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Williams, Rodney C.

    1991-07-01

    There is a need for a consolidated picture archival and communications system (PACS) in hospitals. At the Bowman Gray School of Medicine of Wake Forest University (BGSM), the authors are enhancing the ACR/NEMA Version 2 protocol using UNIX sockets and TCP/IP to greatly improve connectivity. Initially, nuclear medicine studies from gamma cameras are to be sent to the PACS. The ACR/NEMA Version 2 protocol provides the functionality of the upper three layers of the open systems interconnection (OSI) model in this implementation. The images, imaging equipment information, and patient information are sent in ACR/NEMA format to a software socket. From there, the data are handed to the TCP/IP protocol, which provides the transport and network service. TCP/IP, in turn, uses the services of IEEE 802.3 (Ethernet) to complete the connectivity. The advantage of this implementation is threefold: (1) Only one I/O port is consumed by numerous nuclear medicine cameras, instead of a physical port for each camera. (2) Standard protocols are used, which maximizes interoperability with ACR/NEMA-compliant PACSs. (3) The use of sockets allows a migration path to the transport and networking services of OSI's TP4 and connectionless network service, as well as to the high-performance protocol being considered by the American National Standards Institute (ANSI) and the International Standards Organization (ISO): the Xpress Transfer Protocol (XTP). The use of sockets also gives access to ANSI's Fiber Distributed Data Interface (FDDI) as well as other high-speed network standards.
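The layering described above can be sketched in a few lines, here in Python rather than the original C (the message content and length-prefix framing are illustrative, not the ACR/NEMA wire format): the application frames a message and hands it to a TCP socket, which relies on whatever link layer sits below for delivery.

```python
# Sketch: application-layer framing over a TCP socket, demonstrated on
# the loopback interface. TCP is a byte stream, so message boundaries
# must be encoded by the application -- here a 4-byte length prefix.
import socket
import struct
import threading

def send_message(sock, payload: bytes):
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)

def serve():
    conn, _ = server.accept()
    send_message(conn, b"image data for study 42")  # stand-in for pixel data
    conn.close()

t = threading.Thread(target=serve)
t.start()
client = socket.create_connection(server.getsockname())
received = recv_message(client)
t.join()
client.close()
server.close()
print(received)  # b'image data for study 42'
```

Many cameras can connect to one listening socket in this way, which is the "one I/O port instead of one physical port per camera" advantage the abstract describes.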

  6. Keynote Address: ACR-NEMA standards and their implications for teleradiology

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.

    1990-06-01

    The ACR-NEMA Standard was developed initially as an interface standard for the interconnection of two pieces of imaging equipment. Essentially, the Standard defines a point-to-point hardware connection with the necessary protocol and data structure so that two differing devices which meet the specification will be able to communicate with each other. The Standard does not define a particular PACS architecture, nor does it specify a database structure. In part, these are the reasons why implementers have had difficulty in using the Standard in a full PACS. Recent activity of the Working Groups formed by the Committee overseeing work on the ACR-NEMA Standard has changed some of the "flavor" of the Standard. It was realized that connection of PACS with hospital and radiology information systems (HIS and RIS) is necessary if a PACS is ever to be successful. The idea of interconnecting heterogeneous computer systems has pushed Standards development beyond the scope of the original work. Teleradiology, which inherently involves wide-area networking, may be a direct beneficiary of the new directions taken by the Standards Working Groups. This paper will give a brief history of the ACR-NEMA effort, describe the "parent" Standard and its "offspring", and describe the activity of the current Working Groups, with particular emphasis on the potential impacts on teleradiology.

  7. An object oriented implementation of the Yeadon human inertia model

    PubMed Central

    Dembia, Christopher; Moore, Jason K.; Hubbard, Mont

    2015-01-01

    We present an open source software implementation of a popular mathematical method developed by M.R. Yeadon for calculating the body and segment inertia parameters of a human body. The software is written in a high level open source language and provides three interfaces for manipulating the data and the model: a Python API, a command-line user interface, and a graphical user interface. Thus the software can fit into various data processing pipelines and requires only simple geometrical measures as input. PMID:25717365

  8. An object oriented implementation of the Yeadon human inertia model.

    PubMed

    Dembia, Christopher; Moore, Jason K; Hubbard, Mont

    2014-01-01

    We present an open source software implementation of a popular mathematical method developed by M.R. Yeadon for calculating the body and segment inertia parameters of a human body. The software is written in a high level open source language and provides three interfaces for manipulating the data and the model: a Python API, a command-line user interface, and a graphical user interface. Thus the software can fit into various data processing pipelines and requires only simple geometrical measures as input.
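The records above describe the software's interfaces rather than the underlying computation. As a rough, self-contained illustration of the idea (not the yeadon package's API, and using truncated cones of uniform density in place of Yeadon's stadium solids), segment inertia properties can be computed from simple geometric measures:

```python
# Illustrative simplification only: each body segment is approximated
# by a truncated cone so the idea fits in a few lines. The measurement
# values at the bottom are hypothetical, not from Yeadon's papers.
import math

def truncated_cone(r0, r1, height, density):
    """Mass and center of mass (measured from the r0 face) of a
    uniform-density truncated cone (frustum)."""
    volume = math.pi * height / 3.0 * (r0**2 + r0 * r1 + r1**2)
    mass = density * volume
    # Frustum centroid along the axis, from the r0 face; reduces to
    # h/4 for a full cone (r1 = 0) and h/2 for a cylinder (r1 = r0).
    com = height * (r0**2 + 2*r0*r1 + 3*r1**2) / (4*(r0**2 + r0*r1 + r1**2))
    return mass, com

def stack_segments(segments):
    """Total mass and overall center of mass of stacked segments."""
    total_mass, moment, base = 0.0, 0.0, 0.0
    for r0, r1, height, density in segments:
        m, com = truncated_cone(r0, r1, height, density)
        total_mass += m
        moment += m * (base + com)
        base += height
    return total_mass, moment / total_mass

# Hypothetical "forearm + hand" measurements in SI units.
segments = [(0.04, 0.03, 0.25, 1050.0), (0.03, 0.02, 0.18, 1050.0)]
mass, com = stack_segments(segments)
```

The real model composes many such solids per segment and also returns full inertia tensors; the point is only that everything follows from "simple geometrical measures", as the abstract says.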

  9. A Natural Language Intelligent Tutoring System for Training Pathologists - Implementation and Evaluation

    PubMed Central

    El Saadawi, Gilan M.; Tseytlin, Eugene; Legowski, Elizabeth; Jukic, Drazen; Castine, Melissa; Fine, Jeffrey; Gormley, Robert; Crowley, Rebecca S.

    2009-01-01

    Introduction: We developed and evaluated a Natural Language Interface (NLI) for an Intelligent Tutoring System (ITS) in Diagnostic Pathology. The system teaches residents to examine pathologic slides and write accurate pathology reports while providing immediate feedback on errors they make in their slide review and diagnostic reports. Residents can ask for help at any point in the case, and will receive context-specific feedback. Research Questions: We evaluated (1) the performance of our natural language system, (2) the effect of the system on learning, (3) the effect of feedback timing on learning gains, and (4) the effect of ReportTutor on performance to self-assessment correlations. Methods: The study uses a crossover 2×2 factorial design. We recruited 20 subjects from 4 academic programs. Subjects were randomly assigned to one of the four conditions: two conditions for the immediate interface, and two for the delayed interface. An expert dermatopathologist created a reference standard and 2 board-certified AP/CP pathology fellows manually coded the residents' assessment reports. Subjects were given the opportunity to self-grade their performance, and we used a survey to determine student response to both interfaces. Results: Our results show a highly significant improvement in report writing after one tutoring session, with a 4-fold increase in learning gains with both interfaces but no effect of feedback timing on performance gains. Residents who used the immediate feedback interface first experienced a feature learning gain that is correlated with the number of cases they viewed. There was no correlation between performance and self-assessment in either condition. PMID:17934789

  10. A natural language intelligent tutoring system for training pathologists: implementation and evaluation.

    PubMed

    El Saadawi, Gilan M; Tseytlin, Eugene; Legowski, Elizabeth; Jukic, Drazen; Castine, Melissa; Fine, Jeffrey; Gormley, Robert; Crowley, Rebecca S

    2008-12-01

    We developed and evaluated a Natural Language Interface (NLI) for an Intelligent Tutoring System (ITS) in Diagnostic Pathology. The system teaches residents to examine pathologic slides and write accurate pathology reports while providing immediate feedback on errors they make in their slide review and diagnostic reports. Residents can ask for help at any point in the case, and will receive context-specific feedback. We evaluated (1) the performance of our natural language system, (2) the effect of the system on learning, (3) the effect of feedback timing on learning gains, and (4) the effect of ReportTutor on performance to self-assessment correlations. The study uses a crossover 2×2 factorial design. We recruited 20 subjects from 4 academic programs. Subjects were randomly assigned to one of the four conditions: two conditions for the immediate interface, and two for the delayed interface. An expert dermatopathologist created a reference standard and 2 board-certified AP/CP pathology fellows manually coded the residents' assessment reports. Subjects were given the opportunity to self-grade their performance, and we used a survey to determine student response to both interfaces. Our results show a highly significant improvement in report writing after one tutoring session, with a 4-fold increase in learning gains with both interfaces but no effect of feedback timing on performance gains. Residents who used the immediate feedback interface first experienced a feature learning gain that is correlated with the number of cases they viewed. There was no correlation between performance and self-assessment in either condition.

  11. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such an infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial (mainly NetCDF and PostGIS) datasets as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising PHP applications of the SDI geoportal, realizing the interaction with the computational core backend and the WMS/WFS/WPS cartographical services, as well as implementing an open API for browser-based client software. This part also provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part representing the Web GIS client, developed as a "single page application" based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs), and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to that of popular desktop GIS applications such as uDig, Quantum GIS, etc. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. 
In line with general INSPIRE requirements for data visualization, the Web GIS provides standard functionality such as data overview, image navigation, scrolling, scaling and graphical overlay, and display of map legends and corresponding metadata. The specialized Web GIS client contains three basic tiers:
• a tier of NetCDF metadata in JSON format;
• a middleware tier of JavaScript objects implementing methods to work with:
  o NetCDF metadata,
  o the XML file of the selected calculation configuration (XML task),
  o WMS/WFS/WPS cartographical services;
• a graphical user interface tier of JavaScript objects realizing the general application business logic.
The Web GIS provides launching of computational processing services to support tasks in environmental monitoring, and presents calculation results as WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and binary (NetCDF) formats. It has proven effective in solving real climate change research problems and disseminating investigation results in cartographical formats. The work is supported by the Russian Science Foundation grant No 16-19-10257.
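For illustration, a GetMap request to such a WMS service is just a parameterized HTTP URL. The parameter names below follow the OGC WMS 1.3.0 GetMap operation; the endpoint and layer name are invented:

```python
# Sketch of the kind of request the client tier issues to a WMS
# service (stdlib only; nothing is fetched here, we only build the URL).
from urllib.parse import urlencode

def getmap_url(base_url, layer, bbox, width, height,
               crs="EPSG:4326", fmt="image/png"):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

url = getmap_url("http://example.org/geoportal/wms",   # hypothetical endpoint
                 "mean_air_temperature",               # hypothetical layer
                 bbox=(50.0, 60.0, 56.0, 90.0),        # lat/lon order per EPSG:4326
                 width=800, height=600)
```

Libraries like OpenLayers assemble exactly this sort of URL internally; seeing it spelled out makes clear why the server-side part only needs "a limited set of procedures accessible via a standard HTTP interface".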

  12. A proposed application programming interface for a physical volume repository

    NASA Technical Reports Server (NTRS)

    Jones, Merritt; Williams, Joel; Wrenn, Richard

    1996-01-01

    The IEEE Storage System Standards Working Group (SSSWG) has developed the Reference Model for Open Storage Systems Interconnection, Mass Storage System Reference Model Version 5. This document provides the framework for a series of standards for application and user interfaces to open storage systems. More recently, the SSSWG has been developing Application Programming Interfaces (APIs) for the individual components defined by the model. The API for the Physical Volume Repository is the most fully developed, but work is also being done on APIs for the Physical Volume Library and for the Mover. The SSSWG meets every other month, and meetings are open to all interested parties. The Physical Volume Repository (PVR) is responsible for managing the storage of removable media cartridges and for mounting and dismounting these cartridges onto drives. This document describes a model which defines a Physical Volume Repository, and gives a brief summary of the Application Programming Interface (API) which the SSSWG is proposing as the standard interface for the PVR.
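The proposed API itself is specified in the SSSWG documents; the sketch below (all names hypothetical, not taken from the draft standard) only illustrates the division of responsibility described above: the PVR tracks cartridge locations and mounts/dismounts cartridges onto drives.

```python
# Hypothetical illustration of a PVR's responsibilities -- not the
# SSSWG-proposed API, whose actual names and signatures differ.
class PhysicalVolumeRepository:
    def __init__(self, slots, drives):
        self.slots = dict(slots)                  # cartridge_id -> slot number
        self.drives = {d: None for d in drives}   # drive -> mounted cartridge

    def mount(self, cartridge_id, drive):
        if self.drives[drive] is not None:
            raise RuntimeError(f"drive {drive} is busy")
        if cartridge_id not in self.slots:
            raise KeyError(f"cartridge {cartridge_id} not in repository")
        self.drives[drive] = cartridge_id

    def dismount(self, drive):
        cartridge_id, self.drives[drive] = self.drives[drive], None
        return cartridge_id

pvr = PhysicalVolumeRepository({"TAPE01": 0, "TAPE02": 1}, ["drive0"])
pvr.mount("TAPE01", "drive0")
returned = pvr.dismount("drive0")   # "TAPE01"
```

The point of standardizing this interface is that robotic libraries, shelf storage, and human operators can all sit behind the same mount/dismount contract.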

  13. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and memory address expansion is also described.

  14. A Human Factors Perspective on Alarm System Research and Development 2000 to 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curt Braun; John Grimes; Eric Shaver

    By definition, alarms serve to notify human operators of out-of-parameter conditions that could threaten equipment, the environment, product quality and, of course, human life. Given the complexities of industrial systems, human machine interfaces, and the human operator, the understanding of how alarms and humans can best work together to prevent disaster is continually developing. This review examines advances in alarm research and development from 2000 to 2010 and includes the writings of trade professionals, engineering and human factors researchers, and standards organizations with the goal of documenting advances in alarms system design, research, and implementation.

  15. Label-free evanescent microscopy for membrane nano-tomography in living cells.

    PubMed

    Bon, Pierre; Barroca, Thomas; Lévèque-Fort, Sandrine; Fort, Emmanuel

    2014-11-01

    We show that through-the-objective evanescent microscopy (epi-EM) is a powerful technique to image membranes in living cells. Readily implementable on a standard inverted microscope, this technique enables full-field and real-time tracking of membrane processes without labeling and thus signal fading. In addition, we demonstrate that the membrane/interface distance can be retrieved with 10 nm precision using a multilayer Fresnel model. We apply this nano-axial tomography of living cell membranes to retrieve quantitative information on membrane invagination dynamics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

  16. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  17. Digital hand atlas and computer-aided bone age assessment via the Web

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    1999-07-01

    A frequently used assessment method of bone age is atlas matching by a radiological examination of a hand image against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect skeletal maturity, race and sex difference, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the currently out-of-date one and allow the bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
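The matching step described above can be sketched as a nearest-pattern search over extracted feature vectors. This is an illustrative simplification, not the authors' algorithm; the feature values and ages below are made up:

```python
# Toy sketch: pick the atlas entry whose quantitative features are
# closest (Euclidean distance) to those extracted from the examined
# image, and report its bone age. All numbers are invented.
import math

atlas = {  # bone age (years) -> quantitative feature vector
    6.0:  [0.42, 0.18, 0.05],
    10.0: [0.61, 0.33, 0.21],
    14.0: [0.78, 0.52, 0.44],
}

def assess_bone_age(features, atlas):
    return min(atlas, key=lambda age: math.dist(features, atlas[age]))

age = assess_bone_age([0.60, 0.30, 0.20], atlas)   # closest to the 10.0 entry
```

A real system would also return the nearest reference images for visual confirmation, which is what the abstract's "relevant reference images" in the assessment report refers to.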

  18. A Hardware-in-the-Loop Simulation Platform for the Verification and Validation of Safety Control Systems

    NASA Astrophysics Data System (ADS)

    Rankin, Drew J.; Jiang, Jin

    2011-04-01

    Verification and validation (V&V) of safety control system quality and performance is required prior to installing control system hardware within nuclear power plants (NPPs). Thus, the objective of the hardware-in-the-loop (HIL) platform introduced in this paper is to verify the functionality of these safety control systems. The developed platform provides a flexible simulated testing environment which enables synchronized coupling between the real and simulated world. Within the platform, National Instruments (NI) data acquisition (DAQ) hardware provides an interface between a programmable electronic system under test (SUT) and a simulation computer. Further, NI LabVIEW resides on this remote DAQ workstation for signal conversion and routing between Ethernet and standard industrial signals as well as for user interface. The platform is applied to the testing of a simplified implementation of Canadian Deuterium Uranium (CANDU) shutdown system no. 1 (SDS1) which monitors only the steam generator level of the simulated NPP. CANDU NPP simulation is performed on a Darlington NPP desktop training simulator provided by Ontario Power Generation (OPG). Simplified SDS1 logic is implemented on an Invensys Tricon v9 programmable logic controller (PLC) to test the performance of both the safety controller and the implemented logic. Prior to HIL simulation, platform availability of over 95% is achieved for the configuration used during the V&V of the PLC. Comparison of HIL simulation results to benchmark simulations shows good operational performance of the PLC following a postulated initiating event (PIE).

  19. International Docking System Standard (IDSS) Interface Definition Document (IDD), Revision E

    NASA Technical Reports Server (NTRS)

    Kelly, Sean M.; Cryan, Scott P.

    2016-01-01

    This International Docking System Standard (IDSS) Interface Definition Document (IDD) is the result of a collaboration by the International Space Station membership to establish a standard docking interface to enable on-orbit crew rescue operations and joint collaborative endeavors utilizing different spacecraft. This IDSS IDD details the physical geometric mating interface and design loads requirements. The physical geometric interface requirements must be strictly followed to ensure physical spacecraft mating compatibility. This includes both defined components and areas that are void of components. The IDD also identifies common design parameters as identified in section 3.0, e.g., docking initial conditions and vehicle mass properties. This information represents a recommended set of design values enveloping a broad set of design reference missions and conditions, which if accommodated in the docking system design, increases the probability of successful docking between different spacecraft. This IDD does not address operational procedures or off-nominal situations, nor does it dictate implementation or design features behind the mating interface. It is the responsibility of the spacecraft developer to perform all hardware verification and validation, and to perform final docking analyses to ensure the needed docking performance and to develop the final certification loads for their application. While there are many other critical requirements needed in the development of a docking system such as fault tolerance, reliability, and environments (e.g. vibration, etc.), it is not the intent of the IDSS IDD to mandate all of these requirements; these requirements must be addressed as part of the specific developer's unique program, spacecraft and mission needs. This approach allows designers the flexibility to design and build docking mechanisms to their unique program needs and requirements. 
The purpose of the IDSS IDD is to provide basic common design parameters to allow developers to independently design compatible docking systems. The IDSS is intended for uses ranging from crewed to autonomous space vehicles, and from Low Earth Orbit (LEO) to deep-space exploration missions.

  20. Implementation of a virtual laboratory for training on sound insulation testing and uncertainty calculations in acoustic tests.

    PubMed

    Asensio, C; Gasco, L; Ruiz, M; Recuero, M

    2015-02-01

    This paper describes a methodology and case study for the implementation of educational virtual laboratories for practice training on acoustic tests according to international standards. The objectives of this activity are (a) to help the students understand and apply the procedures described in the standards and (b) to familiarize the students with the uncertainty in measurement and its estimation in acoustics. The virtual laboratory will not focus on the handling and set-up of real acoustic equipment but rather on procedures and uncertainty. The case study focuses on the application of the virtual laboratory for facade sound insulation tests according to ISO 140-5:1998 (International Organization for Standardization, Geneva, Switzerland, 1998), and the paper describes the causal and stochastic models and the constraints applied in the virtual environment under consideration. With a simple user interface, the laboratory will provide measurement data that the students will have to process to report the insulation results that must converge with the "virtual true values" in the laboratory. The main advantage of the virtual laboratory is derived from the customization of factors in which the student will be instructed or examined (for instance, background noise correction, the detection of sporadic corrupted observations, and the effect of instrument precision).
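The core quantity students report in a facade test of this kind is the standardized level difference of ISO 140-5, D_2m,nT = L1,2m - L2 + 10·lg(T/T0) with reference reverberation time T0 = 0.5 s. A minimal per-band sketch (the measurement values below are hypothetical):

```python
# Per-band standardized level difference per ISO 140-5. The inputs a
# virtual laboratory would generate are the outdoor level L1,2m, the
# receiving-room level L2 and the reverberation time T, per band.
import math

T0 = 0.5  # reference reverberation time in seconds (ISO 140-5)

def standardized_level_difference(l1_2m, l2, reverb_time):
    """D_2m,nT in dB from L1,2m (dB), L2 (dB) and T (s) for one band."""
    return l1_2m - l2 + 10.0 * math.log10(reverb_time / T0)

# Hypothetical single-band measurement: 85 dB outside, 52 dB inside,
# T = 1.0 s, so D_2m,nT = 33 + 10*lg(2) ≈ 36.0 dB.
d = standardized_level_difference(85.0, 52.0, 1.0)
```

In the virtual laboratory, the student's band-by-band values of this quantity are what must converge with the "virtual true values", with background-noise corrections and corrupted observations handled before this step.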

  1. Process control systems: integrated for future process technologies

    NASA Astrophysics Data System (ADS)

    Botros, Youssry; Hajj, Hazem M.

    2003-06-01

    Process Control Systems (PCS) are becoming more crucial to the success of integrated circuit makers due to their direct impact on product quality, cost, and fab output. The primary objective of PCS is to minimize variability by detecting and correcting non-optimal performance. Current PCS implementations are considered disparate, where each PCS application is designed, deployed and supported separately. Each implementation targets a specific area of control, such as equipment performance, wafer manufacturing, and process health monitoring. With Intel entering the nanometer technology era, tighter process specifications are required for higher yields and lower cost. This requires areas of control to be tightly coupled and integrated to achieve optimal performance. This requirement can be achieved via consistent design and deployment of an integrated PCS. PCS integration will result in several benefits, such as leveraging commonalities, avoiding redundancy, and facilitating sharing between implementations. This paper will address PCS implementations and focus on the benefits and requirements of an integrated PCS. The Intel integrated PCS architecture will then be presented and its components briefly discussed. Finally, industry direction and efforts to standardize PCS interfaces that enable PCS integration will be presented.

  2. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    PubMed

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  3. Network and user interface for PAT DOME virtual motion environment system

    NASA Technical Reports Server (NTRS)

    Worthington, J. W.; Duncan, K. M.; Crosier, W. G.

    1993-01-01

    The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper will concentrate on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard, low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications through Novell Netware IPX with the rest of the system. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol needed to be implemented. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.
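The switch from a stream to datagrams can be sketched as follows, in Python rather than the original C (the payload text is invented): each sendto() travels as one discrete UDP packet, so messages are never coalesced in a buffer the way TCP stream data can be, which is the buffering problem the abstract mentions.

```python
# Sketch: UDP datagram exchange over the loopback interface. Unlike a
# TCP stream, one sendto() corresponds to one recvfrom(), preserving
# message boundaries without application-level framing.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: the OS assigns a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame 1: viewpoint update", addr)   # hypothetical payload

data, _ = receiver.recvfrom(4096)      # one recvfrom() per datagram
print(data)  # b'frame 1: viewpoint update'

sender.close()
receiver.close()
```

The trade-off, of course, is that UDP offers no delivery or ordering guarantees; for a real-time loop regenerating its state every frame, a late packet is better dropped than delayed, which is why the datagram protocol suited this system.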

  4. A Multi-Center Space Data System Prototype Based on CCSDS Standards

    NASA Technical Reports Server (NTRS)

    Rich, Thomas M.

    2016-01-01

    Deep space missions beyond earth orbit will require new methods of data communications in order to compensate for increasing Radio Frequency (RF) propagation delay. The Consultative Committee for Space Data Systems (CCSDS) standard protocols Spacecraft Monitor & Control (SM&C), Asynchronous Message Service (AMS), and Delay/Disruption Tolerant Networking (DTN) provide such a method. However, the maturity level of this protocol stack is insufficient for mission inclusion at this time. This Space Data System prototype is intended to provide experience which will raise the Technology Readiness Level (TRL) of this protocol set. In order to reduce costs, future missions can take advantage of these standard protocols, which will result in increased interoperability between control centers. This prototype demonstrates these capabilities by implementing a realistic space data system in which telemetry is published to control center applications at the Jet Propulsion Laboratory (JPL), the Marshall Space Flight Center (MSFC), and the Johnson Space Center (JSC). Reverse publishing paths for commanding from each control center are also implemented. The target vehicle consists of realistic flight computer hardware running Core Flight Software (CFS) in the Integrated Power, Avionics, and Software (iPAS) Pathfinder Lab at JSC. This prototype demonstrates a potential upgrade path for future Deep Space Network (DSN) modification, in which the automatic error recovery and communication gap compensation capabilities of DTN would be exploited. In addition, SM&C provides architectural flexibility by allowing new service providers and consumers to be added efficiently anywhere in the network using the common interface provided by SM&C's Message Abstraction Layer (MAL). In FY 2015, this space data system was enhanced by adding telerobotic operations capability provided by the Robot API Delegate (RAPID) family of protocols developed at NASA. 
RAPID is one of several candidates for consideration and inclusion in a new international standard being developed by the CCSDS Telerobotic Operations Working Group. Software gateways for the purpose of interfacing RAPID messages with the existing SM&C based infrastructure were developed. Telerobotic monitor, control, and bridge applications were written in the RAPID framework, which were then tailored to the NAO telerobotic test article hardware, a product of Aldebaran Robotics.

  5. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    NASA Astrophysics Data System (ADS)

    Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.

    1995-03-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI implementation of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing of the developed program has shown scalable performance for reasonably sized computational domains.
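
    The abstract does not reproduce PHOENICS' actual parallel solver, but MPI-parallel CFD codes of this kind typically rest on domain decomposition with ghost-cell halo exchange. Below is a minimal pure-Python sketch of that pattern for a 1-D Jacobi relaxation; message passing is simulated by direct list copies, where a real code would use MPI point-to-point calls such as MPI_Sendrecv.

```python
# Sketch of 1-D domain decomposition with ghost cells, the pattern an
# MPI-parallel CFD solver typically uses. Message passing is simulated
# here with direct copies; a real code would use MPI_Sendrecv.

def decompose(field, nprocs):
    """Split a 1-D field into per-rank subdomains with one ghost cell per side."""
    n = len(field) // nprocs
    subs = []
    for r in range(nprocs):
        interior = field[r * n:(r + 1) * n]
        subs.append([0.0] + interior + [0.0])  # [ghost] interior [ghost]
    return subs

def exchange_halos(subs):
    """Fill each subdomain's ghost cells from its neighbours' edge values."""
    for r, sub in enumerate(subs):
        if r > 0:
            sub[0] = subs[r - 1][-2]      # left ghost <- left neighbour's edge
        if r < len(subs) - 1:
            sub[-1] = subs[r + 1][1]      # right ghost <- right neighbour's edge

def jacobi_step(sub):
    """One Jacobi relaxation sweep over the interior cells."""
    return [sub[0]] + [0.5 * (sub[i - 1] + sub[i + 1])
                       for i in range(1, len(sub) - 1)] + [sub[-1]]

field = [0.0, 0.0, 0.0, 0.0, 4.0, 4.0, 4.0, 4.0]
subs = decompose(field, nprocs=2)
exchange_halos(subs)
subs = [jacobi_step(s) for s in subs]
result = [x for s in subs for x in s[1:-1]]   # strip ghosts and gather
```

    Each rank only communicates its single-cell boundary with its neighbours per step, which is why such codes scale well once the computational domain per rank is reasonably large.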

  6. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, S.; Zacharia, T.; Baltas, N.

    1995-04-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI implementation of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing of the developed program has shown scalable performance for reasonably sized computational domains.

  7. Implementing QML for radiation hardness assurance

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Sexton, F. W.; Fleetwood, D. M.; Terry, M. D.; Shaneyfelt, M. R.

    1990-12-01

    The US government has proposed a qualified manufacturers list (QML) methodology to qualify integrated circuits for high reliability and radiation hardness. An approach to implementing QML for single-event upset (SEU) immunity on 16k SRAMs is demonstrated that relates values of feedback resistance to system error rates. It is seen that the process capability indices, Cp and Cpk, for the manufacture of the 400 kΩ feedback resistors required to provide SEU tolerance do not conform to 6 sigma quality standards. For total dose, interface-trap voltage shifts (Delta Vit) measured on transistors are correlated with circuit response in the space environment. Statistical process control (SPC) is illustrated for Delta Vit, and violations of SPC rules are interpreted in terms of continuous improvement. Design validation for SEU and quality conformance inspections for total dose are identified as major obstacles to cost-effective QML implementation. Physical models, 3-D device-plus-circuit codes, and improved design simulators are identified as techniques and tools that will help QML provide real cost savings.
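
    The process capability indices mentioned are standard statistical-process-control quantities: for lower/upper specification limits LSL and USL, process mean μ and standard deviation σ, Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ, with Cpk ≥ 2.0 corresponding to "6 sigma" quality. A short sketch; the spec limits and process numbers below are illustrative, not the paper's measured values:

```python
# Process capability indices used in SPC/QML qualification.
# Cp measures process spread against the spec window; Cpk also
# penalizes an off-center process mean.

def capability(mean, sigma, lsl, usl):
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
    return cp, cpk

# Illustrative (not the paper's) numbers for a 400 kOhm feedback
# resistor with +/- 20% spec limits:
cp, cpk = capability(mean=410e3, sigma=15e3, lsl=320e3, usl=480e3)
```

    Here Cp ≈ 1.78 and Cpk ≈ 1.56: both fall short of the 2.0 threshold, the kind of shortfall the paper reports for SEU-hardening resistors.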

  8. Finite-element lattice Boltzmann simulations of contact line dynamics

    NASA Astrophysics Data System (ADS)

    Matin, Rastin; Misztal, Marek Krzysztof; Hernández-García, Anier; Mathiesen, Joachim

    2018-01-01

    The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.

  9. SKIRT: Hybrid parallelization of radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.

    2017-07-01

    We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
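
    SKIRT's shared-memory layer uses C++ atomic operations; Python lacks true low-level atomics, but a closely related lock-avoidance pattern can be sketched in it: give each thread a private accumulator and merge the per-thread results once at the end, so no per-packet synchronization is needed. The bin assignment below is a stand-in for the physics, purely for illustration:

```python
# Sketch of the shared-memory half of a hybrid scheme: each thread
# tallies photon-packet contributions into its own private array, and
# the per-thread arrays are merged once at the end, avoiding
# fine-grained locking on a shared data structure. (SKIRT itself uses
# C++ atomics; this per-thread-merge pattern is a related technique.)
import threading

NBINS, NTHREADS, PACKETS_PER_THREAD = 4, 3, 1000

def worker(tid, local_tallies):
    tally = local_tallies[tid]                 # private to this thread
    for p in range(PACKETS_PER_THREAD):
        bin_index = (tid + p) % NBINS          # stand-in for a physics computation
        tally[bin_index] += 1.0

local_tallies = [[0.0] * NBINS for _ in range(NTHREADS)]
threads = [threading.Thread(target=worker, args=(t, local_tallies))
           for t in range(NTHREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A single merge step replaces per-packet synchronization.
total = [sum(col) for col in zip(*local_tallies)]
```

    The trade-off mirrors the one the paper discusses: per-thread copies cost memory, which is exactly why SKIRT's atomics-on-shared-data approach is attractive when data structures are large.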

  10. Data transmission protocol for Pi-of-the-Sky cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, J.; Kasprowicz, G.; Mankiewicz, M.; Nawrocki, K.; Sitek, P.; Sokolowski, M.; Sulej, R.; Tlaczala, W.

    2006-10-01

    The large amount of data collected by automatic astronomical cameras has to be transferred to fast computers in a reliable way. The method chosen should ensure data streaming in both directions, but in a nonsymmetrical way. The Ethernet interface is a very good choice because of its popularity and proven performance. However, it requires a TCP/IP stack implementation in devices like cameras for full compliance with existing networks and operating systems. This paper describes the NUDP protocol, which was created as a supplement to the standard UDP protocol and can be used as a simple network protocol. NUDP does not need a TCP protocol implementation and makes it possible to run an Ethernet network with simple devices based on microcontroller and/or FPGA chips. The data transmission scheme was created especially for the "Pi of the Sky" project.
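
    The abstract does not reproduce NUDP's wire format, but the general idea of a lightweight reliability layer over UDP can be sketched: frame each datagram with a sequence number and have the receiver acknowledge it, so a microcontroller-class device can detect loss without a full TCP stack. The 4-byte big-endian sequence header below is a hypothetical layout, not the actual NUDP format:

```python
# Sketch of a sequence-numbered request/ack exchange over plain UDP on
# the loopback interface. The 4-byte big-endian sequence header is a
# hypothetical layout, not NUDP's actual wire format.
import socket
import struct

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                  # ephemeral port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)

# Sender: frame the payload with a sequence number and transmit.
seq, payload = 7, b"frame-data"
client.sendto(struct.pack("!I", seq) + payload, addr)

# Receiver: unpack the header and acknowledge the sequence number.
data, peer = server.recvfrom(2048)
(got_seq,), body = struct.unpack("!I", data[:4]), data[4:]
server.sendto(struct.pack("!I", got_seq), peer)

# Sender: a matching ack means this datagram need not be retransmitted.
ack, _ = client.recvfrom(2048)
(acked,) = struct.unpack("!I", ack)

client.close()
server.close()
```

    A missing ack within the timeout would trigger retransmission on the sender side; that retry loop is omitted here for brevity.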

  11. Chapter 3. Coordination and collaboration with interface units

    PubMed Central

    Joynt, Gavin M.; Loo, Shi; Taylor, Bruce L.; Margalit, Gila; Christian, Michael D.; Sandrock, Christian; Danis, Marion; Leoniv, Yuval

    2016-01-01

    Purpose To provide recommendations and standard operating procedures (SOPs) for intensive care unit (ICU) and hospital preparations for an influenza pandemic or mass disaster, with a specific focus on enhancing coordination and collaboration between the ICU and other key stakeholders. Methods Based on a literature review and expert opinion, a Delphi process was used to define the essential topics, including coordination and collaboration. Results Key recommendations include: (1) establish an Incident Management System with Emergency Executive Control Groups at the facility, local, regional/state or national level to exercise authority and direction over resource use and communications; (2) develop a system of communication, coordination and collaboration between the ICU and key interface departments within the hospital; (3) identify key functions or processes requiring coordination and collaboration, the most important of these being manpower and resource utilization (surge capacity) and re-allocation of personnel, equipment and physical space; (4) develop processes to allow smooth inter-departmental patient transfers; (5) recognize that creating systems and guidelines is not sufficient; it is important to: (a) identify the roles and responsibilities of key individuals necessary for the implementation of the guidelines; (b) ensure that these individuals are adequately trained and prepared to perform their roles; (c) ensure adequate equipment to allow key coordination and collaboration activities; (d) ensure an adequate physical environment to allow staff to properly implement guidelines; (6) define trigger events for determining a crisis. Conclusions Judicious planning and adoption of protocols for coordination and collaboration with interface units are necessary to optimize outcomes during a pandemic. PMID:20213418

  12. Eodataservice.org: Big Data Platform to Enable Multi-disciplinary Information Extraction from Geospatial Data

    NASA Astrophysics Data System (ADS)

    Natali, S.; Mantovani, S.; Barboni, D.; Hogan, P.

    2017-12-01

    In 1999, US Vice-President Al Gore outlined the concept of 'Digital Earth' as a multi-resolution, three-dimensional representation of the planet to find, visualise and make sense of vast amounts of geo-referenced information on physical and social environments, allowing users to navigate through space and time and to access historical and forecast data in support of scientists, policy-makers, and any other user. The eodataservice platform (http://eodataservice.org/) implements the Digital Earth concept: eodataservice is a cross-domain platform that makes available a large set of multi-year global environmental collections allowing data discovery, visualization, combination, processing and download. It implements a "virtual datacube" approach where data stored in distributed data centers are made available via standardized OGC-compliant interfaces. Dedicated web-based graphic user interfaces (based on the ESA-NASA WebWorldWind technology) as well as web-based notebooks (e.g. Jupyter notebook), desktop GIS tools and command line interfaces can be used to access and manipulate the data. The platform can be fully customized to users' needs. So far eodataservice has been used for the following thematic applications: high-resolution satellite data distribution; land surface monitoring using SAR surface deformation data; atmosphere, ocean and climate applications; climate-health applications; urban environment monitoring; safeguard of cultural heritage sites; and support to farmers and (re)insurances in the agriculture field. In the current work, the EO Data Service concept is presented as a key enabling technology; furthermore, various examples are provided to demonstrate the high level of interdisciplinarity of the platform.
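
    The platform's "standardized OGC-compliant interfaces" are exactly what makes such a datacube accessible from notebooks and GIS tools alike. As an illustration, a standard OGC WMS 1.3.0 GetMap request can be assembled with nothing but the query parameters the specification defines; the endpoint URL and layer name below are hypothetical placeholders, not eodataservice's actual service:

```python
# Building a standard OGC WMS 1.3.0 GetMap request, the kind of
# interface an OGC-compliant datacube exposes. The endpoint and layer
# name are hypothetical placeholders.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width, height,
                   crs="EPSG:4326", fmt="image/png"):
    """Return a GetMap URL; bbox follows the CRS axis order (lat/lon for EPSG:4326 in WMS 1.3.0)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "ndvi",
                     bbox=(35.0, 6.0, 47.0, 19.0), width=512, height=512)
```

    Because every parameter name and meaning is fixed by the standard, any WMS client — a browser globe, a Jupyter notebook, or a desktop GIS — can consume the same service without custom adapters.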

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Bruno; Carvalho, Paulo F.; Rodrigues, A.P.

    The ATCA standard specifies a mandatory Shelf Manager (ShM) unit which is a key element for system operation. It includes the Intelligent Platform Management Controller (IPMC), which monitors the system health, retrieves inventory information and controls the Field Replaceable Units (FRUs). These elements enable intelligent health monitoring, providing high availability and safe operation and ensuring correct system operation. For critical systems like those of the ITER tokamak, these features are mandatory to support long pulse operation. The Nominal Device Support (NDS) was designed and developed for the ITER CODAC Core System (CCS), which will be responsible for plant Instrumentation and Control (I and C), supervising and monitoring on ITER. It generalizes the Experimental Physics and Industrial Control System (EPICS) device support interface for Data Acquisition (DAQ) and timing devices. However, support for health management features and the ATCA ShM is not yet provided. This paper presents the implementation and test of an NDS for the ATCA ShM, using the ITER Fast Plant System Controller (FPSC) prototype environment. This prototype is fully compatible with the ITER CCS and uses the EPICS Channel Access (CA) protocol as the interface with the Plant Operation Network (PON). The implemented solution, running in an EPICS Input/Output Controller (IOC), provides Process Variables (PVs) carrying the system information to the PON network. These PVs can be used for control and monitoring by all CA clients, such as EPICS user interface clients and alarm systems. The results are presented, demonstrating the full integration and usability of this solution. (authors)
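
    The core abstraction here is the EPICS process variable: an IOC publishes named values, and any Channel Access client can subscribe to change notifications. Real clients would use a CA library (for example pyepics); the toy server below only sketches that publish/subscribe idea in plain Python, and the PV name and alarm threshold are hypothetical:

```python
# Minimal sketch of the process-variable publish/subscribe idea behind
# EPICS Channel Access: an IOC-side table of PVs notifies registered
# clients on every value change. The PV name and threshold are
# hypothetical; real clients would use a CA library such as pyepics.

class PVServer:
    def __init__(self):
        self._values = {}
        self._monitors = {}            # PV name -> list of callbacks

    def monitor(self, name, callback):
        """Register a client callback for updates to one PV."""
        self._monitors.setdefault(name, []).append(callback)

    def put(self, name, value):
        """Write a PV and push the update to every subscriber."""
        self._values[name] = value
        for cb in self._monitors.get(name, []):
            cb(name, value)

ioc = PVServer()
alarms = []

def fan_alarm(pv, value):
    if value < 2000:                   # illustrative degraded-fan threshold
        alarms.append((pv, value))

ioc.monitor("SHELF:FAN1:RPM", fan_alarm)
ioc.put("SHELF:FAN1:RPM", 3400)        # healthy reading, no alarm
ioc.put("SHELF:FAN1:RPM", 1500)        # degraded fan triggers the callback
```

    An alarm system subscribed this way reacts to shelf-health changes without polling, which is the pattern the NDS implementation exposes to the PON network.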

  14. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    NASA Astrophysics Data System (ADS)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of such models is quite difficult to implement on the web, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, drawing on international experience, in order to utilize them in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the server tier, Apache HTTP Server and GeoServer are utilized, with PHP as the server-side programming language. At the client tier, which implements the interface of the application, the following technologies are used: JQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and a fully open source development.

  15. Semantically-enabled sensor plug & play for the sensor web.

    PubMed

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.
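
    The semantic matchmaking the authors describe relies on OWL ontologies and reasoning engines; the underlying idea, though, can be sketched with a toy subsumption hierarchy: a sensor matches a request if its declared observed property equals the requested one or is a subclass of it. The hierarchy, property names, and sensor registry below are illustrative stand-ins, not the paper's ontology:

```python
# Toy sketch of semantic matchmaking: a request for an observed
# property matches a sensor if the sensor's declared property equals
# the requested one or is subsumed by it in a small class hierarchy.
# The hierarchy and registry are illustrative stand-ins for OWL
# ontologies and a reasoning engine.

SUBCLASS_OF = {                        # child -> parent
    "OilSlickThickness": "HydrocarbonProperty",
    "HydrocarbonProperty": "ChemicalProperty",
    "WaterTemperature": "PhysicalProperty",
}

def subsumed_by(prop, requested):
    """Walk up the hierarchy from prop, checking for the requested class."""
    while prop is not None:
        if prop == requested:
            return True
        prop = SUBCLASS_OF.get(prop)
    return False

SENSORS = {
    "fluorometer-01": "OilSlickThickness",
    "thermistor-07": "WaterTemperature",
}

def match(requested):
    return sorted(s for s, p in SENSORS.items() if subsumed_by(p, requested))

matches = match("HydrocarbonProperty")
```

    In the oil spill scenario, a request for a hydrocarbon-related observation would thus discover the fluorometer even though its metadata never mentions the requested term verbatim — the reasoning step supplies the connection.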

  16. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    PubMed Central

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  17. OpenROCS: a software tool to control robotic observatories

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Sanz, Josep; Vilardell, Francesc; Ribas, Ignasi; Gil, Pere

    2012-09-01

    We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed for the robotic control of telescopes. It acts as a software infrastructure that executes all the processes necessary to implement responses to the system events that appear in the routine and non-routine operations associated with data-flow and housekeeping control. The OpenROCS software design and implementation provide high flexibility for adaptation to different observatory configurations and event-action specifications. It is based on an abstract model that is independent of the specific hardware or software and is highly configurable. Interfaces to the system components are defined in a simple manner to achieve this goal. We give a detailed description of version 2.0 of this software, based on a modular architecture developed in PHP and XML configuration files, and using standard communication protocols to interface with applications for hardware monitoring and control, environment monitoring, scheduling of tasks, image processing and data quality control. We provide two examples of how it is used as the core element of the control system in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).
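
    The event-action model driven by XML configuration files is the heart of such a system: events (weather alerts, hardware faults, scheduling triggers) map to ordered lists of actions. The abstract does not reproduce the OpenROCS schema, so the element and attribute names below are hypothetical, purely to illustrate how an event-to-actions mapping might be parsed:

```python
# Parsing a hypothetical event->action mapping of the kind an
# observatory control system drives from XML configuration files.
# The element and attribute names are illustrative; the real
# OpenROCS schema is not reproduced in the abstract.
import xml.etree.ElementTree as ET

CONFIG = """
<observatory>
  <event name="rain_detected">
    <action>close_dome</action>
    <action>park_telescope</action>
  </event>
  <event name="twilight_end">
    <action>open_dome</action>
  </event>
</observatory>
"""

def load_event_actions(xml_text):
    """Return a dict mapping each event name to its ordered action list."""
    root = ET.fromstring(xml_text)
    return {ev.get("name"): [a.text for a in ev.findall("action")]
            for ev in root.findall("event")}

handlers = load_event_actions(CONFIG)
```

    Keeping the mapping in configuration rather than code is what lets one abstract control core serve observatories as different as the Joan Oró and SuperWASP Qatar telescopes.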

  18. Overview of Graphical User Interfaces.

    ERIC Educational Resources Information Center

    Hulser, Richard P.

    1993-01-01

    Discussion of graphical user interfaces for online public access catalogs (OPACs) covers the history of OPACs; OPAC front-end design, including examples from Indiana University and the University of Illinois; and planning and implementation of a user interface. (10 references) (EA)

  19. The crew activity planning system bus interface unit

    NASA Technical Reports Server (NTRS)

    Allen, M. A.

    1979-01-01

    The hardware and software designs used to implement a high speed parallel communications interface to the MITRE 307.2 kilobit/second serial bus communications system are described. The primary topic is the development of the bus interface unit.

  20. Real World Data and Service Integration: Demonstrations and Lessons Learnt from the GEOSS Architecture Implementation Pilot Phase Four

    NASA Astrophysics Data System (ADS)

    Simonis, I.; Alameh, N.; Percivall, G.

    2012-04-01

    The GEOSS Architecture Implementation Pilots (AIP) develop and pilot new process and infrastructure components for the GEOSS Common Infrastructure (GCI) and the broader GEOSS architecture through an evolutionary development process consisting of a set of phases. Each phase addresses a set of Societal Benefit Areas (SBA) and geoinformatic topics. The first three phases consisted of architecture refinements based on interactions with users; component interoperability testing; and SBA-driven demonstrations. The fourth phase (AIP-4) documented here focused on fostering interoperability arrangements and common practices for GEOSS by facilitating access to priority earth observation data sources and by developing and testing specific clients and mediation components to enable such access. Additionally, AIP-4 supported the development of a thesaurus for earth observation parameters and tutorials to guide data providers in making their data available through GEOSS. The results of AIP-4 are documented in two engineering reports and captured in a series of videos posted online. Led by the Open Geospatial Consortium (OGC), AIP-4 built on contributions from over 60 organizations. This wide portfolio helped test interoperability arrangements in a highly heterogeneous environment. AIP-4 participants cooperated closely to test available data sets, access services, and client applications in multiple workflows and setups. Ultimately, AIP-4 improved the accessibility of GEOSS datasets identified as supporting Critical Earth Observation Priorities by the GEO User Interface Committee (UIC), and increased the use of the data by promoting the availability of new data services, clients, and applications. During AIP-4, a number of key earth observation data sources were made available online at standard service interfaces, discovered using brokered search approaches, and processed and visualized in generalized client applications.
AIP-4 demonstrated the level of interoperability that can be achieved using currently available standards and corresponding products and implementations. The AIP-4 integration testing process proved that the integration of heterogeneous data resources available via interoperability arrangements such as WMS, WFS, WCS and WPS indeed works. However, the integration often required various levels of customization on the client side to accommodate variations in the service implementations. Those variations seem to stem from both malfunctioning service implementations and varying interpretations of, or inconsistencies in, existing standards. Other interoperability issues identified revolve around missing metadata or the use of unrecognized identifiers in the description of GEOSS resources. Once such issues are resolved, continuous compliance testing is necessary to minimize variability among implementations. Once data providers can choose from a set of enhanced implementations for offering their data using consistent interoperability arrangements, the barrier to client and decision support implementation developers will be lowered, leading to true leveraging of earth observation data through GEOSS. AIP-4 results, lessons learnt from previous AIPs 1-3 and close coordination with the Infrastructure Implementation Board (IIB), the successor of the Architecture and Data Committee (ADC), form the basis in the current preparation phase for the next Architecture Implementation Pilot, AIP-5. The Call For Participation will be launched in February and the pilot will be conducted from May to November 2012. The current planning foresees a scenario-oriented approach, with possible scenarios coming from the domains of disaster management, health (including air quality and waterborne diseases), water resource observations, energy, biodiversity and climate change, and agriculture.

  1. Enabling Exploration Through Docking Standards

    NASA Technical Reports Server (NTRS)

    Hatfield, Caris A.

    2012-01-01

    Human exploration missions beyond low earth orbit will likely require international cooperation in order to leverage limited resources. International standards can help enable cooperative missions by providing well understood, predefined interfaces allowing compatibility between unique spacecraft and systems. The International Space Station (ISS) partnership has developed a publicly available International Docking System Standard (IDSS) that provides a solution to one of these key interfaces by defining a common docking interface. The docking interface provides a way for even dissimilar spacecraft to dock for exchange of crew and cargo, as well as enabling the assembly of large space systems. This paper provides an overview of the key attributes of the IDSS, an overview of the NASA Docking System (NDS), and the plans for updating the ISS with IDSS compatible interfaces. The NDS provides a state of the art, low impact docking system that will initially be made available to commercial crew and cargo providers. The ISS will be used to demonstrate the operational utility of the IDSS interface as a foundational technology for cooperative exploration.

  2. TangibleCubes — Implementation of Tangible User Interfaces through the Usage of Microcontroller and Sensor Technology

    NASA Astrophysics Data System (ADS)

    Setscheny, Stephan

    The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and the keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and sensors for the detection of human interactions, developers are gaining new possibilities for realising innovative interfaces. With this movement, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. The impact of this technical evolution can be seen especially in the area of ubiquitous computing and interaction through tangible user interfaces. Moreover, tangible and experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The implementation of microcontrollers for control functions and of sensors enables the realisation of these experienceable interfaces. Besides the theory of tangible user interfaces, consideration of sensors and the Arduino platform forms a main aspect of this work.

  3. A Novel Multilayer Correlation Maximization Model for Improving CCA-Based Frequency Recognition in SSVEP Brain-Computer Interface.

    PubMed

    Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu

    2018-05-01

    Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization processes. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with sine-cosine reference signals again. An experimental study is conducted to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
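
    The sine-cosine reference signals of the first layer encode the core idea of SSVEP frequency recognition: a flickering stimulus at frequency f evokes an EEG component at f, so correlating the recording against references at each candidate frequency reveals which target the user attends to. Full CCA and MsetCCA solve multichannel eigenproblems; the single-channel, Pearson-correlation sketch below on a synthetic signal only illustrates the reference-template concept, not the paper's algorithm:

```python
# Sketch of the reference-signal idea behind SSVEP recognition:
# correlate a signal against sine/cosine templates at each candidate
# stimulus frequency and pick the best match. Real CCA/MsetCCA solve
# eigenproblems over multichannel EEG; this single-channel Pearson
# version on a synthetic signal only illustrates the concept.
import math

FS = 250.0                             # sampling rate in Hz (illustrative)
N = 500                                # two seconds of samples

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def score(signal, freq):
    """Best absolute correlation against sine and cosine templates at freq."""
    t = [n / FS for n in range(len(signal))]
    sin_ref = [math.sin(2 * math.pi * freq * ti) for ti in t]
    cos_ref = [math.cos(2 * math.pi * freq * ti) for ti in t]
    return max(abs(pearson(signal, sin_ref)), abs(pearson(signal, cos_ref)))

# Synthetic 10 Hz "SSVEP" with an arbitrary phase offset.
signal = [math.sin(2 * math.pi * 10.0 * n / FS + 0.8) for n in range(N)]
best = max([8.0, 10.0, 12.0, 15.0], key=lambda f: score(signal, f))
```

    Using both sine and cosine templates makes the score insensitive to the unknown phase of the response, which is why sine-cosine pairs (and their harmonics) are the standard reference set.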

  4. Rapid prototyping of SoC-based real-time vision system: application to image preprocessing and face detection

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    The major goal of this paper is to investigate the Multi-CPU/FPGA SoC (System on Chip) design flow and to transfer know-how and skills for rapidly designing embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take facial detection and pretreatments as a case study, since they have great potential to be used in several applications such as video surveillance, building access control and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via the USB interface or several IP camera devices. Visualization of video content and intermediate results is possible with the HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing, such as edge detection, implemented in the ARM and in the reconfigurable logic; (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern); and (iii) an application layer to select the processing application and to display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, with the FPGA implementation, an acceleration of 5 times is obtained, which allows the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
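
    The Sobel filter benchmarked above is a fixed pair of 3x3 convolutions, which is exactly why it maps so well to FPGA logic. As a reference for the operator itself (not the paper's hardware implementation), a plain-Python sketch using the common |Gx| + |Gy| gradient-magnitude approximation:

```python
# The Sobel edge operator: convolve 3x3 horizontal/vertical gradient
# kernels over a grayscale image and take |Gx| + |Gy| as the
# gradient-magnitude approximation. Border pixels are left at zero.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the response peaks along the discontinuity.
img = [[0, 0, 9, 9]] * 4
edges = sobel(img)
```

    Every output pixel depends only on a fixed 3x3 window, so the whole operator pipelines into a line-buffered FPGA datapath with one result per clock, which is the source of the 5x speedup reported above.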

  5. TRU waste lead organization -- WIPP Project Office Interface Management semi-annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, J.V.; Gorton, J.M.

    1985-05-01

    The Charter establishing the Interface Control Board and the administrative organization to manage the interface between the TRU Waste Lead Organization and the WIPP Project Office also requires preparation of a "summary report describing significant interface activities." This report includes a discussion of Interface Working Group (IWG) recommendations and resolutions "considered and implemented" over the reporting period October 1984 to March 1985.

  6. The Identification, Implementation, and Evaluation of Critical User Interface Design Features of Computer-Assisted Instruction Programs in Mathematics for Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Seo, You-Jin; Woo, Honguk

    2010-01-01

    Critical user interface design features of computer-assisted instruction programs in mathematics for students with learning disabilities and corresponding implementation guidelines were identified in this study. Based on the identified features and guidelines, a multimedia computer-assisted instruction program, "Math Explorer", which delivers…

  7. Project INTERFACE: Identification of Effective Implementation Strategies for Integrating Microcomputer Instruction into Ongoing Educational Services for the Handicapped. Final Report, 1984-86.

    ERIC Educational Resources Information Center

    Shaw, Estelle; And Others

    The monograph describes Project INTERFACE, a 2-year collaborative effort among the Board of Cooperative Educational Services (BOCES) of Nassau County (New York), Long Island University, and three local school districts. The project identified the "most effective" implementation strategies for integrating microcomputer instruction into…

  8. Efficient color display using low-absorption in-pixel color filters

    NASA Technical Reports Server (NTRS)

    Wang, Yu (Inventor)

    2000-01-01

    A display system having a non-absorbing and reflective color filtering array and a reflector to improve light utilization efficiency. One implementation of the color filtering array uses a surface plasmon filter having two symmetric metal-dielectric interfaces coupled with each other to produce a transmission optical wave at a surface plasmon resonance wavelength at one interface from a p-polarized input beam on the other interface. Another implementation of the color filtering array uses a metal-film interference filter having two dielectric layers and three metallic films.

  9. A Lifecycle Approach to Brokered Data Management for Hydrologic Modeling Data Using Open Standards.

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Booth, N.; Kunicki, T.; Walker, J.

    2012-12-01

    The U.S. Geological Survey Center for Integrated Data Analytics has formalized an information-management architecture to facilitate hydrologic modeling and subsequent decision support throughout a project's lifecycle. The architecture is based on open standards and open-source software to decrease the adoption barrier and to build on existing, community-supported software. The components of this system have been developed and evaluated to support data management activities of the interagency Great Lakes Restoration Initiative, the Department of the Interior's Climate Science Centers, and the WaterSMART National Water Census. Much of the research and development of this system has been in cooperation with international interoperability experiments conducted within the Open Geospatial Consortium. Community-developed standards and software, implemented to meet the unique requirements of specific disciplines, are used as a system of interoperable, discipline-specific data types and interfaces. This approach has allowed adoption of existing software that satisfies the majority of system requirements. Four major features of the system include: 1) assistance in model parameter and forcing creation from large enterprise data sources; 2) conversion of model results and calibrated parameters to standard formats, making them available via standard web services; 3) tracking a model's processes, inputs, and outputs as a cohesive metadata record, allowing provenance tracking via reference to web services; and 4) generalized decision support tools that rely on a suite of standard data types and interfaces, rather than on particular manually curated model-derived datasets. Recent progress made in data and web service standards related to sensor and/or model derived station time series, dynamic web processing, and metadata management are central to this system's function and will be presented briefly along with a functional overview of the applications that make up the system.
As the separate pieces of this system progress, they will be combined and generalized to form a sort of social network for nationally consistent hydrologic modeling.

  10. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
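The control and description functions described above can be illustrated with a toy BMI-style component in Python. The method names follow the general pattern of the CSDMS Basic Model Interface (initialize/update/finalize plus variable queries), but the "leaky bucket" model and its parameters are purely illustrative:

```python
# Minimal sketch of a BMI-style component: control functions let a
# framework drive the model; description functions let it query the
# model's variables without knowing its internals. The toy model and
# its variable names are illustrative assumptions.

class LeakyBucketBMI:
    """A trivial hydrologic store exposed through BMI-like functions."""

    # --- control functions: initialize, advance in time, finalize ---
    def initialize(self, k=0.1, storage=10.0):
        self._k = k              # outflow rate constant [1/day]
        self._storage = storage  # water in store [mm]
        self._time = 0.0

    def update(self):
        outflow = self._k * self._storage
        self._storage -= outflow
        self._time += 1.0

    def finalize(self):
        pass

    # --- description functions: let a caller query the model ---
    def get_input_var_names(self):
        return ("atmosphere_water__precipitation_volume_flux",)

    def get_output_var_names(self):
        return ("soil_water__baseflow_volume_flux",)

    def get_value(self, name):
        if name == "soil_water__baseflow_volume_flux":
            return self._k * self._storage
        raise KeyError(name)

    def get_current_time(self):
        return self._time


# A framework-style caller drives the model only through the interface:
model = LeakyBucketBMI()
model.initialize(k=0.5, storage=8.0)
for _ in range(3):
    model.update()
print(model.get_current_time())  # 3.0
model.finalize()
```

A coupling framework can call `get_output_var_names()` on one component and `get_input_var_names()` on another, match the names semantically, and wire `get_value` calls between them, which is exactly the automated coupling the abstract describes.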

  11. The cancer precision medicine knowledge base for structured clinical-grade mutations and interpretations.

    PubMed

    Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael; Elemento, Olivier

    2017-05-01

    This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu ), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB's interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
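The core of the data model described above, variants in HGVS-style notation linked to tumor- and tissue-specific interpretations, can be sketched in a few lines. The field names and the example interpretation below are illustrative, not PMKB's actual schema:

```python
# Illustrative sketch of a variant-to-interpretation data model of the
# kind the abstract describes. Field names and the sample content are
# hypothetical, not PMKB's real schema or database contents.
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    tumor_type: str
    tissue_type: str
    tier: int   # clinical-significance tier (assumed field)
    text: str

@dataclass
class Variant:
    gene: str
    hgvs: str   # HGVS-style protein-level description
    interpretations: list = field(default_factory=list)

# One well-known variant, with a tumor/tissue-specific interpretation:
egfr_l858r = Variant(gene="EGFR", hgvs="p.Leu858Arg")
egfr_l858r.interpretations.append(Interpretation(
    tumor_type="adenocarcinoma",
    tissue_type="lung",
    tier=1,
    text="Associated with sensitivity to EGFR tyrosine kinase inhibitors.",
))

# A reporting pipeline can then select interpretations by tumor/tissue:
matches = [i for i in egfr_l858r.interpretations
           if i.tumor_type == "adenocarcinoma" and i.tissue_type == "lung"]
print(len(matches))  # 1
```

The REST API mentioned in the abstract would expose this same lookup programmatically, which is what enables the automated reporting pipelines the authors describe.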

  12. Installation of the National Transport Code Collaboration Data Server at the ITPA International Multi-tokamak Confinement Profile Database

    NASA Astrophysics Data System (ADS)

    Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.

    2002-11-01

    The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly graphical interface to the data server. The uniformity of the interface relieves users of the trouble of mastering the differences between data formats and lets them focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java-capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages, and many high-quality implementations are available (both open source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall; to make it accessible to clients outside the firewall, some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is performed using built-in Java security features to demand a password before downloading the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how user authentication is implemented.

  13. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong

    2015-08-01

    Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
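The FBCCA recipe described above (sub-band decomposition, CCA against sinusoidal references at each candidate frequency, then a weighted combination of the squared sub-band correlations) can be sketched in Python. The filter design and the weight function used below follow the general FBCCA recipe, but the specific parameters are illustrative assumptions, not the paper's exact values:

```python
# Sketch of FBCCA target identification. Sub-band edges, filter order,
# and the weights w(n) = n**(-a) + b are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def references(freq, fs, n_samples, n_harmonics=3):
    """Sine/cosine reference signals at a frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)])

def fbcca_score(eeg, freq, fs, n_bands=5, a=1.25, b=0.25):
    """eeg: (n_samples, n_channels). Sub-band n spans [8*n, 88] Hz here,
    an M1-style assumption for illustration."""
    score = 0.0
    for n in range(1, n_bands + 1):
        bc, ac = butter(4, [8.0 * n, 88.0], "bandpass", fs=fs)
        sub = filtfilt(bc, ac, eeg, axis=0)
        rho = cca_max_corr(sub, references(freq, fs, len(eeg)))
        score += (n ** -a + b) * rho ** 2
    return score

# Toy check: a noisy 10 Hz SSVEP-like signal should score highest at 10 Hz.
fs, T = 250, 2.0
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(0)
eeg = np.column_stack(
    [np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
     for _ in range(4)])
best = max([8.0, 10.0, 12.0], key=lambda f: fbcca_score(eeg, f, fs))
print(best)  # 10.0
```

The target frequency is simply the candidate with the largest combined score, which is how the 40-target speller maps an EEG epoch to a character.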

  14. The cancer precision medicine knowledge base for structured clinical-grade mutations and interpretations

    PubMed Central

    Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael

    2017-01-01

    Objective: This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. Materials and Methods: PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. Results: At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB’s interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. Discussion: An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. Conclusion: The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. PMID:27789569

  15. User Interface Composition with COTS-UI and Trading Approaches: Application for Web-Based Environmental Information Systems

    NASA Astrophysics Data System (ADS)

    Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés

    Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real time. In recent times, special interest has been given to the globalization of information through a common vocabulary (i.e., ontologies) and to the standardized way in which information is retrieved on the Web (i.e., powerful search engines and intelligent software agents). These same principles of globalization and standardization should also apply to the user interfaces of WIS, but these are still built on traditional development paradigms. In this paper we present an approach to reduce this globalization/standardization gap in the generation of WIS user interfaces by using a real-time "bottom-up" composition perspective with COTS-interface components (interface widgets) and trading services.

  16. Enabling the development of Community Extensions to GI-cat - the SIB-ESS-C case study

    NASA Astrophysics Data System (ADS)

    Bigagli, L.; Meier, N.; Boldrini, E.; Gerlach, R.

    2009-04-01

    GI-cat is a Java software package that implements discovery and access services for disparate geospatial resources. An instance of GI-cat provides a single point of service for querying and accessing remote, as well as local, heterogeneous sources of geospatial information, either through standard interfaces or by taking advantage of GI-cat advanced features, such as incremental responses, query feedback, etc. GI-cat supports a number of de jure and de facto standards, but can also be extended to additional community catalog/inventory services by defining appropriate mediation components. The GI-cat and SIB-ESS-C development teams collaborated in the development of a mediator to the Siberian Earth Science System Cluster (SIB-ESS-C), a web-based infrastructure to support the communities of environmental and Earth System research in Siberia. This activity resulted in the identification of appropriate technologies and internal mechanisms supporting the development of GI-cat extensions, which are the object of this work. GI-cat is built as a modular framework of SOA components that can be variously arranged to fit the needs of a community of users. For example, a particular GI-cat instance may be configured to provide discovery functionalities on top of an OGC WMS, to adapt a THREDDS catalog to the standard OGC CSW interface, or to merge a number of CDI repositories into a single, more efficient catalog. The flexibility of the GI-cat framework is achieved thanks to its design, which follows the Tree of Responsibility (ToR) pattern and the Uniform Pipe and Filter architectural style. This approach allows the building of software blocks that can be flexibly reused and composed in multiple ways. In fact, the components that make up any GI-cat configuration all implement two common interfaces (i.e., IChainNode and ICatalogService) that support chaining one component to another.
Hence, it suffices to implement those interfaces (plus an appropriate factory class, the mechanism used to create GI-cat components) to support a custom community catalog/inventory service in GI-cat. In general, all the terminal nodes of a GI-cat configuration chain are in charge of mediating between the GI-cat common interfaces and a backend, so we implemented a default behavior in an abstract class, termed Accessor, to make it easier to subclass. Moreover, we identified several typical backend scenarios and provided specialized Accessor subclasses that are even simpler to implement. For example, in the case of a coarse-grained backend service that returns its data all at once, a specialized Accessor can retrieve the whole content the first time and subsequently browse/query the local copy of the data. This was the approach followed for the development of the SibesscAccessor. The SIB-ESS-C case study is also notable because it requires mediating between the relational and the semi-structured data models. In fact, SIB-ESS-C data are stored in a relational database to provide performant access even to huge amounts of data. The SibesscAccessor is in charge of establishing a JDBC connection to the database, reading the data by means of SQL statements, creating Java objects according to the ISO 19115 data model, and marshalling the resulting information to an XML document. During the implementation of the SibesscAccessor, the mix of technologies and deployment environments and the geographical distribution of the development teams turned out to be important issues. To solve them, we relied on technologies and tools for collaborative software development: the Maven build system, the SVN version control system, the XPlanner project planning and tracking tool, and of course VoIP tools.
Moreover, we shipped the Accessor Development Kit (ADK), a Java library containing the classes needed to extend GI-cat to custom community catalog/inventory services, together with other supporting material (documentation, best practices, examples). The ADK is distributed as a Maven artifact to simplify dependency management and ease the common tasks of testing, packaging, etc. The SibesscAccessor was the first custom addition to the set of GI-cat accessors; later, the so-called Standard Accessors library was also refactored onto the ADK. The SIB-ESS-C case study also gave us the opportunity to refine our policies for collaborative software development, and several improvements were made to the overall GI-cat data model and framework. Finally, the SIB-ESS-C development team developed a GI-cat web client by means of Web 2.0 technologies (JavaScript, XML, HTML, CSS, etc.). The client can easily be integrated into any HTML context on any web page. The web GUI allows the user to define requests to GI-cat by entering parameter strings and/or selecting an area of interest on a map. The client sends its requests to GI-cat via SOAP over HTTP POST, parses GI-cat's SOAP responses, and presents user-friendly information on a web page.
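The "coarse-grained backend" accessor behavior described in this record can be sketched as follows; the class and method names are illustrative (GI-cat's actual ADK is a Java library with its own API), but the pattern is the one the abstract describes: fetch the whole backend content on first use, then serve queries from the local copy.

```python
# Illustrative sketch of the Accessor pattern for a coarse-grained
# backend: the base class fixes the common query interface, and the
# subclass caches the backend's full content after the first call.
# Names are hypothetical, not GI-cat's real classes.
from abc import ABC, abstractmethod

class Accessor(ABC):
    """Mediates between a common catalog interface and a backend."""
    @abstractmethod
    def query(self, keyword):
        ...

class CoarseGrainedAccessor(Accessor):
    """For backends that return all their records at once: pull the
    whole content on first use, then browse/query the local copy."""
    def __init__(self, fetch_all):
        self._fetch_all = fetch_all  # callable that hits the backend
        self._cache = None

    def query(self, keyword):
        if self._cache is None:      # first query: retrieve everything
            self._cache = list(self._fetch_all())
        return [r for r in self._cache if keyword in r["title"]]

# Toy backend standing in for a remote service; count its invocations:
calls = []
def backend():
    calls.append(1)
    return [{"title": "Siberian permafrost dataset"},
            {"title": "Lake Baikal bathymetry"}]

acc = CoarseGrainedAccessor(backend)
print(len(acc.query("Siberian")))  # 1
acc.query("Baikal")                # served from the cache
print(len(calls))                  # backend was hit only once: 1
```

The same separation (common interface in a base class, backend specifics in a subclass) is what lets new community services plug into the catalog chain without touching the rest of the framework.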

  17. Ultra-low power high-dynamic range color pixel embedding RGB to r-g chromaticity transformation

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Gasparini, Leonardo; Gottardi, Massimo

    2014-05-01

    This work describes a novel color pixel topology that converts the three chromatic components from the standard RGB space into the normalized r-g chromaticity space. The conversion is implemented with high dynamic range and with no DC power consumption, and the auto-exposure capability of the sensor ensures the capture of a high-quality chromatic signal, even in the presence of very bright illuminants or in darkness. The pixel is intended to become the basic building block of a CMOS color vision sensor targeted at ultra-low-power applications for mobile devices, such as human-machine interfaces, gesture recognition, and face detection. The experiments show significant improvements of the proposed pixel with respect to standard cameras in terms of energy saving and accuracy of data acquisition. An application to skin-color-based description is presented.
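The RGB to r-g chromaticity conversion the pixel implements is a simple per-pixel normalization: each channel is divided by the total intensity, which factors out illumination strength and leaves only chromatic information. A software sketch of the transform (the sensor computes it in mixed-signal hardware) is:

```python
# r-g chromaticity: normalize each channel by the total intensity.
# The blue coordinate is redundant, since b/s = 1 - r/s - g/s.
def rgb_to_rg(r, g, b):
    s = r + g + b
    if s == 0:             # pure black: chromaticity undefined, use 0
        return 0.0, 0.0
    return r / s, g / s

# Scaling all channels (e.g. dimming the light) leaves (r, g) unchanged,
# which is why the representation is useful for skin-color description:
print(rgb_to_rg(200, 100, 100))  # (0.5, 0.25)
print(rgb_to_rg(100, 50, 50))    # (0.5, 0.25)
```

This illumination invariance is what makes the normalized space attractive for the skin-color application mentioned in the abstract.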

  18. System considerations, projected requirements and applications for aeronautical mobile satellite communications for air traffic services

    NASA Technical Reports Server (NTRS)

    Mcdonald, K. D.; Miller, C. M.; Scales, W. C.; Dement, D. K.

    1990-01-01

    The projected application and requirements in the near term (to 1995) and far term (to 2010) for aeronautical mobile services supporting air traffic control operations are addressed. The implications of these requirements on spectrum needs, and the resulting effects on the satellite design and operation are discussed. The U.S. is working with international standards and regulatory organizations to develop the necessary aviation standards, signalling protocols, and implementation methods. In the provision of aeronautical safety services, a number of critical issues were identified, including system reliability and availability, access time, channel restoration time, interoperability, pre-emption techniques, and the system network interfaces. Means for accomplishing these critical services in the aeronautical mobile satellite service (AMSS), and the various activities relating to the future provision of aeronautical safety services are addressed.

  19. System considerations, projected requirements and applications for aeronautical mobile satellite communications for air traffic services

    NASA Astrophysics Data System (ADS)

    McDonald, K. D.; Miller, C. M.; Scales, W. C.; Dement, D. K.

    The projected application and requirements in the near term (to 1995) and far term (to 2010) for aeronautical mobile services supporting air traffic control operations are addressed. The implications of these requirements on spectrum needs, and the resulting effects on the satellite design and operation are discussed. The U.S. is working with international standards and regulatory organizations to develop the necessary aviation standards, signalling protocols, and implementation methods. In the provision of aeronautical safety services, a number of critical issues were identified, including system reliability and availability, access time, channel restoration time, interoperability, pre-emption techniques, and the system network interfaces. Means for accomplishing these critical services in the aeronautical mobile satellite service (AMSS), and the various activities relating to the future provision of aeronautical safety services are addressed.

  20. PearlTrees web-based interface for teaching informatics in the radiology residency

    NASA Astrophysics Data System (ADS)

    Licurse, Mindy Y.; Cook, Tessa S.

    2014-03-01

    Radiology and imaging informatics education have rapidly evolved over the past few decades. With the increasing recognition that the future growth and maintenance of radiology practices will rely heavily on radiologists with fundamentally sound informatics skills, the onus falls on radiology residency programs to properly implement and execute an informatics curriculum. In addition, the American Board of Radiology may choose to include even more informatics on the new board examinations. However, the resources available for didactic teaching and guidance, especially at the introductory level, are widespread and varied. Given the breadth of informatics, a centralized web-based interface designed to serve as an adjunct to standardized informatics curricula, as well as a stand-alone resource for other interested audiences, is desirable. We present the development of a curriculum using PearlTrees, an existing web interface based on the concept of a visual interest graph that allows users to collect, organize, and share any URL they find online, as well as to upload photos and other documents. For our purpose, the group of "pearls" comprises informatics concepts linked by appropriate hierarchical relationships. The curriculum was developed using a combination of our institution's current informatics fellowship curriculum, the Practical Imaging Informatics textbook, and other useful online resources. After the initial interface and curriculum have been publicized, we anticipate that involvement by the informatics community will help promote collaborations and foster mentorships at all career levels.

  1. Modular Open Network ARCHitecture (MONARCH): Transitioning plug-and-play to aerospace

    NASA Astrophysics Data System (ADS)

    Martin, M.; Lyke, J.

    The Air Force Research Laboratory (AFRL) developed an initial plug-and-play (PnP) capability for spacecraft, similar to USB on personal computers, which better defines hardware and software interfaces and incorporates self-discovery and auto-configuration in order to simplify spacecraft development and reduce cost and schedule. PnP technology was matured through a suborbital PnP flight experiment in September 2007 and a secondary Spacecraft Avionics Experiment (SAE) payload on the TacSat-3 satellite, which launched in May 2009. AFRL developed and submitted a complete set of PnP standards through the American Institute of Aeronautics and Astronautics (AIAA) in 2011. Space electronics to adapt existing satellite components and implement full PnP on satellites in accordance with these AFRL standards were independently developed in alternate hardware implementations by Goodrich Corp. under AFRL and by Northrop Grumman under Operationally Responsive Space (ORS). In 2011, AFRL conducted a cost-benefit analysis of PnP and assembled a collaborative review board (CRB) in September 2011 to evaluate PnP. This CRB comprised representatives from the Space and Missile Systems Center (SMC), the National Reconnaissance Office (NRO), the Naval Research Laboratory (NRL), the Johns Hopkins University (JHU) Applied Physics Laboratory (APL), The Aerospace Corporation, and several large commercial and DOD satellite developers. This CRB laid out a transition path to develop and implement PnP standards in large (> 1000 kg) DOD and commercial satellites. Transition of PnP technology into operational systems continues in PnP architecture studies for SMC, PnP products from multiple space industry vendors, commercial implementations of PnP, and the Northrop Grumman ORS-2 spacecraft, currently projected to fly in 2014-2015.
This paper provides details related to the development of PnP technology, AFRL's cost-benefit analysis of PnP, recommendations of the PnP CRB, and ongoing efforts to mature and fly PnP technology.

  2. Virtual optical interfaces for the transportation industry

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Kress, Bernard

    2010-04-01

    We present a novel implementation of virtual optical interfaces for the transportation industry (automotive and avionics). This new implementation combines two functionalities in a single device: projection of a virtual interface and sensing of the position of the fingers on top of the virtual interface. Both functionalities are produced by diffraction of laser light. The device we are developing includes both functionalities in a compact package that has no optical elements to align, since all of them are pre-aligned on a single glass wafer through optical lithography. The package contains a CMOS sensor whose diffractive objective lens is optimized both for the projected interface color and for the IR finger-position sensor based on structured illumination. Two versions are proposed: one that senses the 2D position of the hand and one that senses the hand position in 3D.

  3. Plugging Into GEOSS - A Data Center Takes the Leap

    NASA Astrophysics Data System (ADS)

    Khalsa, S. S.; Weaver, R. L.; Duerr, R. E.; Shaw, A.

    2008-12-01

    The data sets managed and distributed by the National Snow and Ice Data Center (NSIDC) in Boulder, Colorado are accessible through a variety of interfaces: custom web services; WIST, the NASA EOS Data System interface; and simple FTP. The Global Earth Observation System of Systems (GEOSS) offers the potential to make our data visible and accessible in the context of a much larger and more widely available system. But what does a data center have to do to tie into this larger system? What are the optimal data formats and protocols that should be maintained? What metadata standards and services should we sustain in order to maximize the visibility of our data? How will our holdings in existing catalogs be harvested by GEOSS? We address these questions through a pilot study that we report on in this paper. On June 2, 2008 the Group on Earth Observations (GEO) announced that the GEOSS Common Infrastructure (GCI) was "open for business," and that this Initial Operating Capability (IOC) was beginning a 1-year testing and evaluation period. The purpose of the IOC is two-fold: first, to encourage Earth observation providers to populate GEOSS by registering their data sets, services, and other components; and second, to allow the global community to use, evaluate, and thereby improve the GCI. NSIDC is contributing to both objectives. The GEOSS 10-Year Implementation Plan specifies, at a very high level, recommended standards for connectivity for services, data, and metadata. GEO has also published Tactical and Strategic Guidance Documents to help data providers like NSIDC decide how to proceed in becoming active participants in GEOSS. GEOSS and NSIDC are both adopting many of the OGC standards as their respective systems evolve. But how well do the OGC implementations of these two entities mesh? What are the gaps, and which currently less well developed yet critical-path standards require work?
We describe our experiences in registering several data sets with differing levels and types of associated services. We review the GEOSS effort, study its published requirements and standards, assess how well they mesh with NSIDC's metadata and data distribution systems, and then describe our experiences in making our data and services available via the GCI.

  4. Syringe-Injectable Electronics with a Plug-and-Play Input/Output Interface.

    PubMed

    Schuhmann, Thomas G; Yao, Jun; Hong, Guosong; Fu, Tian-Ming; Lieber, Charles M

    2017-09-13

    Syringe-injectable mesh electronics represent a new paradigm for brain science and neural prosthetics by virtue of the stable, seamless integration of the electronics with neural tissues, a consequence of the macroporous mesh electronics structure with all size features similar to or smaller than individual neurons and tissue-like flexibility. These same properties, however, make input/output (I/O) connection to measurement electronics challenging, and work to date has required methods that could be difficult to implement by the life sciences community. Here we present a new syringe-injectable mesh electronics design with plug-and-play I/O interfacing that is rapid, scalable, and user-friendly to nonexperts. The basic design tapers the ultraflexible mesh electronics to a narrow stem that routes all of the device/electrode interconnects to I/O pads that are inserted into a standard zero insertion force (ZIF) connector. Studies show that the entire plug-and-play mesh electronics can be delivered through capillary needles with precise targeting using microliter-scale injection volumes similar to the standard mesh electronics design. Electrical characterization of mesh electronics containing platinum (Pt) electrodes and silicon (Si) nanowire field-effect transistors (NW-FETs) demonstrates the ability to interface arbitrary devices with a contact resistance of only 3 Ω. Finally, in vivo injection into mice required only minutes for I/O connection and yielded expected local field potential (LFP) recordings from a compact head-stage compatible with chronic studies. Our results substantially lower barriers for use by new investigators and open the door for increasingly sophisticated and multifunctional mesh electronics designs for both basic and translational studies.

  5. Distributed Multi-interface Catalogue for Geospatial Data

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Bigagli, L.; Mazzetti, P.; Mattia, U.; Boldrini, E.

    2007-12-01

    Several geosciences communities (e.g. atmospheric science, oceanography, hydrology) have developed tailored data and metadata models and service protocol specifications for enabling online data discovery, inventory, evaluation, access and download. These specifications are conceived either by profiling geospatial information standards or by extending well-accepted geosciences data models and protocols in order to capture more semantics. These artifacts have generated a set of related catalog and inventory services characterizing different communities, initiatives and projects. In fact, these geospatial data catalogs are discovery and access systems that use metadata as the target for queries on geospatial information. The indexed and searchable metadata provide a disciplined vocabulary against which intelligent geospatial search can be performed within or among communities. There exists a clear need to conceive and achieve solutions to implement interoperability among geosciences communities, in the context of the more general geospatial information interoperability framework. Such solutions should provide search and access capabilities across catalogs, inventory lists and their registered resources. Thus, the development of catalog clearinghouse solutions is a near-term challenge in support of fully functional and useful infrastructures for spatial data (e.g. INSPIRE, GMES, NSDI, GEOSS). This implies the implementation of components for query distribution and virtual resource aggregation. These solutions must implement distributed discovery functionalities in a heterogeneous environment, requiring metadata profile harmonization as well as protocol adaptation and mediation. We present a catalog clearinghouse solution for the interoperability of several well-known cataloguing systems (e.g. OGC CSW, THREDDS catalog and data services).
The solution implements consistent resource discovery and evaluation over a dynamic federation of several well-known cataloguing and inventory systems. Prominent features include: 1) support for distributed queries over a hierarchical data model, including incremental queries (i.e. queries over collections, to be subsequently refined) and opaque/translucent chaining; 2) support for several client protocols, through a compound front-end interface module. This makes it possible to accommodate a (growing) number of cataloguing standards, or profiles thereof, including the OGC CSW interface, the ebRIM Application Profile (for Core ISO Metadata and other data models), and the ISO Application Profile. The presented catalog clearinghouse supports both the opaque and the translucent pattern for service chaining: it may be configured either to completely hide the underlying federated services or to provide clients with service information. In both cases, the clearinghouse presents a higher-level interface (i.e. OGC CSW) which harmonizes multiple lower-level services (e.g. OGC CSW, WMS and WCS, THREDDS, etc.) and handles all control of, and interaction with, them. In the translucent case, the client has the option to directly access the lower-level services (e.g. to improve performance). In the GEOSS context, the solution has been tested both as a stand-alone user application and as a service framework. The first scenario allows a user to download a multi-platform client application and query a federation of cataloguing systems, which the user can customize at will. The second scenario supports server-side deployment and can be flexibly adapted to several use cases, such as intranet proxy, catalog broker, etc.
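
    The query-distribution component described above can be sketched in miniature: a mediator fans a common query out to protocol-specific catalog adapters and merges the normalized results, deduplicating records that arrive from more than one catalog. The adapter, record, and holdings names below are hypothetical stand-ins for the real CSW/THREDDS bindings.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Record:
    catalog: str
    identifier: str
    title: str

def make_adapter(name: str, holdings: list) -> Callable:
    """Simulate a protocol-specific catalog: match a keyword against titles."""
    def search(keyword: str) -> list:
        return [Record(name, ident, title)
                for ident, title in holdings
                if keyword.lower() in title.lower()]
    return search

def distributed_query(keyword: str, adapters: list) -> list:
    """Fan the query out to every federated catalog and merge the results,
    dropping duplicates reported by more than one catalog."""
    seen, merged = set(), []
    for search in adapters:
        for rec in search(keyword):
            if (rec.catalog, rec.identifier) not in seen:
                seen.add((rec.catalog, rec.identifier))
                merged.append(rec)
    return merged

csw = make_adapter("CSW", [("c1", "Sea ice extent"), ("c2", "Snow cover")])
thredds = make_adapter("THREDDS", [("t1", "Sea ice motion vectors")])
results = distributed_query("sea ice", [csw, thredds])
print([r.identifier for r in results])  # → ['c1', 't1']
```

    A real clearinghouse would additionally translate each backend's metadata profile into the common model before merging, which is where the profile harmonization discussed above comes in.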

  6. Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
    Program summary
    Program title: Phoogle-C/Phoogle-G
    Catalogue identifier: AEEB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 51 264
    No. of bytes in distributed program, including test data, etc.: 2 238 805
    Distribution format: tar.gz
    Programming language: C++
    Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
    Operating system: Windows XP
    Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
    RAM: 1 GB
    Classification: 21.1
    External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]
    Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the paths of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures.
    Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing.
    Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media, to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA.
    Restrictions: The graphics card implementation uses single-precision floating-point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface; however, some properties can only be accessed through an ActiveX client (such as Matlab).
    Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence.
    Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
    References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
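
    The core of the photon-transport kernel the record describes is the sampling of exponentially distributed step lengths between scattering events. A minimal single-threaded sketch of that step (not the Phoogle code itself; the function name and parameters are illustrative) is:

```python
import math
import random

def mean_free_path(mu_t: float, n_photons: int, seed: int = 0) -> float:
    """Estimate the photon mean free path in a homogeneous turbid medium by
    Monte Carlo: step lengths are drawn from the exponential distribution
    s = -ln(xi) / mu_t implied by Beer-Lambert attenuation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_photons):
        xi = 1.0 - rng.random()            # uniform on (0, 1], avoids log(0)
        total += -math.log(xi) / mu_t
    return total / n_photons

# With mu_t = 10 /cm the analytic mean free path is 0.1 cm; the seeded
# estimate converges toward it as the photon count grows.
estimate = mean_free_path(mu_t=10.0, n_photons=100_000)
print(abs(estimate - 0.1) < 0.005)  # → True
```

    Because each photon's trajectory is independent, this inner loop parallelizes trivially, which is exactly why the GPU implementation achieves its 70x speed-up.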

  7. 76 FR 23630 - Office of New Reactors; Proposed Revision 2 to Standard Review Plan, Section 1.0 on Introduction...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-27

    ... Standard Review Plan, Section 1.0 on Introduction and Interfaces AGENCY: Nuclear Regulatory Commission (NRC... Revision 2 to Standard Review Plan (SRP), Section 1.0, ``Introduction and Interfaces'' (Agencywide Documents Access and Management System (ADAMS) Accession No. ML110110573). The Office of New Reactors (NRO...

  8. Environmental Models as a Service: Enabling Interoperability ...

    EPA Pesticide Factsheets

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantage of streamlined deployment processes and affordable cloud access to move algorithms and data to the web for discoverability and consumption. In these deployments, environmental models can become available to end users through RESTful web services and consistent application program interfaces (APIs) that consume, manipulate, and store modeling data. RESTful modeling APIs also promote discoverability and guide usability through self-documentation. Embracing the RESTful paradigm allows models to be accessible via a web standard, and the resulting endpoints are platform- and implementation-agnostic while simultaneously presenting significant computational capabilities for spatial and temporal scaling. RESTful APIs present data in a simple verb-noun web request interface: the verb dictates how a resource is consumed using HTTP methods (e.g., GET, POST, and PUT) and the noun represents the URL reference of the resource on which the verb will act. The RESTful API can self-document in both the HTTP response and an interactive web page using the Open API standard. This lets models function as an interoperable service that promotes sharing, documentation, and discoverability. Here, we discuss the
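
    The verb-noun pattern described above can be illustrated with a toy dispatcher: the HTTP method (verb) selects the operation and the URL path (noun) names the resource. The "runs" collection and payloads are hypothetical, and a real service would sit behind a web framework rather than an in-memory dictionary.

```python
# In-memory resource store standing in for a modeling service backend.
store = {}

def handle(verb: str, path: str, body: dict = None):
    """Dispatch a verb-noun request; returns (status_code, resource)."""
    if verb == "POST":            # create a new resource under the path
        store[path] = body or {}
        return 201, store[path]
    if verb == "GET":             # read an existing resource
        return (200, store[path]) if path in store else (404, None)
    if verb == "PUT":             # replace (idempotent update)
        existed = path in store
        store[path] = body or {}
        return (200 if existed else 201), store[path]
    return 405, None              # method not allowed

status, _ = handle("POST", "/runs/1", {"model": "swat", "years": 10})
print(status)                       # → 201
print(handle("GET", "/runs/1")[1])  # → {'model': 'swat', 'years': 10}
```

    The self-documentation the record mentions would be layered on top of this: an Open API description enumerating each noun (path) and the verbs it accepts.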

  9. Implementing a Quantitative Analysis Design Tool for Future Generation Interfaces

    DTIC Science & Technology

    2012-03-01

    with Remotely Piloted Aircraft (RPA) has resulted in the need for a platform to evaluate interface design. The Vigilant Spirit Control Station (VSCS) ... Spirit interface. A modified version of the HCI Index was successfully applied to perform a quantitative analysis of the baseline VSCS interface and ... time of the original VSCS interface. These results revealed the effectiveness of the tool and demonstrated in the design of future generation

  10. Top ten challenges when interfacing a laboratory information system to an electronic health record: Experience at a large academic medical center.

    PubMed

    Petrides, Athena K; Tanasijevic, Milenko J; Goonan, Ellen M; Landman, Adam B; Kantartjis, Michalis; Bates, David W; Melanson, Stacy E F

    2017-10-01

    Recent U.S. government regulations incentivize implementation of an electronic health record (EHR) with computerized order entry and structured results display. Many institutions have also chosen to interface their EHR to their laboratory information system (LIS). Reported long-term benefits include increased efficiency and improved quality and safety. In order to successfully implement an interfaced EHR-LIS, institutions must plan years in advance and anticipate the impact of an integrated system. It can be challenging to fully understand the technical, workflow and resource aspects and adequately prepare for a potentially protracted system implementation and the subsequent stabilization. We describe the top ten challenges that we encountered in our clinical laboratories following the implementation of an interfaced EHR-LIS and offer suggestions on how to overcome these challenges. This study was performed at a 777-bed, tertiary care center which recently implemented an interfaced EHR-LIS. Challenges were recorded during EHR-LIS implementation and stabilization and the authors describe the top ten. Our top ten challenges were selection and harmonization of test codes, detailed training for providers on test ordering, communication with EHR provider champions during the build process, fluid orders and collections, supporting specialized workflows, sufficient reports and metrics, increased volume of inpatient venipunctures, adequate resources during stabilization, unanticipated changes to laboratory workflow and ordering specimens for anatomic pathology. A few suggestions to overcome these challenges include regular meetings with clinical champions, advanced considerations of reports and metrics that will be needed, adequate training of laboratory staff on new workflows in the EHR and defining all tests including anatomic pathology in the LIS. EHR-LIS implementations have many challenges requiring institutions to adapt and develop new infrastructures. 
This article should be helpful to other institutions facing or undergoing a similar endeavor. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Neutron Source Facility Training Simulator Based on EPICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Young Soo; Wei, Thomas Y.; Vilim, Richard B.

    A plant operator training simulator is developed for training the plant operators as well as for design verification of the plant control system (PCS) and plant protection system (PPS) for the Kharkov Institute of Physics and Technology Neutron Source Facility. The simulator provides the operator interface for the whole plant, including the sub-critical assembly coolant loop, target coolant loop, secondary coolant loop, and other facility systems. The operator interface is implemented based on the Experimental Physics and Industrial Control System (EPICS), which is a comprehensive software development platform for distributed control systems. Since its development at Argonne National Laboratory, it has been widely adopted in the experimental physics community, e.g. for control of accelerator facilities. This work is the first implementation for a nuclear facility. The main parts of the operator interface are the plant control panel and plant protection panel. The development involved implementation of the process variable database, sequence logic, and graphical user interface (GUI) for the PCS and PPS utilizing EPICS and related software tools, e.g. the sequencer for sequence logic and Control System Studio (CSS-BOY) for the graphical user interface. For functional verification of the PCS and PPS, a plant model is interfaced, which is a physics-based model of the facility coolant loops implemented as a numerical computer code. The training simulator was tested and demonstrated its effectiveness in various plant operation sequences, e.g. start-up, shut-down, maintenance, and refueling. It was also tested for verification of the plant protection system under various trip conditions.

  12. The successful implementation of a licensed data management interface between a Sunquest(®) laboratory information system and an AB SCIEX™ mass spectrometer.

    PubMed

    French, Deborah; Terrazas, Enrique

    2013-01-01

    Interfacing complex laboratory equipment to laboratory information systems (LIS) has become a more commonly encountered problem in clinical laboratories, especially for instruments that do not have an interface provided by the vendor. Liquid chromatography-tandem mass spectrometry is a great example of such complex equipment, and has become a frequent addition to clinical laboratories. As the testing volume on such instruments can be significant, manual data entry will also be considerable and the potential for concomitant transcription errors arises. Due to this potential issue, our aim was to interface an AB SCIEX™ mass spectrometer to our Sunquest® LIS. We licensed software for the data management interface from the University of Pittsburgh, but extended this work as follows: the interface was designed so that it would accept a text file exported from the AB SCIEX™ 5500 QTrap® mass spectrometer, pre-process the file (using newly written code) into the correct format, and upload it into Sunquest® via file transfer protocol. The licensed software handled the majority of the interface tasks with the exception of converting the output from the Analyst® software to the required Sunquest® import format. This required the writing of a "pre-processor" by one of the authors, which was easily integrated with the supplied software. We successfully implemented the data management interface licensed from the University of Pittsburgh. Given the coding that was required to write the pre-processor, and the alterations to the source code that were performed when debugging the software, we would suggest that before a laboratory decides to implement such an interface, it would be necessary to have a competent computer programmer available.
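
    The "pre-processor" role the record describes can be sketched as a small translation step: read a tab-delimited instrument export, skip header comments, and rewrite each result row into the delimited record layout the LIS import expects. The column layout, field order, and delimiter here are invented for illustration; the real Analyst® export and Sunquest® import formats differ.

```python
def preprocess(export_text: str) -> list:
    """Convert hypothetical instrument export rows to LIS import records."""
    records = []
    for line in export_text.strip().splitlines():
        if line.startswith("#"):          # skip instrument header comments
            continue
        sample_id, test_code, result, units = line.split("\t")
        # Hypothetical LIS import format: pipe-delimited, fixed field order
        records.append(f"{sample_id}|{test_code}|{result}|{units}")
    return records

export = "# Analyst export\nS001\tVITD\t32.5\tng/mL\nS002\tVITD\t18.1\tng/mL"
print(preprocess(export))  # → ['S001|VITD|32.5|ng/mL', 'S002|VITD|18.1|ng/mL']
```

    In the workflow described above, the reformatted file would then be pushed to the LIS over FTP, with the licensed interface software handling the remaining steps.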

  13. Nested Dissection Interface Reconstruction in Pececillo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibben, Zechariah Joel

    A nested dissection method for interface reconstruction in a volume tracking framework has been implemented in Pececillo. This method provides a significant improvement over the traditional onion-skin method, which does not appropriately handle T-shaped multimaterial intersections and dynamic contact lines present in additive manufacturing simulations. The resulting implementation lays the groundwork for further research in numerical contact angle estimates.

  14. Software engineering activities at SEI (Software Engineering Institute)

    NASA Technical Reports Server (NTRS)

    Chittister, Clyde

    1990-01-01

    Prototyping was shown to ease system specification and implementation, especially in the area of user interfaces. Most prototyping approaches, however, do not allow the prototype to evolve into a production system or support maintenance after the system is fielded. A set of goals is presented for a modern user interface environment, and Serpent, a prototype implementation that achieves these goals, is described.

  15. Architecture of a general purpose embedded Slow-Control Adapter ASIC for future high-energy physics experiments

    NASA Astrophysics Data System (ADS)

    Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe

    2008-10-01

    This work is aimed at defining the architecture of a new digital ASIC, namely Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption, still offering the highest reliability demanded by very large particle detectors.

  16. Evaluation and Analysis of the ANSI X3T9.5 (FDDI) PMD and Proposed SMF-PMD as Influenced by Various Fiber Link Characteristics

    NASA Technical Reports Server (NTRS)

    Wernicki, M. Chris

    1991-01-01

    The purpose of this project is to evaluate the operational parameters of the Kennedy Space Center (KSC) fiber optic cable plant. The evaluation is based on the Fiber Distributed Data Interface (FDDI) Physical Medium Dependent (PMD) and Single Mode Fiber (SMF) PMD standards. From the KSC fiber profile, it would be necessary to develop the modifications needed in the existing FDDI PMD and proposed SMF-PMD standards to provide for FDDI implementation and operation at KSC. This analysis should examine the major factors that influence the operating conditions of the KSC fiber plant. These factors would include, but are not limited to, the number and type of connectors, attenuation and dispersion characteristics of the fiber, non-standard fiber sizes, modal bandwidth, and many other relevant or significant fiber plant characteristics that affect FDDI performance. This analysis is needed to gain a better understanding of the overall impact that each of these factors has on FDDI performance at KSC.
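
    Evaluations like this ultimately reduce to link-budget arithmetic: total optical loss from fiber attenuation plus connector and splice losses, compared against the loss budget the PMD standard allows. The sketch below shows the form of that calculation; all numeric values are illustrative, not the actual FDDI PMD figures or KSC plant parameters.

```python
def link_loss_db(length_km: float, atten_db_per_km: float,
                 n_connectors: int, conn_loss_db: float,
                 n_splices: int = 0, splice_loss_db: float = 0.1) -> float:
    """Total optical link loss: fiber attenuation + connector + splice losses."""
    return (length_km * atten_db_per_km
            + n_connectors * conn_loss_db
            + n_splices * splice_loss_db)

# Illustrative 2 km multimode run with four connector pairs.
loss = link_loss_db(length_km=2.0, atten_db_per_km=1.5,
                    n_connectors=4, conn_loss_db=0.75)
budget_db = 11.0                      # illustrative allowed loss budget
print(loss, loss <= budget_db)        # → 6.0 True
```

    A full analysis would add dispersion and modal-bandwidth limits on top of this power budget, since either can constrain the link before attenuation does.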

  17. Programs for Testing Processor-in-Memory Computing Systems

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.

    2006-01-01

    The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
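
    The "basic functionality" end of such a benchmark series typically checks that threads and synchronization primitives behave correctly before anything more complex is attempted. A minimal sketch of that idea (in Python, whose threads are pthreads on POSIX systems; the real microbenchmarks are not reproduced here) is:

```python
import threading

def threaded_sum(n_threads: int, increments: int) -> int:
    """Minimal multithreading correctness check: each thread increments a
    shared counter under a lock; the total must equal n_threads * increments."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(increments):
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(threaded_sum(4, 10_000))  # → 40000
```

    Removing the lock would expose a classic lost-update race, which is exactly the kind of defect a functionality benchmark is designed to catch in a compiler, simulator, or hardware implementation.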

  18. Performance management of multiple access communication networks

    NASA Astrophysics Data System (ADS)

    Lee, Suk; Ray, Asok

    1993-12-01

    This paper focuses on conceptual design, development, and implementation of a performance management tool for computer communication networks to serve large-scale integrated systems. The objective is to improve the network performance in handling various types of messages by on-line adjustment of protocol parameters. The techniques of perturbation analysis of Discrete Event Dynamic Systems (DEDS), stochastic approximation (SA), and learning automata have been used in formulating the algorithm of performance management. The efficacy of the performance management tool has been demonstrated on a network testbed. The conceptual design presented in this paper offers a step forward to bridging the gap between management standards and users' demands for efficient network operations since most standards such as ISO (International Standards Organization) and IEEE address only the architecture, services, and interfaces for network management. The proposed concept of performance management can also be used as a general framework to assist design, operation, and management of various DEDS such as computer integrated manufacturing and battlefield C³ (Command, Control, and Communications).
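
    The stochastic-approximation idea described above can be sketched with a toy Robbins-Monro iteration: a protocol parameter is adjusted on-line from noisy performance measurements using decreasing gains a_k = 1/k. The "delay model" below is invented for illustration; a real tool would measure the live network rather than simulate it.

```python
import random

def measure_delay(param: float, rng: random.Random) -> float:
    """Hypothetical noisy observation: delay grows with the parameter."""
    return 2.0 * param + rng.gauss(0.0, 0.1)

def tune(target_delay: float, iters: int = 2000, seed: int = 1) -> float:
    """Drive the measured delay toward the target by adjusting the parameter."""
    rng = random.Random(seed)
    param = 5.0                      # initial protocol parameter
    for k in range(1, iters + 1):
        error = measure_delay(param, rng) - target_delay
        param -= (1.0 / k) * error   # Robbins-Monro: decreasing step sizes
        param = max(param, 0.0)      # keep the parameter feasible
    return param

# The noiseless fixed point is target/2 = 1.5; the iterate converges near it.
print(abs(tune(target_delay=3.0) - 1.5) < 0.1)  # → True
```

    The decreasing gain sequence is what lets the iteration average out measurement noise while still correcting persistent bias, the standard trade-off in SA-based on-line tuning.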

  19. Caching strategies for improving performance of web-based Geographic applications

    NASA Astrophysics Data System (ADS)

    Liu, M.; Brodzik, M.; Collins, J. A.; Lewis, S.; Oldenburg, J.

    2012-12-01

    The NASA Operation IceBridge mission collects airborne remote sensing measurements to bridge the gap between NASA's Ice, Cloud and Land Elevation Satellite (ICESat) mission and the upcoming ICESat-2 mission. The IceBridge Data Portal from the National Snow and Ice Data Center provides an intuitive web interface for accessing IceBridge mission observations and measurements. Scientists and users usually do not have knowledge about the individual campaigns but are interested in data collected in a specific place. We have developed a high-performance map interface to allow users to quickly zoom to an area of interest and see any Operation IceBridge overflights. The map interface consists of two layers: the user can pan and zoom on the base map layer; the flight line layer that overlays the base layer provides all the campaign missions that intersect with the current map view. The user can click on the flight campaigns and download the data as needed. The OpenGIS® Web Map Service Interface Standard (WMS) provides a simple HTTP interface for requesting geo-registered map images from one or more distributed geospatial databases. Web Feature Service (WFS) provides an interface allowing requests for geographical features across the web using platform-independent calls. OpenLayers provides vector support (points, polylines and polygons) to build a WMS/WFS client for displaying both layers on the screen. MapServer, an open-source development environment for building spatially enabled internet applications, serves the WMS and WFS spatial data to OpenLayers. Early releases of the portal displayed unacceptably poor load-time performance for flight lines and the base map tiles. This issue was caused by long response times from the map server when generating all map tiles and flight line vectors.
We resolved the issue by implementing various caching strategies on top of the WMS and WFS services, including the use of Squid (www.squid-cache.org) to cache frequently-used content. Our presentation includes the architectural design of the application, and how we use OpenLayers, WMS and WFS with Squid to build a responsive web application capable of efficiently displaying geospatial data to allow the user to quickly interact with the displayed information. We describe the design, implementation and performance improvement of our caching strategies, and the tools and techniques developed to assist our data caching strategies.
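
    The caching idea described above can be shown in miniature: a small LRU cache sits in front of an expensive (here simulated) tile render, so repeated requests for the same tile are served from memory. In the real portal the caching layer was Squid in front of the map server; the render function and tile keys below are stand-ins.

```python
from collections import OrderedDict

class TileCache:
    """LRU cache keyed by (layer, zoom, x, y) in front of a render backend."""
    def __init__(self, capacity: int, render):
        self.capacity = capacity
        self.render = render             # expensive backend call
        self.tiles = OrderedDict()
        self.hits = self.misses = 0

    def get(self, layer: str, z: int, x: int, y: int) -> bytes:
        key = (layer, z, x, y)
        if key in self.tiles:
            self.tiles.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.tiles[key]
        self.misses += 1
        tile = self.render(*key)
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least recently used
        return tile

cache = TileCache(capacity=100,
                  render=lambda layer, z, x, y: f"{layer}/{z}/{x}/{y}".encode())
cache.get("flights", 3, 2, 5)
cache.get("flights", 3, 2, 5)            # second request served from cache
print(cache.hits, cache.misses)          # → 1 1
```

    Squid provides the same effect transparently at the HTTP layer, which is why it works unmodified in front of standard WMS/WFS GET requests.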

  20. A Modeling Pattern for Layered System Interfaces

    NASA Technical Reports Server (NTRS)

    Shames, Peter M.; Sarrel, Marc A.

    2015-01-01

    Communications between systems is often initially represented at a single, high level of abstraction, a link between components. During design evolution it is usually necessary to elaborate the interface model, defining it from several different, related viewpoints and levels of abstraction. This paper presents a pattern to model such multi-layered interface architectures simply and efficiently, in a way that supports expression of technical complexity, interfaces and behavior, and analysis of complexity. Each viewpoint and layer of abstraction has its own properties and behaviors. System elements are logically connected both horizontally along the communication path, and vertically across the different layers of protocols. The performance of upper layers depends on the performance of lower layers, yet the implementation of lower layers is intentionally opaque to upper layers. Upper layers are hidden from lower layers except as sources and sinks of data. The system elements may not be linked directly at each horizontal layer but only via a communication path, and end-to-end communications may depend on intermediate components that are hidden from them, but may need to be shown in certain views and analyzed for certain purposes. This architectural model pattern uses methods described in ISO 42010, Recommended Practice for Architectural Description of Software-intensive Systems and CCSDS 311.0-M-1, Reference Architecture for Space Data Systems (RASDS). A set of useful viewpoints and views are presented, along with the associated modeling representations, stakeholders and concerns. These viewpoints, views, and concerns then inform the modeling pattern. This pattern permits viewing the system from several different perspectives and at different layers of abstraction. 
An external viewpoint treats the systems of interest as black boxes and focuses on the applications view, another view exposes the details of the connections and other components between the black boxes. An internal view focuses on the implementation within the systems of interest, either showing external interface bindings and specific standards that define the communication stack profile or at the level of internal behavior. Orthogonally, a horizontal view isolates a single layer and a vertical viewpoint shows all layers at a single interface point between the systems of interest. Each of these views can in turn be described from both behavioral and structural viewpoints.
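
    The vertical layering the pattern describes mirrors protocol encapsulation: each layer treats the upper layer's message as an opaque payload and adds its own header, so lower layers are hidden from upper ones except as a transport. A toy sketch of that structure (layer names and the bracket "header" format are illustrative, not RASDS notation) is:

```python
def encapsulate(payload: str, layers: list) -> str:
    """Wrap a payload in one header per layer, application-layer innermost."""
    for layer in reversed(layers):           # app wraps first, link wraps last
        payload = f"{layer}[{payload}]"
    return payload

def decapsulate(frame: str, layers: list) -> str:
    """Strip headers outermost-first; each layer sees only its own header."""
    for layer in layers:
        prefix = f"{layer}["
        assert frame.startswith(prefix) and frame.endswith("]")
        frame = frame[len(prefix):-1]
    return frame

stack = ["link", "network", "transport", "app"]
frame = encapsulate("telemetry", stack)
print(frame)                      # → link[network[transport[app[telemetry]]]]
print(decapsulate(frame, stack))  # → telemetry
```

    A vertical view in the pattern corresponds to this whole nested frame at one interface point; a horizontal view isolates a single bracket level across the communication path.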

  1. NASA's Man-Systems Integration Standards: A Human Factors Engineering Standard for Everyone in the Nineties

    NASA Technical Reports Server (NTRS)

    Booher, Cletis R.; Goldsberry, Betty S.

    1994-01-01

    During the second half of the 1980s, a document was created by the National Aeronautics and Space Administration (NASA) to aid in the application of good human factors engineering and human interface practices to the design and development of hardware and systems for use in all United States manned space flight programs. This comprehensive document, known as NASA-STD-3000, the Man-Systems Integration Standards (MSIS), attempts to address, from a human factors engineering/human interface standpoint, all of the various types of equipment with which manned space flight crew members must deal. Basically, all of the human interface situations addressed in the MSIS are present in terrestrially based systems also. The premise of this paper is that, starting with this already created standard, comprehensive documents addressing human factors engineering and human interface concerns could be developed to aid in the design of almost any type of equipment or system with which humans interface in any terrestrial environment. Utilizing the systems and processes currently in place in the MSIS Development Facility at the Johnson Space Center in Houston, TX, any number of MSIS volumes addressing the human factors/human interface needs of any terrestrially based (or, for that matter, airborne) system could be created.

  2. Speech Recognition as a Transcription Aid: A Randomized Comparison With Standard Transcription

    PubMed Central

    Mohr, David N.; Turner, David W.; Pond, Gregory R.; Kamath, Joseph S.; De Vos, Cathy B.; Carpenter, Paul C.

    2003-01-01

    Objective. Speech recognition promises to reduce information entry costs for clinical information systems. It is most likely to be accepted across an organization if physicians can dictate without concerning themselves with real-time recognition and editing; assistants can then edit and process the computer-generated document. Our objective was to evaluate the use of speech-recognition technology in a randomized controlled trial using our institutional infrastructure. Design. Clinical note dictations from physicians in two specialty divisions were randomized to either a standard transcription process or a speech-recognition process. Secretaries and transcriptionists also were assigned randomly to each of these processes. Measurements. The duration of each dictation was measured. The amount of time spent processing a dictation to yield a finished document also was measured. Secretarial and transcriptionist productivity, defined as hours of secretary work per minute of dictation processed, was determined for speech recognition and standard transcription. Results. Secretaries in the endocrinology division were 87.3% (confidence interval, 83.3%, 92.3%) as productive with the speech-recognition technology as implemented in this study as they were using standard transcription. Psychiatry transcriptionists and secretaries were similarly less productive. Author, secretary, and type of clinical note were significant (p < 0.05) predictors of productivity. Conclusion. When implemented in an organization with an existing document-processing infrastructure (which included training and interfaces of the speech-recognition editor with the existing document entry application), speech recognition did not improve the productivity of secretaries or transcriptionists. PMID:12509359
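
The study's key metric is productivity defined as hours of secretarial work per minute of dictation processed, compared as a ratio between the two arms. The sketch below illustrates that calculation; the workload numbers are invented for illustration and are not the paper's data.

```python
# Hedged sketch of the study's productivity metric: hours of staff work
# per minute of dictation. The workloads below are hypothetical values.

def productivity(work_hours, dictation_minutes):
    """Hours of secretarial work per minute of dictation processed."""
    return work_hours / dictation_minutes

# Illustrative (invented) workloads for the same volume of dictation:
standard = productivity(work_hours=10.0, dictation_minutes=300.0)
speech = productivity(work_hours=11.5, dictation_minutes=300.0)

# A relative productivity below 100% means speech recognition required
# MORE work per dictated minute, as the study found (87.3% in endocrinology).
relative = 100.0 * standard / speech
print(f"relative productivity: {relative:.1f}%")
```

With these invented inputs the ratio happens to land near the study's reported 87.3%, which is why they were chosen for the illustration.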

  3. Integrated Circuit Chip Improves Network Efficiency

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Prior to 1999 and the development of SpaceWire, a standard for high-speed links for computer networks managed by the European Space Agency (ESA), there was no high-speed communications protocol for flight electronics. Onboard computers, processing units, and other electronics had to be designed for individual projects and then redesigned for subsequent projects, which increased development periods, costs, and risks. After adopting the SpaceWire protocol in 2000, NASA implemented the standard on the Swift mission, a gamma ray burst-alert telescope launched in November 2004. Scientists and developers on the James Webb Space Telescope further developed the network version of SpaceWire. In essence, SpaceWire enables more science missions at a lower cost, because it provides a standard interface between flight electronics components; new systems need not be custom built to accommodate individual missions, so electronics can be reused. New protocols are helping to standardize higher layers of computer communication. Goddard Space Flight Center improved on the ESA-developed SpaceWire by enabling standard protocols, which included defining quality of service and supporting plug-and-play capabilities. Goddard upgraded SpaceWire to make the routers more efficient and reliable, with features including redundant cables, simultaneous discrete broadcast pulses, prevention of network blockage, and improved verification. Redundant cables simplify management because the user does not need to worry about which connection is available, and simultaneous broadcast signals allow multiple users to broadcast low-latency side-band signal pulses across the network using the same resources for data communication. Additional features have been added to the SpaceWire switch to prevent network blockage so that more robust networks can be designed. 
Goddard's verification environment for the link-and-switch implementation continuously randomizes and tests different parts, constantly anticipating situations, which helps improve communications reliability. It has been tested in many different implementations for compatibility.

  4. Miniature Housings for Electronics With Standard Interfaces

    NASA Technical Reports Server (NTRS)

    Howard, David E.; Smith, Dennis A.; Alhorn, Dean C.

    2006-01-01

    A family of general-purpose miniature housings has been designed to contain diverse sensors, actuators, and drive circuits plus associated digital electronic readout and control circuits. The circuits contained in the housings communicate with the external world via standard RS-485 interfaces.
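
RS-485 is a multi-drop electrical standard, so devices on the bus typically layer an addressed, checksummed frame format on top of it. The abstract does not specify the housings' actual protocol; the following is only a sketch of the kind of ASCII framing such devices commonly use, with an invented `addr,command,payload*checksum` layout.

```python
# Hypothetical ASCII framing for a multi-drop RS-485 bus. Each frame is
# "<addr>,<command>,<payload>*<checksum>\n", where the checksum is the XOR
# of all bytes before the '*'. This layout is an assumption for
# illustration, not the NASA units' documented protocol.

def xor_checksum(text: str) -> int:
    c = 0
    for b in text.encode("ascii"):
        c ^= b
    return c

def build_frame(addr: int, command: str, payload: str = "") -> str:
    body = f"{addr:02d},{command},{payload}"
    return f"{body}*{xor_checksum(body):02X}\n"

def parse_frame(frame: str):
    body, _, cks = frame.strip().rpartition("*")
    if xor_checksum(body) != int(cks, 16):
        raise ValueError("checksum mismatch")
    addr, command, payload = body.split(",", 2)
    return int(addr), command, payload

frame = build_frame(7, "READ", "temp")
print(frame.strip())
print(parse_frame(frame))
```

On real hardware the frame bytes would be written to and read from the serial port; the framing and checksum logic shown here is independent of the transport.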

  5. Standard payload computer for the international space station

    NASA Astrophysics Data System (ADS)

    Knott, Karl; Taylor, Chris; Koenig, Horst; Schlosstein, Uwe

    1999-01-01

    This paper describes the development and application of a Standard PayLoad Computer (SPLC) which is being applied by the majority of ESA payloads accommodated on the International Space Station (ISS). The strategy of adopting a standard computer leads to a radical rethink of the payload data handling procurement process. Traditionally, this has been based on a proprietary development with repeating costs for qualification, spares, expertise and maintenance for each new payload. Implementations have also tended to be unique with very little opportunity for reuse or utilisation of previous developments. While this may to some extent have been justified for short duration one-off missions, the availability of a standard, long term space infrastructure calls for a quite different approach. To support a large number of concurrent payloads, the ISS implementation relies heavily on standardisation, and this is particularly true in the area of payloads. Physical accommodation, data interfaces, protocols, component quality, operational requirements and maintenance including spares provisioning must all conform to a common set of standards. The data handling system and associated computer used by each payload must also comply with these common requirements, and thus it makes little sense to instigate multiple developments for the same task. The opportunity exists to provide a single computer suitable for all payloads, but with only a one-off development and qualification cost. If this is combined with the benefits of multiple procurement, centralised spares and maintenance, there is potential for great savings to be made by all those concerned in the payload development process.
In response to the above drivers, the SPLC is based on the following concepts:
• A one-off development and qualification process
• A modular computer, configurable according to the payload developer's needs from a list of space-qualified items
• An `open system' which may be added to by payload developers
• Core software providing a suite of common communications services, including a verified protocol implementation required to communicate with the ISS
• Standardized ground support equipment and an accompanying software development environment
• The use of commercial hardware and software standards and products.

  6. PathwayAccess: CellDesigner plugins for pathway databases.

    PubMed

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  7. Experience in the application of Java Technologies in telemedicine

    PubMed Central

    Fedyukin, IV; Reviakin, YG; Orlov, OI; Doarn, CR; Harnett, BM; Merrell, RC

    2002-01-01

    The Java language has been demonstrated to be an effective tool in supporting medical image viewing in Russia. This evaluation was completed by obtaining a maximum of 20 images (depending on the client's computer workstation) from one patient using a commercially available computed tomography (CT) scanner. The images were compared against standard CT images that were viewed at the site of capture. There was no appreciable difference. The client side is a lightweight component that provides an intuitive interface for end users. Each image is loaded in its own thread, and the user can begin work after the first image has been loaded. This feature is especially useful at slow connection speeds (9.6 Kbps, for example). The server side, implemented with the Java Servlet Engine, works more effectively than common gateway interface (CGI) programs do. The advantages of Java technology place this program on the next level of application development. This paper presents a unique application of Java in telemedicine. PMID:12459045

  8. A single-chip event sequencer and related microcontroller instrumentation for atomic physics research.

    PubMed

    Eyler, E E

    2011-01-01

    A 16-bit digital event sequencer with 50 ns resolution and 50 ns trigger jitter is implemented by using an internal 32-bit timer on a dsPIC30F4013 microcontroller, controlled by an easily modified program written in standard C. It can accommodate hundreds of output events, and adjacent events can be spaced as closely as 1.5 μs. The microcontroller has robust 5 V inputs and outputs, allowing a direct interface to common laboratory equipment and other electronics. A USB computer interface and a pair of analog ramp outputs can be added with just two additional chips. An optional display/keypad unit allows direct interaction with the sequencer without requiring an external computer. Minor additions also allow simple realizations of other complex instruments, including a precision high-voltage ramp generator for driving spectrum analyzers or piezoelectric positioners, and a low-cost proportional integral differential controller and lock-in amplifier for laser frequency stabilization with about 100 kHz bandwidth.
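
The sequencer's constraints (50 ns resolution, hundreds of events, adjacent events no closer than 1.5 μs) can be captured in a small host-side model of the event table. This is an assumption-laden sketch of the data structure, not the dsPIC firmware itself: events are (time-in-ticks, port-value) pairs, with one tick equal to 50 ns.

```python
# Host-side model (an assumption, not the actual firmware) of the
# sequencer's event table. Each event is (time_in_50ns_ticks, port_value);
# adjacent events must be at least 1.5 us apart, i.e. 30 ticks at 50 ns.

TICK_NS = 50
MIN_SPACING_TICKS = 30  # 1.5 us / 50 ns

def validate_sequence(events):
    """Sort events by time and enforce the minimum adjacent spacing."""
    events = sorted(events)
    for (t0, _), (t1, _) in zip(events, events[1:]):
        if t1 - t0 < MIN_SPACING_TICKS:
            raise ValueError(f"events at {t0} and {t1} ticks are too close")
    return events

# Three output events: trigger at 0, gate high at 2 us, gate low at 10 us.
seq = validate_sequence([(0, 0x01), (40, 0x03), (200, 0x01)])
print([(t * TICK_NS, v) for t, v in seq])
```

On the microcontroller, a validated table like this would be walked by the 32-bit timer's compare interrupt, writing each port value at its scheduled tick.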

  9. Improvements to Autoplot's HAPI Support

    NASA Astrophysics Data System (ADS)

    Faden, J.; Vandegriff, J. D.; Weigel, R. S.

    2017-12-01

    Autoplot handles data from a variety of data servers. These servers communicate data in different forms, each somewhat different in capabilities and each needing new software to interface. The Heliophysics Application Programmer's Interface (HAPI) attempts to ease this by providing a standard target for clients and servers to meet. Autoplot fully supports reading data from HAPI servers, and support continues to improve as the HAPI server specification matures. This collaboration has already produced robust clients and documentation that would be expensive for groups creating their own protocols to reproduce. For example, client-side data caching has been introduced: Autoplot maintains a cache of data for performance and offline use. This is a feature we considered for previous data systems but could never afford the time to study and implement carefully. Also, Autoplot itself can be used as a server, making the data it can read and the results of its processing available to other data systems. Autoplot's use with other data transmission systems is reviewed as well, outlining the features of each system.
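
Part of what makes HAPI a cheap target for clients is its simple data format: servers can return data as CSV rows whose first column is an ISO 8601 timestamp, followed by the requested parameter values. The sketch below parses such a response; the dataset values are invented for illustration.

```python
# Minimal sketch of parsing a HAPI-style CSV data response. The rows below
# are invented example data, not output from any real HAPI server.
import csv
import io
from datetime import datetime

response_body = """\
2017-12-01T00:00:00Z,1.5
2017-12-01T00:01:00Z,2.0
2017-12-01T00:02:00Z,2.5
"""

def parse_hapi_csv(text):
    """Return a list of (datetime, value) pairs from a CSV data response."""
    records = []
    for row in csv.reader(io.StringIO(text)):
        t = datetime.strptime(row[0], "%Y-%m-%dT%H:%M:%SZ")
        records.append((t, float(row[1])))
    return records

data = parse_hapi_csv(response_body)
print(len(data), data[0][1], data[-1][1])
```

A real client would fetch the body over HTTP from a server's data endpoint and consult the server's metadata for the parameter list; only the parsing step is shown here.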

  10. Compact, Low-Overhead, MIL-STD-1553B Controller

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Barto, Rod

    2009-01-01

    A compact and flexible controller has been developed to provide MIL-STD-1553B Remote Terminal (RT) communications and supporting and related functions with minimal demand on the resources of the system in which the controller is to be installed. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) Many other MIL-STD-1553B RT controllers are complicated, and to enable them to function, it is necessary to provide software and to use such ancillary separate hardware devices as microprocessors and dual-port memories. The present controller functions without need for software or any ancillary hardware. In addition, it contains a flexible system interface and extensive support hardware while including on-chip error-checking and diagnostic support circuitry. This controller is implemented within part of a modern field-programmable gate array.

  11. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been increasing interest in object-oriented distributed computing, since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message-passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
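
The BSD socket baseline in benchmarks like this is typically a simple echo round trip: send a small message, wait for it to come back, and time the loop. The following is a hedged, self-contained sketch of that pattern over loopback; it is not the paper's actual test harness, and loopback latencies are far lower than those of the networks measured in the study.

```python
# Loopback echo round-trip timing: a minimal sketch of the kind of
# socket-level baseline measurement used in communication benchmarks.
import socket
import threading
import time

def echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)  # echo everything back

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

latencies = []
with socket.create_connection(srv.getsockname()) as cli:
    msg = b"x" * 64  # tiny message; on loopback it arrives in one recv
    for _ in range(100):
        t0 = time.perf_counter()
        cli.sendall(msg)
        assert cli.recv(4096) == msg
        latencies.append(time.perf_counter() - t0)

print(f"median round trip: {sorted(latencies)[50] * 1e6:.1f} us")
```

A CORBA or PVM equivalent would replace the raw send/recv pair with a remote method invocation or message-passing call, which is exactly the overhead such studies quantify.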

  12. Experience in the application of Java Technologies in telemedicine.

    PubMed

    Fedyukin, IV; Reviakin, YG; Orlov, OI; Doarn, CR; Harnett, BM; Merrell, RC

    2002-09-17

    Java language has been demonstrated to be an effective tool in supporting medical image viewing in Russia. This evaluation was completed by obtaining a maximum of 20 images, depending on the client's computer workstation from one patient using a commercially available computer tomography (CT) scanner. The images were compared against standard CT images that were viewed at the site of capture. There was no appreciable difference. The client side is a lightweight component that provides an intuitive interface for end users. Each image is loaded in its own thread and the user can begin work after the first image has been loaded. This feature is especially useful on slow connection speed, 9.6 Kbps for example. The server side, which is implemented by the Java Servlet Engine works more effective than common gateway interface (CGI) programs do. Advantages of the Java Technology place this program on the next level of application development. This paper presents a unique application of Java in telemedicine.

  13. Scalable Integrated Multi-Mission Support System (SIMSS) Simulator Release 2.0 for GMSEC

    NASA Technical Reports Server (NTRS)

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    Scalable Integrated Multi-Mission Support System (SIMSS) Simulator Release 2.0 software is designed to perform a variety of test activities related to spacecraft simulations and ground segment checks. This innovation uses the existing SIMSS framework, which interfaces with the GMSEC (Goddard Mission Services Evolution Center) Application Programming Interface (API) Version 3.0 message middleware, and allows SIMSS to accept GMSEC standard messages via the GMSEC message bus service. SIMSS is a distributed, component-based, plug-and-play client-server system that is useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations, and is designed to be user-configurable, or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as to significantly reduce a mission's integration time and risk.

  14. Medication Reconciliation: Work Domain Ontology, prototype development, and a predictive model.

    PubMed

    Markowitz, Eliz; Bernstam, Elmer V; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System's and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load.
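
A Keystroke-Level Model analysis predicts task time by summing fixed operator costs. The sketch below uses the commonly cited textbook operator times (Card, Moran & Newell); the task sequences are invented examples, not the paper's actual MR task encodings.

```python
# Keystroke-Level Model sketch with the commonly cited textbook operator
# times. The operator sequences below are hypothetical illustrations.
KLM_SECONDS = {
    "K": 0.20,  # keystroke or button press
    "P": 1.10,  # point with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence: str) -> float:
    """Predicted execution time for a string of KLM operators."""
    return round(sum(KLM_SECONDS[op] for op in sequence), 2)

# Hypothetical comparison: selecting one medication from a list.
prototype = klm_time("MPK")     # think, point, click
legacy = klm_time("MPKMPK")     # an extra navigate-and-click step
print(prototype, legacy)
```

Comparing such sums across interfaces is how the study ranked the three MR tools on steps, mental operations, and total time.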

  15. Medication Reconciliation: Work Domain Ontology, Prototype Development, and a Predictive Model

    PubMed Central

    Markowitz, Eliz; Bernstam, Elmer V.; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R.

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System’s and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load. PMID:22195146

  16. An expert system shell for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1993-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. VEG is described in detail in several references. The first generation version of VEG was extended. In the first year of this contract, an interface to a file of unknown cover type data was constructed. An interface that allowed the results of VEG to be written to a file was also implemented. A learning system that learned class descriptions from a data base of historical cover type data and then used the learned class descriptions to classify an unknown sample was built. This system had an interface that integrated it into the rest of VEG. The VEG subgoal PROPORTION.GROUND.COVER was completed and a number of additional techniques that inferred the proportion ground cover of a sample were implemented. This work was previously described. The work carried out in the second year of the contract is described. The historical cover type database was removed from VEG and stored as a series of flat files that are external to VEG. An interface to the files was provided. The framework and interface for two new VEG subgoals that estimate the atmospheric effect on reflectance data were built. A new interface that allows the scientist to add techniques to VEG without assistance from the developer was designed and implemented. A prototype Help System that allows the user to get more information about each screen in the VEG interface was also added to VEG.

  17. Accelerating atomistic calculations of quantum energy eigenstates on graphic cards

    NASA Astrophysics Data System (ADS)

    Rodrigues, Walter; Pecchia, A.; Lopez, M.; Auf der Maur, M.; Di Carlo, A.

    2014-10-01

    Electronic properties of nanoscale materials require the calculation of eigenvalues and eigenvectors of large matrices. This bottleneck can be overcome by parallel computing techniques or the introduction of faster algorithms. In this paper we report a custom implementation of the Lanczos algorithm with simple restart, optimized for graphical processing units (GPUs). The whole algorithm has been developed using CUDA and runs entirely on the GPU, with a specialized implementation that spares memory and minimizes host-to-device data transfers. Furthermore, parallel distribution over several GPUs has been attained using the standard Message Passing Interface (MPI). Benchmark calculations performed on a GaN/AlGaN wurtzite quantum dot with up to 600,000 atoms are presented. The empirical tight-binding (ETB) model with an sp3d5s∗+spin-orbit parametrization has been used to build the system Hamiltonian (H).
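
The core of such solvers is the Lanczos iteration: it builds a small Krylov basis, tridiagonalizes the matrix in that basis, and takes extreme eigenvalues from the tridiagonal matrix. Below is a NumPy sketch rather than a CUDA/MPI implementation; full reorthogonalization is used here for robustness, whereas GPU codes often use cheaper schemes plus restarts.

```python
# NumPy sketch of the Lanczos iteration for the largest eigenvalue of a
# symmetric matrix. This illustrates the algorithm only; it is not the
# paper's CUDA/MPI implementation.
import numpy as np

def lanczos_extreme(A, m, rng):
    """Approximate the largest eigenvalue of symmetric A with m Lanczos steps."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # Full reorthogonalization against all previous Lanczos vectors.
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)[-1]  # largest Ritz value

rng = np.random.default_rng(0)
B = rng.standard_normal((300, 300))
A = (B + B.T) / 2  # random symmetric test matrix
approx = lanczos_extreme(A, m=60, rng=rng)
exact = np.linalg.eigvalsh(A)[-1]
print(abs(approx - exact))
```

The appeal for GPUs is that the dominant cost is the matrix-vector product `A @ q`, which parallelizes well and, for sparse tight-binding Hamiltonians, avoids ever storing dense factorizations.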

  18. Specification and testing for power by wire aircraft

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Kenney, Barbara H.

    1993-01-01

    A power by wire aircraft is one in which all active functions other than propulsion are implemented electrically. Other nomenclature are 'all electric airplane,' or 'more electric airplane.' What is involved is the task of developing and certifying electrical equipment to replace existing hydraulics and pneumatics. When such functions, however, are primary flight controls which are implemented electrically, new requirements are imposed that were not anticipated by existing power system designs. Standards of particular impact are the requirements of ultra-high reliability, high peak transient bi-directional power flow, and immunity to electromagnetic interference and lightning. Not only must the electromagnetic immunity of the total system be verifiable, but box level tests and meaningful system models must be established to allow system evaluation. This paper discusses some of the problems, the system modifications involved, and early results in establishing wiring harness and interface susceptibility requirements.

  19. SToRM: A Model for 2D environmental hydraulics

    USGS Publications Warehouse

    Simões, Francisco J. M.

    2017-01-01

    A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model, SToRM, is based on an unstructured cell-centered finite volume formulation and on nonlinear strong stability preserving Runge-Kutta time stepping schemes. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. Computational efficiency is achieved through a parallel implementation based on the OpenMP standard and the Fortran programming language. SToRM's implementation within a graphical user interface is discussed. Field application of SToRM is illustrated by utilizing it to estimate peak flow discharges in a flooding event of the St. Vrain Creek in Colorado, U.S.A., in 2013, which reached 850 m³/s (~30,000 ft³/s) at the location of this study.
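
The strong stability preserving Runge-Kutta schemes SToRM relies on advance the solution through convex combinations of forward-Euler stages. The classic three-stage, third-order scheme of Shu and Osher is sketched below on a scalar test problem, du/dt = -u; the shallow water right-hand side itself is beyond a short example.

```python
# Three-stage, third-order SSP Runge-Kutta step (Shu-Osher form), checked
# against the exact solution of the scalar test problem du/dt = -u.
import math

def ssp_rk3_step(u, dt, f):
    """One SSP RK3 step: convex combinations of forward-Euler substeps."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):  # integrate from t = 0 to t = 1
    u = ssp_rk3_step(u, dt, f)
print(u, math.exp(-1.0))
```

Because every stage is a convex combination of Euler steps, any stability bound (e.g. non-negativity of water depth) that holds for forward Euler carries over, which is why these schemes suit Godunov-type shallow water solvers.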

  20. A System Implementation for Cooperation between UHF RFID Reader and TCP/IP Device

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hoon; Jin, Ik Soo

    This paper presents a system implementation for cooperation between a UHF RFID reader and a TCP/IP device that can be used as a home gateway. The system consists of a UHF RFID tag, a UHF RFID reader, an RF end-device, an RF coordinator, and a TCP/IP interface. The UHF RFID reader is compatible with EPC Class-0/Gen1, Class-1/Gen1, 2 and ISO 18000-6B, operating at 915 MHz. In particular, the UHF RFID reader can be combined with an RF end-device/coordinator for a ZigBee (IEEE 802.15.4) interface, a low-power wireless standard. The TCP/IP device communicates with the RFID reader over a wired link and with the ZigBee end-device over a wireless link. The experimental results show that the developed system can provide the required networking.
