Sample records for standard interface file

  1. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized. (15 references) (auth)

  2. A database for TMT interface control documents

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
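
    The record above centers on machine-readable interface files. As a minimal sketch of that idea in Python (the abstract does not show the actual TMT schema, so every field name below is hypothetical), a command-interface description for one component could be assembled and serialized like this:

      import json

      # Hypothetical command-interface description for one CSW component.
      # The real TMT file schema is not given in the record; these field
      # names are invented for illustration only.
      command_interface = {
          "subsystem": "tcs",
          "component": "pointing-assembly",
          "receives": [
              {"name": "setTarget", "parameters": ["ra", "dec"]},
          ],
          "sends": [
              {"name": "follow", "destination": "mount-assembly"},
          ],
      }

      with open("pointing-assembly-command-model.json", "w") as f:
          json.dump(command_interface, f, indent=2)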

  3. Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.

    1987-01-01

    This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
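
    The report's central design point is that all file access goes through a fixed subroutine interface, so format details never leak into the main program. Below is a minimal Python analogue of that discipline (GetData itself is FORTRAN, and these class and method names are invented for illustration):

      import abc

      class TimeHistoryFile(abc.ABC):
          """Fixed interface: callers see only these operations,
          never the on-disk format."""

          @abc.abstractmethod
          def signals(self):
              """Return the names of the signals stored in the file."""

          @abc.abstractmethod
          def read_signal(self, name):
              """Return (times, values) for one signal."""

      class CsvTimeHistoryFile(TimeHistoryFile):
          # Format-specific detail is confined to subclasses like this
          # one; supporting a new format never touches the callers.
          def __init__(self, path):
              self.path = path

          def signals(self):
              with open(self.path) as f:
                  return f.readline().strip().split(",")[1:]

          def read_signal(self, name):
              times, values = [], []
              with open(self.path) as f:
                  col = f.readline().strip().split(",").index(name)
                  for line in f:
                      row = line.strip().split(",")
                      times.append(float(row[0]))
                      values.append(float(row[col]))
              return times, values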

  4. Flexibility and Performance of Parallel File Systems

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

    As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
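
    The "strided read and write interfaces" mentioned above let an application state a regular access pattern in one request instead of many small ones. A sketch of what such a request denotes (a generic illustration, not these systems' actual API):

      def strided_requests(start, record_len, stride, count):
          """Yield (offset, length) pairs for one strided request:
          'count' records of 'record_len' bytes, one every 'stride'
          bytes."""
          for i in range(count):
              yield (start + i * stride, record_len)

      # e.g. one column of a row-major matrix of 4-byte elements stored
      # in 1024-byte rows, expressed as a single strided request:
      print(list(strided_requests(start=0, record_len=4,
                                  stride=1024, count=8)))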

  5. TADPLOT program, version 2.0: User's guide

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.

    1991-01-01

    The TADPLOT Program, Version 2.0 is described. The TADPLOT program is a software package coordinated by a single, easy-to-use interface, enabling the researcher to access several standard file formats, selectively collect specific subsets of data, and create full-featured publication and viewgraph quality plots. The user-interface was designed to be independent from any file format, yet provide capabilities to accommodate highly specialized data queries. Integrated with an applications software network, data can be assessed, collected, and viewed quickly and easily. Since the commands are data independent, subsequent modifications to the file format will be transparent, while additional file formats can be integrated with minimal impact on the user-interface. The graphical capabilities are independent of the method of data collection; thus, the data specification and subsequent plotting can be modified and upgraded as separate functional components. The graphics kernel selected adheres to the full functional specifications of the CORE standard. Both interface and postprocessing capabilities are fully integrated into TADPLOT.

  6. jmzTab: a java interface to the mzTab data standard.

    PubMed

    Xu, Qing-Wei; Griss, Johannes; Wang, Rui; Jones, Andrew R; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2014-06-01

    mzTab is the most recent standard format developed by the Proteomics Standards Initiative. mzTab is a flexible tab-delimited file that can capture identification and quantification results coming from MS-based proteomics and metabolomics approaches. We here present an open-source Java application programming interface for mzTab called jmzTab. The software allows the efficient processing of mzTab files, providing read and write capabilities, and is designed to be embedded in other software packages. The second key feature of the jmzTab model is that it provides a flexible framework to maintain the logical integrity between the metadata and the table-based sections in the mzTab files. In this article, as two example implementations, we also describe two stand-alone tools that can be used to validate mzTab files and to convert PRIDE XML files to mzTab. The library is freely available at http://mztab.googlecode.com. © 2014 The Authors PROTEOMICS Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
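
    jmzTab itself is a Java library; purely to illustrate the file layout it handles, the sketch below (Python, greatly simplified) groups mzTab lines by their leading section code, such as MTD for metadata and PSM for peptide spectrum match rows. The real API adds typed models and the metadata/table integrity checks described above.

      from collections import defaultdict

      def read_mztab(path):
          """Group tab-delimited mzTab lines by their section code
          (MTD, PRH/PRT, PEH/PEP, PSH/PSM, ...)."""
          sections = defaultdict(list)
          with open(path) as f:
              for line in f:
                  fields = line.rstrip("\n").split("\t")
                  if fields[0]:
                      sections[fields[0]].append(fields[1:])
          return sections

      sections = read_mztab("example.mztab")  # file name illustrative
      print(len(sections["PSM"]), "PSM rows")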

  7. The jmzQuantML programming interface and validator for the mzQuantML data standard.

    PubMed

    Qi, Da; Krishna, Ritesh; Jones, Andrew R

    2014-03-01

    The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard. © 2014 The Authors. Proteomics published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. MPI-IO: A Parallel File I/O Interface for MPI Version 0.3

    NASA Technical Reports Server (NTRS)

    Corbett, Peter; Feitelson, Dror; Hsu, Yarsun; Prost, Jean-Pierre; Snir, Marc; Fineberg, Sam; Nitzberg, Bill; Traversat, Bernard; Wong, Parkson

    1995-01-01

    Thanks to MPI [9], writing portable message passing parallel programs is almost a reality. One of the remaining problems is file I/O. Although parallel file systems support similar interfaces, the lack of a standard makes developing a truly portable program impossible. Further, the closest thing to a standard, the UNIX file interface, is ill-suited to parallel computing. Working together, IBM Research and NASA Ames have drafted MPI-IO, a proposal to address the portable parallel I/O problem. In a nutshell, this proposal is based on the idea that I/O can be modeled as message passing: writing to a file is like sending a message, and reading from a file is like receiving a message. MPI-IO intends to leverage the relatively wide acceptance of the MPI interface in order to create a similar I/O interface. The above approach can be materialized in different ways. The current proposal represents the result of extensive discussions (and arguments), but is by no means finished. Many changes can be expected as additional participants join the effort to define an interface for portable I/O. This document is organized as follows. The remainder of this section includes a discussion of some issues that have shaped the style of the interface. Section 2 presents an overview of MPI-IO as it is currently defined. It specifies what the interface currently supports and states what would need to be added to the current proposal to make the interface more complete and robust. The next seven sections contain the interface definition itself. Section 3 presents definitions and conventions. Section 4 contains functions for file control, most notably open. Section 5 includes functions for independent I/O, both blocking and nonblocking. Section 6 includes functions for collective I/O, both blocking and nonblocking. Section 7 presents functions to support system-maintained file pointers, and shared file pointers. Section 8 presents constructors that can be used to define useful filetypes (the role of filetypes is explained in Section 2 below). Section 9 presents how the error handling mechanism of MPI is supported by the MPI-IO interface. All this is followed by a set of appendices, which contain information about issues that have not been totally resolved yet, and about design considerations. The reader can find there the motivation behind some of our design choices. More information on this would definitely be welcome and will be included in a further release of this document. The first appendix contains a description of MPI-IO's 'hints' structure which is used when opening a file. Appendix B is a discussion of various issues in the support for file pointers. Appendix C explains what we mean in talking about atomic access. Appendix D provides detailed examples of filetype constructors, and Appendix E contains a collection of arguments for and against various design decisions.
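
    The record describes the 0.3 draft, but its "I/O as message passing" model survived into the MPI-2 standard, so the idea can be shown with today's API. A minimal sketch using mpi4py, a Python binding to MPI that postdates the draft itself:

      # Run under an MPI launcher, e.g.: mpiexec -n 4 python demo.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      buf = np.full(4, rank, dtype=np.int32)
      fh = MPI.File.Open(comm, "out.dat",
                         MPI.MODE_CREATE | MPI.MODE_WRONLY)
      # "Writing to a file is like sending a message": each rank sends
      # its buffer to a disjoint region of the shared file.
      fh.Write_at(rank * buf.nbytes, buf)
      fh.Close()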

  9. A VHDL Interface for Altera Design Files

    DTIC Science & Technology

    1990-01-01

    this requirement dictated that all prototype products developed during this research would have to mirror standard VHDL code. In fact, the final... product would have to meet the 20 syntactic and semantic requirements of standard VHDL. The coding style used to create the transformation program was the...

  10. 78 FR 54626 - Announcing Approval of Federal Information Processing Standard (FIPS) Publication 201-2, Personal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... its virtual contact interface be made mandatory as soon as possible for the many beneficial features... messaging and the virtual contact interface in the Standard, some Federal departments and agencies have... Laboratory Programs. [FR Doc. 2013-21491 Filed 9-4-13; 8:45 am] BILLING CODE 3510-13-P ...

  11. jmzIdentML API: A Java interface to the mzIdentML standard for peptide and protein identification data.

    PubMed

    Reisinger, Florian; Krishna, Ritesh; Ghali, Fawaz; Ríos, Daniel; Hermjakob, Henning; Vizcaíno, Juan Antonio; Jones, Andrew R

    2012-03-01

    We present a Java application programming interface (API), jmzIdentML, for the Human Proteome Organisation (HUPO) Proteomics Standards Initiative (PSI) mzIdentML standard for peptide and protein identification data. The API combines the power of Java Architecture for XML Binding (JAXB) and an XPath-based random-access indexer to allow a fast and efficient mapping of extensible markup language (XML) elements to Java objects. The internal references in the mzIdentML files are resolved in an on-demand manner, where the whole file is accessed as a random-access swap file, and only the relevant piece of XML is selected for mapping to its corresponding Java object. The API is highly efficient in its memory usage and can handle files of arbitrary sizes. The API follows the official release of the mzIdentML (version 1.1) specifications and is available in the public domain under a permissive licence at http://www.code.google.com/p/jmzidentml/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
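
    The XPath-indexed random access described above amounts to remembering where each element starts so it can be unmarshalled on demand. A much-simplified Python sketch of such an index (the element name is real mzIdentML; the real indexer streams the file rather than reading it whole):

      import re

      def index_elements(path, tag):
          """Return the byte offset of each '<tag ...>' occurrence so
          an element can later be re-read and unmarshalled on demand."""
          pattern = re.compile(("<" + tag + r"[\s>]").encode())
          with open(path, "rb") as f:
              data = f.read()
          return [m.start() for m in pattern.finditer(data)]

      offsets = index_elements("result.mzid",
                               "SpectrumIdentificationResult")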

  12. Development of the FITS tools package for multiple software environments

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; Blackburn, J. K.

    1992-01-01

    The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.

  13. Mission Operations Center (MOC) - Precipitation Processing System (PPS) Interface Software System (MPISS)

    NASA Technical Reports Server (NTRS)

    Ferrara, Jeffrey; Calk, William; Atwell, William; Tsui, Tina

    2013-01-01

    MPISS is an automatic file transfer system that implements a combination of standard and mission-unique transfer protocols required by the Global Precipitation Measurement Mission (GPM) Precipitation Processing System (PPS) to control the flow of data between the MOC and the PPS. The primary features of MPISS are file transfers (both with and without PPS specific protocols), logging of file transfer and system events to local files and a standard messaging bus, short term storage of data files to facilitate retransmissions, and generation of file transfer accounting reports. The system includes a graphical user interface (GUI) to control the system, allow manual operations, and to display events in real time. The PPS specific protocols are an enhanced version of those that were developed for the Tropical Rainfall Measuring Mission (TRMM). All file transfers between the MOC and the PPS use the SSH File Transfer Protocol (SFTP). For reports and data files generated within the MOC, no additional protocols are used when transferring files to the PPS. For observatory data files, an additional handshaking protocol of data notices and data receipts is used. MPISS generates and sends to the PPS data notices containing data start and stop times along with a checksum for the file for each observatory data file transmitted. MPISS retrieves the PPS generated data receipts that indicate the success or failure of the PPS to ingest the data file and/or notice. MPISS retransmits the appropriate files as indicated in the receipt when required. MPISS also automatically retrieves files from the PPS. The unique feature of this software is the use of both standard and PPS specific protocols in parallel. The advantage of this capability is that it supports users that require the PPS protocol as well as those that do not require it. The system is highly configurable to accommodate the needs of future users.
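
    The handshaking protocol pairs each observatory data file with a notice carrying data start and stop times plus a checksum. The record fixes that content but not the encoding, so in the sketch below JSON and MD5 are assumptions:

      import hashlib
      import json

      def make_data_notice(path, start_time, stop_time):
          """Assemble a data notice for one observatory data file."""
          digest = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(1 << 20), b""):
                  digest.update(block)
          return json.dumps({"file": path,
                             "start": start_time,
                             "stop": stop_time,
                             "checksum": digest.hexdigest()})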

  14. Molray--a web interface between O and the POV-Ray ray tracer.

    PubMed

    Harris, M; Jones, T A

    2001-08-01

    A publicly available web-based interface is presented for producing high-quality ray-traced images and movies from the molecular-modelling program O [Jones et al. (1991), Acta Cryst. A47, 110-119]. The interface allows the user to select O-plot files and set parameters to create standard input files for the popular ray-tracing renderer POV-Ray, which can then produce publication-quality still images or simple movies. To ensure ease of use, we have made this service available to the O user community via the World Wide Web. The public Molray server is available at http://xray.bmc.uu.se/molray.

  15. A generic archive protocol and an implementation

    NASA Technical Reports Server (NTRS)

    Jordan, J. M.; Jennings, D. G.; Mcglynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.

    1992-01-01

    Archiving vast amounts of data has become a major part of every scientific space mission today. The Generic Archive/Retrieval Services Protocol (GRASP) addresses the question of how to archive the data collected in an environment where the underlying hardware archives may be rapidly changing. GRASP is a device independent specification defining a set of functions for storing and retrieving data from an archive, as well as other support functions. GRASP is divided into two levels: the Transfer Interface and the Action Interface. The Transfer Interface is computer/archive independent code while the Action Interface contains code which is dedicated to each archive/computer addressed. Implementations of the GRASP specification are currently available for DECstations running Ultrix, Sparcstations running SunOS, and microVAX/VAXstation 3100's. The underlying archive is assumed to function as a standard Unix or VMS file system. The code, written in C, is a single suite of files. Preprocessing commands define the machine unique code sections in the device interface. The implementation was written, to the greatest extent possible, using only ANSI standard C functions.
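
    GRASP's split into a device-independent Transfer Interface and per-archive Action Interfaces is a classic layering. A skeletal Python rendering of the structure (GRASP itself is written in C, and these class names are invented):

      class ActionInterface:
          """Archive/computer-dedicated layer: one implementation
          per storage device."""
          def put(self, name, data): raise NotImplementedError
          def get(self, name): raise NotImplementedError

      class UnixFileArchive(ActionInterface):
          def put(self, name, data):
              with open(name, "wb") as f:
                  f.write(data)
          def get(self, name):
              with open(name, "rb") as f:
                  return f.read()

      class TransferInterface:
          """Device-independent layer: callers store and retrieve
          data without knowing which Action sits underneath."""
          def __init__(self, action):
              self.action = action
          def archive(self, name, data):
              self.action.put(name, data)
          def retrieve(self, name):
              return self.action.get(name)

      store = TransferInterface(UnixFileArchive())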

  16. Military Standard Common APSE (Ada Programming Support Environment) Interface Set (CAIS).

    DTIC Science & Technology

    1985-01-01

    QUEUEASE. LAST-KEY (QUEENAME) . LASTREI.TIONI(QUEUE-NAME). FILE-NODE. PORN . ATTRIBUTTES. ACCESSCONTROL. LEVEL); CLOSE (QUEUE BASE); CLOSE(FILE NODE...PROPOSED XIIT-STD-C.4 31 J NNUAfY logs procedure zTERT (ITERATOR: out NODE ITERATON; MAMIE: NAME STRING.KIND: NODE KID : KEY : RELATIONSHIP KEY PA1TTE1 :R

  17. jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats.

    PubMed

    Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2012-03-01

    We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A standard format and a graphical user interface for spin system specification.

    PubMed

    Biternas, A G; Charnock, G T P; Kuprov, Ilya

    2014-03-01

    We introduce a simple and general XML format for spin system description that is the result of extensive consultations within the Magnetic Resonance community and unifies under one roof all major existing spin interaction specification conventions. The format is human-readable, easy to edit and easy to parse using standard XML libraries. We also describe a graphical user interface that was designed to facilitate construction and visualization of complicated spin systems. The interface is capable of generating input files for several popular spin dynamics simulation packages. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Transferable Output ASCII Data (TOAD) gateway: Version 1.0 user's guide

    NASA Technical Reports Server (NTRS)

    Bingel, Bradford D.

    1991-01-01

    The Transferable Output ASCII Data (TOAD) Gateway, release 1.0 is described. This is a software tool for converting tabular data from one format into another via the TOAD format. This initial release of the Gateway allows free data interchange among the following file formats: TOAD; Standard Interface File (SIF); Program to Optimize Simulated Trajectories (POST) input; Comma Separated Value (CSV); and a general free-form file format. As required, additional formats can be accommodated quickly and easily.
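
    Routing every conversion through one hub format is what lets additional formats be "accommodated quickly and easily": each new format needs only a reader into the hub and a writer out of it (2N converters) rather than a direct converter for every pair (N×(N−1)). A schematic Python illustration of the pattern, not the TOAD code itself:

      import csv

      # Spoke in: read any supported format into the hub representation
      # (here simply a list of rows).
      def csv_to_table(path):
          with open(path, newline="") as f:
              return list(csv.reader(f))

      # Spoke out: write the hub representation to any other format.
      def table_to_free_form(table, path):
          with open(path, "w") as f:
              for row in table:
                  f.write(" ".join(row) + "\n")

      table_to_free_form(csv_to_table("data.csv"), "data.txt")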

  20. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object oriented component object model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  1. LabVIEW Interface for PCI-SpaceWire Interface Card

    NASA Technical Reports Server (NTRS)

    Lux, James; Loya, Frank; Bachmann, Alex

    2005-01-01

    This software provides a LabView interface to the NT drivers for the PCISpaceWire card, which is a peripheral component interface (PCI) bus interface that conforms to the IEEE-1355/ SpaceWire standard. As SpaceWire grows in popularity, the ability to use SpaceWire links within LabVIEW will be important to electronic ground support equipment vendors. In addition, there is a need for a high-level LabVIEW interface to the low-level device- driver software supplied with the card. The LabVIEW virtual instrument (VI) provides graphical interfaces to support all (1) SpaceWire link functions, including message handling and routing; (2) monitoring as a passive tap using specialized hardware; and (3) low-level access to satellite mission-control subsystem functions. The software is supplied in a zip file that contains LabVIEW VI files, which provide various functions of the PCI-SpaceWire card, as well as higher-link-level functions. The VIs are suitably named according to the matching function names in the driver manual. A number of test programs also are provided to exercise various functions.

  2. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    NASA Astrophysics Data System (ADS)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because this data is available on fast digital storage and because it has been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce our costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically we have developed Web-based user interfaces that allow users to search, plot and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environmental Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer Pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS and individuals using DODS-enabled clients to use our data as if it were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic and data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded the metadata from the netCDF files into a mySQL database. The combination of the mySQL database and the controlled names makes it possible to automate the construction of user interfaces and standard format metadata descriptions, like Federal Geographic Data Committee (FGDC) and Directory Interchange Format (DIF). These standard descriptions also include an association between our controlled names and standard keywords such as those developed by the Global Change Master Directory (GCMD). This talk will give an overview of each of these technology and metadata standards as it applies to work at the Climate Diagnostics Center. The talk will also discuss the pros and cons of each approach and discuss areas for future development.
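
    The workflow hinges on netCDF files that carry their own COARDS-style metadata, which can then be harvested into a database. A small sketch with the netCDF4 Python package (the file name and attribute names are illustrative):

      from netCDF4 import Dataset  # third-party package: netCDF4

      ds = Dataset("air.mon.mean.nc")
      global_attrs = {name: getattr(ds, name) for name in ds.ncattrs()}
      variables = {name: var.dimensions
                   for name, var in ds.variables.items()}
      # Dictionaries like these are what a metadata database would hold
      # to drive search interfaces and FGDC/DIF descriptions.
      print(global_attrs.get("Conventions"), sorted(variables))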

  3. Towards a High-Performance and Robust Implementation of MPI-IO on Top of GPFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prost, J.P.; Treumann, R.; Blackmore, R.

    2000-01-11

    MPI-IO/GPFS is a prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS), with prototyped extensions, as the underlying file system. This paper describes the features of this prototype which support its high performance and robustness. The use of hints at the file system level and at the MPI-IO level allows tailoring the use of the file system to the application needs. Error handling in collective operations provides robust error reporting and deadlock prevention when errors are returned.

  4. proBAMconvert: A Conversion Tool for proBAM/proBed.

    PubMed

    Olexiouk, Volodimir; Menschaert, Gerben

    2017-07-07

    The introduction of new standard formats, proBAM and proBed, improves the integration of genomics and proteomics information, thus aiding proteogenomics applications. These novel formats enable peptide spectrum matches (PSM) to be stored, inspected, and analyzed within the context of the genome. However, an easy-to-use and transparent tool to convert mass spectrometry identification files to these new formats is indispensable. proBAMconvert enables the conversion of common identification file formats (mzIdentML, mzTab, and pepXML) to proBAM/proBed using an intuitive interface. Furthermore, ProBAMconvert enables information to be output both at the PSM and peptide levels and has a command line interface next to the graphical user interface. Detailed documentation and a completely worked-out tutorial is available at http://probam.biobix.be .

  5. DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.

    PubMed

    Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques

    2008-09-08

    Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.

  6. SnopViz, an interactive snow profile visualization tool

    NASA Astrophysics Data System (ADS)

    Fierz, Charles; Egger, Thomas; Gerber, Matthias; Bavay, Mathias; Techel, Frank

    2016-04-01

    SnopViz is a visualization tool for both simulation outputs of the snow-cover model SNOWPACK and observed snow profiles. It has been designed to fulfil the needs of operational services (Swiss Avalanche Warning Service, Avalanche Canada) as well as offer the flexibility required to satisfy the specific needs of researchers. This JavaScript application runs on any modern browser and does not require an active Internet connection. The open source code is available for download from models.slf.ch where examples can also be run. Both the SnopViz library and the SnopViz User Interface will become a full replacement of the current research visualization tool SN_GUI for SNOWPACK. The SnopViz library is a stand-alone application that parses the provided input files, for example, a single snow profile (CAAML file format) or multiple snow profiles as output by SNOWPACK (PRO file format). A plugin architecture allows for handling JSON objects (JavaScript Object Notation) as well and plugins for other file formats may be added easily. The outputs are provided either as vector graphics (SVG) or JSON objects. The SnopViz User Interface (UI) is a browser based stand-alone interface. It runs in every modern browser, including IE, and allows user interaction with the graphs. SVG, the XML based standard for vector graphics, was chosen because of its easy interaction with JS and a good software support (Adobe Illustrator, Inkscape) to manipulate graphs outside SnopViz for publication purposes. SnopViz provides new visualization for SNOWPACK timeline output as well as time series input and output. The actual output format for SNOWPACK timelines was retained while time series are read from SMET files, a file format used in conjunction with the open source data handling code MeteoIO. Finally, SnopViz is able to render single snow profiles, either observed or modelled, that are provided as CAAML-file. This file format (caaml.org/Schemas/V5.0/Profiles/SnowProfileIACS) is an international standard to exchange snow profile data. It is supported by the International Association of Cryospheric Sciences (IACS) and was developed in collaboration with practitioners (Avalanche Canada).

  7. JPL Space Telecommunications Radio System Operating Environment

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Lang, Minh; Peters, Kenneth J.; Taylor, Gregory H.; Duncan, Courtney B.; Orozco, David S.; Stern, Ryan A.; Ahten, Earl R.; Girard, Mike

    2013-01-01

    A flight-qualified implementation of a Software Defined Radio (SDR) Operating Environment for the JPL-SDR built for the CoNNeCT Project has been developed. It is compliant with the NASA Space Telecommunications Radio System (STRS) Architecture Standard, and provides the software infrastructure for STRS compliant waveform applications. This software provides a standards-compliant abstracted view of the JPL-SDR hardware platform. It uses industry standard POSIX interfaces for most functions, as well as exposing the STRS API (Application Programming Interface) required by the standard. This software includes a standardized interface for IP components instantiated within a Xilinx FPGA (Field Programmable Gate Array). The software provides a standardized abstracted interface to platform resources such as data converters, file system, etc., which can be used by STRS standards conformant waveform applications. It provides a generic SDR operating environment with a much smaller resource footprint than similar products such as SCA (Software Communications Architecture) compliant implementations, or the DoD Joint Tactical Radio Systems (JTRS).

  8. bioalcidae, samjs and vcffilterjs: object-oriented formatters and filters for bioinformatics files.

    PubMed

    Lindenbaum, Pierre; Redon, Richard

    2018-04-01

    Reformatting and filtering bioinformatics files are common tasks for bioinformaticians. Standard Linux tools and specific programs are usually used to perform such tasks, but there is still a gap between using these tools and the programming interface of some existing libraries. In this study, we developed a set of tools, namely bioalcidae, samjs and vcffilterjs, that reformat or filter files using a JavaScript engine or a pure Java expression, taking advantage of the Java API for high-throughput sequencing data (htsjdk). https://github.com/lindenb/jvarkit. pierre.lindenbaum@univ-nantes.fr.

  9. Nemesis I: Parallel Enhancements to ExodusII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennigan, Gary L.; John, Matthew S.; Shadid, John N.

    2006-03-28

    NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, existing software that reads EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.

  10. Pycellerator: an arrow-based reaction-like modelling language for biological simulations.

    PubMed

    Shapiro, Bruce E; Mjolsness, Eric

    2016-02-15

    We introduce Pycellerator, a Python library for reading Cellerator arrow notation from standard text files, conversion to differential equations, generating stand-alone Python solvers, and optionally running and plotting the solutions. All of the original Cellerator arrows, which represent reactions ranging from mass action, Michaelis-Menten-Henri (MMH) and Gene-Regulation (GRN) to Monod-Wyman-Changeux (MWC), user defined reactions and enzymatic expansions (KMech), were previously represented with the Mathematica extended character set. These are now typed as reaction-like commands in ASCII text files that are read by Pycellerator, which includes a Python command line interface (CLI), a Python application programming interface (API) and an iPython notebook interface. Cellerator reaction arrows are now input in text files. The arrows are parsed by Pycellerator and translated into differential equations in Python, and Python code is automatically generated to solve the system. Time courses are produced by executing the auto-generated Python code. Users have full freedom to modify the solver and utilize the complete set of standard Python tools. The new libraries are completely independent of the old Cellerator software and do not require Mathematica. All software is available (GPL) from the github repository at https://github.com/biomathman/pycellerator/releases. Details, including installation instructions and a glossary of acronyms and terms, are given in the Supplementary information. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
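
    To make the arrow notation concrete, here is a sketch of parsing one simplified mass-action arrow into reactants, products, and a rate constant (the real Pycellerator grammar is considerably richer; this syntax is abbreviated for illustration):

      import re

      def parse_mass_action(reaction):
          """Parse an arrow such as '[A + B -> C, k1]'."""
          m = re.match(r"\[\s*(.+?)\s*->\s*(.+?)\s*,\s*(\w+)\s*\]",
                       reaction)
          reactants, products, rate = m.groups()
          return ([s.strip() for s in reactants.split("+")],
                  [s.strip() for s in products.split("+")],
                  rate)

      # Mass action gives d[C]/dt = k1*[A]*[B] for this reaction.
      print(parse_mass_action("[A + B -> C, k1]"))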

  11. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  12. NEAMS-IPL MOOSE Midyear Framework Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Permann, Cody; Alger, Brian; Peterson, John

    The MOOSE Framework is a modular pluggable framework for building complex simulations. The ability to add new objects with custom syntax is a core capability that makes MOOSE a powerful platform for coupling multiple applications together within a single environment. The creation of a new, more standardized JSON syntax output improves the external interfaces for generating graphical components or for validating input file syntax. The design of this interface and the requirements it satisfies are covered in this short report.

  13. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with 'orphan' files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.

  14. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  15. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.

  16. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  17. 77 FR 40338 - Announcing Revised Draft Federal Information Processing Standard (FIPS) 201-2, Personal Identity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... may be sent to: Chief, Computer Security Division, Information Technology Laboratory, ATTN: Comments... introduces the concept of a virtual contact interface, over which all functionality of the PIV Card is... Laboratory Programs. [FR Doc. 2012-16725 Filed 7-6-12; 8:45 am] BILLING CODE 3510-13-P ...

  18. CBrowse: a SAM/BAM-based contig browser for transcriptome assembly visualization and analysis.

    PubMed

    Li, Pei; Ji, Guoli; Dong, Min; Schmidt, Emily; Lenox, Douglas; Chen, Liangliang; Liu, Qi; Liu, Lin; Zhang, Jie; Liang, Chun

    2012-09-15

    To address the impending need for exploring rapidly increased transcriptomics data generated for non-model organisms, we developed CBrowse, an AJAX-based web browser for visualizing and analyzing transcriptome assemblies and contigs. Designed in a standard three-tier architecture with a data pre-processing pipeline, CBrowse is essentially a Rich Internet Application that offers many seamlessly integrated web interfaces and allows users to navigate, sort, filter, search and visualize data smoothly. The pre-processing pipeline takes the contig sequence file in FASTA format and its relevant SAM/BAM file as the input; detects putative polymorphisms, simple sequence repeats and sequencing errors in contigs and generates image, JSON and database-compatible CSV text files that are directly utilized by different web interfaces. CBrowse is a generic visualization and analysis tool that facilitates close examination of assembly quality, genetic polymorphisms, sequence repeats and/or sequencing errors in transcriptome sequencing projects. CBrowse is distributed under the GNU General Public License, available at http://bioinfolab.muohio.edu/CBrowse/ liangc@muohio.edu or liangc.mu@gmail.com; glji@xmu.edu.cn Supplementary data are available at Bioinformatics online.

  19. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated to the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full text search capabilities within a Google like interface, (c) fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group defined and shared metadata for data repository files, (e) user, group, repository, and web 2.0 based global positioning with additional service capabilities are currently available. The SNS based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best practice implementations are presented.

  20. Interactive digital signal processor

    NASA Technical Reports Server (NTRS)

    Mish, W. H.; Wenger, R. M.; Behannon, K. W.; Byrnes, J. B.

    1982-01-01

    The Interactive Digital Signal Processor (IDSP) is examined. It consists of a set of time series analysis Operators, each of which operates on an input file to produce an output file. The operators can be executed in any order that makes sense and recursively, if desired. The operators are the various algorithms used in digital time series analysis work. User written operators can be easily interfaced to the system. The system can be operated both interactively and in batch mode. In IDSP a file can consist of up to n (currently n=8) simultaneous time series. IDSP currently includes over thirty standard operators that range from Fourier transform operations, design and application of digital filters, eigenvalue analysis, to operators that provide graphical output, allow batch operation, editing and display information.

  1. An Overview of ARL’s Multimodal Signatures Database and Web Interface

    DTIC Science & Technology

    2007-12-01

    ActiveX components, which hindered distribution due to license agreements and run-time license software to use such components. g. Proprietary...Overview The database consists of multimodal signature data files in the HDF5 format. Generally, each signature file contains all the ancillary...only contains information in the database, Web interface, and signature files that is releasable to the public. The Web interface consists of static

  2. Development of an e-VLBI Data Transport Software Suite with VDIF

    NASA Technical Reports Server (NTRS)

    Sekido, Mamoru; Takefuji, Kazuhiro; Kimura, Moritaka; Hobiger, Thomas; Kokado, Kensuke; Nozawa, Kentarou; Kurihara, Shinobu; Shinno, Takuya; Takahashi, Fujinobu

    2010-01-01

    We have developed a software library (KVTP-lib) for VLBI data transmission over the network with the VDIF (VLBI Data Interchange Format), which is the newly proposed standard VLBI data format designed for electronic data transfer over the network. The software package keeps the application layer (VDIF frame) and the transmission layer separate, so that each layer can be developed efficiently. The real-time VLBI data transmission tool sudp-send is an application tool based on the KVTP-lib library. sudp-send captures the VLBI data stream from the VSI-H interface with the K5/VSI PC-board and writes the data to file in standard Linux file format or transmits it to the network using the simple-UDP (SUDP) protocol. Another tool, sudp-recv, receives the data stream from the network and writes the data to file in a specific VLBI format (K5/VSSP, VDIF, or Mark 5B). This software system has been implemented on the Wettzell Tsukuba baseline; evaluation before operational employment is under way.
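
    The transport idea is frame-at-a-time transmission over UDP. Below is a schematic Python sender under stated assumptions: the real VDIF header is a multi-word structure, the two-field header here is a stand-in, and sudp-send's actual wire format is not described in the record.

      import socket
      import struct
      import time

      def send_frames(host, port, frames):
          """Send fixed-length data frames over UDP, each preceded by
          a simplified sequence-number/timestamp header."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          for seq, payload in enumerate(frames):
              header = struct.pack("!II", seq, int(time.time()))
              sock.sendto(header + payload, (host, port))
          sock.close()

      send_frames("127.0.0.1", 50000, [b"\x00" * 1024] * 8)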

  3. Magnetic Field Experiment Data Analysis System

    NASA Technical Reports Server (NTRS)

    Holland, D. B.; Zanetti, L. J.; Suther, L. L.; Potemra, T. A.; Anderson, B. J.

    1995-01-01

    The Johns Hopkins University Applied Physics Laboratory (JHU/APL) Magnetic Field Experiment Data Analysis System (MFEDAS) has been developed to process and analyze satellite magnetic field experiment data from the TRIAD, MAGSAT, AMPTE/CCE, Viking, Polar BEAR, DMSP, HILAT, UARS, and Freja satellites. The MFEDAS provides extensive data management and analysis capabilities. The system is based on standard data structures and a standard user interface. The MFEDAS has two major elements: (1) a set of satellite unique telemetry processing programs for uniform and rapid conversion of the raw data to a standard format and (2) the program Magplot which has file handling, data analysis, and data display sections. This system is an example of software reuse, allowing new data sets and software extensions to be added in a cost effective and timely manner. Future additions to the system will include the addition of standard format file import routines, modification of the display routines to use a commercial graphics package based on X-Window protocols, and a generic utility for telemetry data access and conversion.

  4. The HDF Product Designer - Interoperability in the First Mile

    NASA Astrophysics Data System (ADS)

    Lee, H.; Jelenak, A.; Habermann, T.

    2014-12-01

    Interoperable data have been a long-time goal in many scientific communities. The recent growth in analysis, visualization, and mash-up applications that expect data stored in a standardized manner has brought the interoperability issue to the fore. On the other hand, producing interoperable data is often regarded as a sideline task in a typical research team, for which resources are not readily available. The HDF Group is developing a software tool aimed at lessening the burden of creating data in standards-compliant, interoperable HDF5 files. The tool, named HDF Product Designer, lowers the threshold needed to design such files by providing a user interface that combines the rich HDF5 feature set with applicable metadata conventions. Users can quickly devise new HDF5 files while seamlessly incorporating the latest best practices and conventions from their community. That is what the term interoperability in the first mile means: enabling generation of interoperable data in HDF5 files from the onset of their production. The tool also incorporates collaborative features, allowing a team approach to file design as well as easy transfer of best practices as they are developed. The current state of the tool and the plans for future development will be presented. Constructive input from interested parties is always welcome.
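
    For illustration, a hand-written h5py sketch of the kind of file the tool is meant to generate; the CF-style attribute names here are one example of a community convention, not the tool's actual output.

        # A minimal h5py sketch of a standards-oriented HDF5 file: datasets
        # carry convention metadata (CF-style attribute names as an example).
        import h5py
        import numpy as np

        with h5py.File("product.h5", "w") as f:
            f.attrs["Conventions"] = "CF-1.6"
            dset = f.create_dataset("sea_surface_temperature",
                                    data=np.zeros((180, 360), dtype="f4"))
            dset.attrs["units"] = "kelvin"
            dset.attrs["standard_name"] = "sea_surface_temperature"
            dset.attrs["_FillValue"] = np.float32(-9999.0)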

  5. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not found in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors used formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation distributed with the code, to assist in developing the input for the Input Processor.

  6. A program to generate a Fortran interface for a C++ library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Lee

    Shroud is a utility to create a Fortran and C interface for a C++ library. An existing C++ library API is described in an input file. Shroud reads the file and creates source files which can be compiled to provide a Fortran API for the library.

  7. Input data requirements for special processors in the computation system containing the VENTURE neutronics code. [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1979-07-01

    User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user-oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.

  8. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to two-dimensional sets of points stored in bitmap files. The application has been extended to work with comma-separated-values files and three-dimensional images as well.
    New version program summary
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated-values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added to provide a graphical user interface for the new functionality (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: To a first approximation, the algorithm is linear.
    References:
    [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001.
    [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
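
    A minimal Python sketch of the box-counting method named above (the published application itself is Visual Basic): the fractal dimension is estimated as the slope of log(occupied box count) against log(number of boxes per axis).

        # Box-counting estimate of the fractal dimension of a point set.
        import numpy as np

        def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
            points = np.asarray(points, dtype=float)
            origin = points.min(axis=0)
            span = points.max(axis=0) - origin
            counts = []
            for n in sizes:                       # n boxes along each axis
                edge = np.where(span > 0, span / n, 1.0)
                idx = np.clip(np.floor((points - origin) / edge).astype(int),
                              0, n - 1)
                counts.append(len({tuple(i) for i in idx}))  # occupied boxes
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return slope

        # A filled unit square should give a dimension close to 2.
        rng = np.random.default_rng(0)
        print(box_counting_dimension(rng.random((10000, 2))))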

  9. IGES transformer and NURBS in grid generation

    NASA Technical Reports Server (NTRS)

    Yu, Tzu-Yi; Soni, Bharat K.

    1993-01-01

    In the fields of grid generation and CAD/CAM there are numerous geometry output formats, which require the designer to spend a great deal of time manipulating geometrical entities in order to achieve a useful sculptured geometrical description for grid generation. In this process there is also a danger of losing fidelity of the geometry under consideration. This stresses the importance of a standard geometry definition as the communication link between varying CAD/CAM systems and grid systems. The IGES (Initial Graphics Exchange Specification) file is a widely used communication medium between CAD/CAM and analysis tools. Scientists at the NASA research centers - including NASA Ames, NASA Langley, NASA Lewis, and NASA Marshall - recognized this importance, and in 1992 they formed the 'NASA-IGES' committee, which defines a subset of the IGES standard. This committee stresses the importance of, and encourages the CFD community to use, the standard IGES file as the interface between CAD/CAM and CFD analysis. Two of the IGES entities - the NURBS curve (Entity 126) and the NURBS surface (Entity 128) - have many useful geometric properties, such as the convex hull property, local control, and affine invariance, and widely utilized analytical geometries can be accurately represented using NURBS. This is important in today's grid generation tools because of the emphasis on interactive design. To support geometry transformation between CAD/CAM systems and the grid generation field, CAGI (Computer Aided Geometry Design) was developed, which includes geometry transformation, geometry manipulation, and geometry generation as well as the user interface. This paper presents the successful development of an IGES file transformer and the application of the NURBS definition in grid generation.
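
    As a brief illustration of why the NURBS entities are attractive, a sketch evaluating a point on a rational B-spline curve with the Cox-de Boor recursion; the quarter-circle data is a standard textbook example and is not taken from the paper.

        # Evaluate a NURBS curve point: B-spline basis functions (Cox-de Boor)
        # combined with control points and weights into a rational curve.
        import math

        def bspline_basis(i, p, u, knots):
            if p == 0:
                return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
            left = right = 0.0
            if knots[i + p] != knots[i]:
                left = ((u - knots[i]) / (knots[i + p] - knots[i])
                        * bspline_basis(i, p - 1, u, knots))
            if knots[i + p + 1] != knots[i + 1]:
                right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                         * bspline_basis(i + 1, p - 1, u, knots))
            return left + right

        def nurbs_point(u, degree, knots, ctrl, weights):
            num, den = [0.0] * len(ctrl[0]), 0.0
            for i, (pt, w) in enumerate(zip(ctrl, weights)):
                b = bspline_basis(i, degree, u, knots) * w
                num = [n + b * c for n, c in zip(num, pt)]
                den += b
            return [n / den for n in num]

        # Quadratic NURBS quarter circle; at u=0.5 this prints ~(0.7071, 0.7071).
        knots = [0, 0, 0, 1, 1, 1]
        ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
        print(nurbs_point(0.5, 2, knots, ctrl, [1.0, math.sqrt(2) / 2, 1.0]))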

  10. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

    Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly Internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  11. A tutorial for software development in quantitative proteomics using PSI standard formats☆

    PubMed Central

    Gonzalez-Galarza, Faviel F.; Qi, Da; Fan, Jun; Bessant, Conrad; Jones, Andrew R.

    2014-01-01

    The Human Proteome Organisation — Proteomics Standards Initiative (HUPO-PSI) has been working for ten years on the development of standardised formats that facilitate data sharing and public database deposition. In this article, we review three HUPO-PSI data standards — mzML, mzIdentML and mzQuantML, which can be used to design a complete quantitative analysis pipeline in mass spectrometry (MS)-based proteomics. In this tutorial, we briefly describe the content of each data model, sufficient for bioinformaticians to devise proteomics software. We also provide guidance on the use of recently released application programming interfaces (APIs) developed in Java for each of these standards, which makes it straightforward to read and write files of any size. We have produced a set of example Java classes and a basic graphical user interface to demonstrate how to use the most important parts of the PSI standards, available from http://code.google.com/p/psi-standard-formats-tutorial. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23584085

  12. The Evolvable Advanced Multi-Mission Operations System (AMMOS): Making Systems Interoperable

    NASA Technical Reports Server (NTRS)

    Ko, Adans Y.; Maldague, Pierre F.; Bui, Tung; Lam, Doris T.; McKinney, John C.

    2010-01-01

    The Advanced Multi-Mission Operations System (AMMOS) provides a common Mission Operation System (MOS) infrastructure to NASA deep space missions. The evolution of AMMOS has been driven by two factors: increasingly challenging requirements from space missions, and the emergence of new IT technology. The work described in this paper focuses on three key tasks related to IT technology requirements: first, to eliminate duplicate functionality; second, to promote the use of loosely coupled application programming interfaces, text based file interfaces, web-based frameworks and integrated Graphical User Interfaces (GUI) to connect users, data, and core functionality; and third, to build, develop, and deploy AMMOS services that are reusable, agile, adaptive to project MOS configurations, and responsive to industrially endorsed information technology standards.

  13. Radar Unix: a complete package for GPR data processing

    NASA Astrophysics Data System (ADS)

    Grandjean, Gilles; Durand, Herve

    1999-03-01

    A complete package for ground-penetrating radar data interpretation, including data processing, forward modeling, and a case-history database, is presented. Running on a Unix operating system, its architecture consists of a graphical user interface that generates batch files transmitted to a library of processing routines. This design allows better software maintenance and lets the user run processing or modeling batch files independently, deferred in time. The case-history database consists of a hypertext document that can be consulted with a standard HTML browser. All the software specifications are presented through a realistic example.

  14. MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices

    NASA Astrophysics Data System (ADS)

    Shasharina, Svetlana G.; Cary, John R.

    1997-05-01

    MAPA (Modular Accelerator Physics Analysis) is an object-oriented application for accelerator design and analysis with a Motif-based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. It provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of lifetime plots and surfaces of section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files, and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.

  15. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  16. Volume serving and media management in a networked, distributed client/server environment

    NASA Technical Reports Server (NTRS)

    Herring, Ralph H.; Tefend, Linda L.

    1993-01-01

    The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.

  17. MarFS, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inman, Jeffrey; Bonnie, David; Broomfield, Matthew

    There is a sea (mar is Spanish for sea) of data out there that needs to be handled efficiently. Object stores are filling the role of managing large amounts of data efficiently. However, in many cases, and in our HPC case in particular, we need a traditional file (POSIX) interface to this data, as HPC I/O models have not moved to object interfaces such as Amazon S3, CDMI, etc. Eventually object store providers may deliver file interfaces to their object stores, but at this point those interfaces are not ready to do the job that we need done. MarFS will glue together two existing scalable components: a file system's scalable metadata component, which provides the file interface, and existing scalable object stores (from one or more providers). There will be utilities to do work that is not critical to be done in real time so that MarFS can manage the space used by objects and allocated to individual users.
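
    An illustrative toy (not MarFS code) of the gluing idea, under the stated assumption that paths live in a metadata layer while file contents live under keys derived from those paths:

        # Toy model: a POSIX-like path namespace kept in a metadata layer,
        # with file contents stored as objects under derived keys.
        import hashlib

        class ToyMarFS:
            def __init__(self):
                self.metadata = {}      # path -> object key (file system side)
                self.object_store = {}  # object key -> bytes (object side)

            def write(self, path, data):
                key = hashlib.sha256(path.encode()).hexdigest()
                self.object_store[key] = data   # scalable object component
                self.metadata[path] = key       # scalable metadata component

            def read(self, path):
                return self.object_store[self.metadata[path]]

        fs = ToyMarFS()
        fs.write("/campaign/run42/output.dat", b"results")
        print(fs.read("/campaign/run42/output.dat"))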

  18. NLEdit: A generic graphical user interface for Fortran programs

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.

    1994-01-01

    NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.

  19. Bringing Control System User Interfaces to the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xihui; Kasemir, Kay

    With the evolution of web-based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS, and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.
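
    A generic sketch of the WebSocket pattern described above, using the third-party Python "websockets" package; the URL and the subscribe-message format are invented for illustration and are not the actual WebPDA protocol.

        # Generic WebSocket data-access client: subscribe to a named process
        # variable, then print the value updates the server pushes back.
        import asyncio
        import json
        import websockets  # third-party package

        async def subscribe(url, pv_name):
            async with websockets.connect(url) as ws:
                await ws.send(json.dumps({"type": "subscribe", "pv": pv_name}))
                async for message in ws:          # server pushes updates
                    print(json.loads(message))

        # Would run against a compatible server (URL is hypothetical):
        # asyncio.run(subscribe("ws://localhost:8080/pda", "temperature:water"))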

  20. An optimal user-interface for EPIMS database conversions and SSQ 25002 EEE parts screening

    NASA Technical Reports Server (NTRS)

    Watson, John C.

    1996-01-01

    The Electrical, Electronic, and Electromechanical (EEE) Parts Information Management System (EPIMS) database was selected by the International Space Station Parts Control Board for providing parts information to NASA managers and contractors. Parts data is transferred to the EPIMS database by converting parts list data to the EPIMS Data Exchange File Format. In general, parts list information received from contractors and suppliers does not convert directly into the EPIMS Data Exchange File Format. Often parts lists use different variable and record field assignments, and many of the EPIMS variables are not defined in the parts lists received. The objective of this work was to develop an automated system for translating parts lists into the EPIMS Data Exchange File Format for upload into the EPIMS database. Once EEE parts information has been transferred to the EPIMS database, it is necessary to screen parts data in accordance with the provisions of the SSQ 25002 Supplemental List of Qualified Electrical, Electronic, and Electromechanical Parts, Manufacturers, and Laboratories (QEPM&L). The SSQ 25002 standards are used to identify parts which satisfy the requirements for spacecraft applications. An additional objective of this work was to develop an automated system to screen EEE parts information against the SSQ 25002 and inform managers of the qualification status of parts used in spacecraft applications. The EPIMS Database Conversion and SSQ 25002 User Interfaces are designed to operate through the World Wide Web (WWW)/Internet to provide accessibility to NASA managers and contractors.
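
    The translation step can be sketched as a field-name mapping onto a common record layout; the contractor field names and the output layout below are hypothetical, not the real EPIMS Data Exchange File Format.

        # Map heterogeneous contractor field names onto one record layout
        # before export (field names here are invented for illustration).
        FIELD_MAP = {"PART_NO": "part_number", "MFR": "manufacturer",
                     "PartNumber": "part_number", "Mfg": "manufacturer"}

        def translate(rows):
            for row in rows:
                out = {"part_number": "", "manufacturer": ""}
                for src, value in row.items():
                    if src in FIELD_MAP:
                        out[FIELD_MAP[src]] = value.strip()
                yield out

        rows = [{"PART_NO": "M39014/01-1234 ", "MFR": "KEMET"},
                {"PartNumber": "RNC55H1001FS", "Mfg": "Vishay"}]
        for record in translate(rows):
            print(record)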

  1. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    USGS Publications Warehouse

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) the user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
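
    A bare-bones sketch of the run-manager pattern (the port and message format are hypothetical; GENIE's actual protocol is described in the report): an executor connects over TCP/IP and is handed the next queued model run.

        # Minimal TCP/IP work queue: a manager thread hands out queued model
        # runs to whichever executor connects next.
        import socket
        import threading
        import time

        RUNS = ["model run 1", "model run 2", "model run 3"]

        def manager(host="127.0.0.1", port=5757):
            srv = socket.create_server((host, port))
            for run in RUNS:
                conn, _ = srv.accept()          # a run executor connects
                conn.sendall(run.encode())      # hand it the next model run
                conn.close()
            srv.close()

        threading.Thread(target=manager, daemon=True).start()
        time.sleep(0.2)                         # let the manager start listening
        for _ in RUNS:                          # three "executors" fetch work
            with socket.create_connection(("127.0.0.1", 5757)) as c:
                print(c.recv(1024).decode())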

  2. A CLOUDY/XSPEC Interface

    NASA Technical Reports Server (NTRS)

    Porter, R. L.; Ferland, G. J.; Kraemer, S. B.; Armentrout, B. K.; Arnaud, K. A.; Turner, T. J.

    2007-01-01

    We discuss new functionality of the spectral simulation code CLOUDY which allows the user to calculate grids with one or more initial parameters varied and formats the predicted spectra in the standard FITS format. These files can then be imported into the X-ray spectral analysis software XSPEC and used as theoretical models for observations. We present and verify a test case. Finally, we consider a few observations and discuss our results.

  3. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not found in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors used formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation distributed with the code, to assist in developing the input for the Input Processor.

  4. SedCT: MATLAB™ tools for standardized and quantitative processing of sediment core computed tomography (CT) data collected using a medical CT scanner

    NASA Astrophysics Data System (ADS)

    Reilly, B. T.; Stoner, J. S.; Wiest, J.

    2017-08-01

    Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
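
    One step SedCT performs can be sketched with pydicom (a Python reading; SedCT itself is MATLAB): convert a slice's stored pixel values to Hounsfield units with the DICOM rescale tags, then take a summary value. The file name is a placeholder.

        # Convert a CT slice to Hounsfield units using the DICOM rescale tags.
        import pydicom

        ds = pydicom.dcmread("core_slice_0001.dcm")  # placeholder file name
        hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
        print("mean HU of slice:", hu.mean())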

  5. ISTP CDF Skeleton Editor

    NASA Technical Reports Server (NTRS)

    Chimiak, Reine; Harris, Bernard; Williams, Phillip

    2013-01-01

    Basic Common Data Format (CDF) tools (e.g., cdfedit and skeletoncdf) provide no specific support for creating International Solar-Terrestrial Physics/Space Physics Data Facility (ISTP/SPDF) standard files. While it is possible for someone familiar with the ISTP/SPDF metadata guidelines to create compliant files using just the basic tools, the process is error-prone and unreasonable for someone without ISTP/SPDF expertise. The key problem is the lack of a tool with specific support for creating files that comply with the ISTP/SPDF guidelines. The SPDF ISTP CDF Skeleton Editor addresses this gap: it is a cross-platform, Java-based GUI program that allows someone with only a basic understanding of the ISTP/SPDF guidelines to easily create and edit guideline-compliant skeleton CDF files. The editor consists of the following components: a Swing-based Java GUI program, a JavaHelp-based manual/tutorial, image/icon files, and an HTML Web page for distribution. It is available as a traditional Java desktop application as well as a Java Network Launching Protocol (JNLP) application. Once started, it functions like a typical Java GUI file editor application for creating/editing application-unique files.

  6. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution, permitting both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput matching the emerging high-performance disk technologies, such as RAID, parallel-head transfer devices, and software striping; support for standard network and file system applications using socket-based application program interfaces, such as FTP, rcp, rdump, etc.; access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  7. NJE; VAX-VMS IBM NJE network protocol emulator. [DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Raffenetti, C.

    NJE is communications software developed to enable a VAX VMS system to participate as an end node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.

  8. Visual Basic VPython Interface: Charged Particle in a Magnetic Field

    NASA Astrophysics Data System (ADS)

    Prayaga, Chandra

    2006-12-01

    A simple Visual Basic (VB) to VPython interface is described and illustrated with the example of a charged particle in a magnetic field. This interface allows data to be passed to Python through a text file read by Python. The first component of the interface is a user-friendly data entry screen designed in VB, in which the user can input values of the charge, mass, initial position, and initial velocity of the particle, and the magnetic field. Next, a command button is coded to write these values to a text file. Another command button starts the VPython program, which reads the data from the text file, numerically solves the equation of motion, and provides the 3D graphics animation. Students can use the interface to run the program several times with different data and observe changes in the motion.
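
    A plain-Python sketch of the physics on the VPython side, assuming representative proton parameters (the actual program reads its values from the VB-written text file and animates with VPython):

        # Integrate the Lorentz force m dv/dt = q (v x B) with an Euler step.
        import numpy as np

        q, m = 1.6e-19, 1.67e-27                 # proton charge (C) and mass (kg)
        B = np.array([0.0, 0.0, 1.0])            # uniform field along z (T)
        r = np.array([0.0, 0.0, 0.0])            # initial position (m)
        v = np.array([1.0e5, 0.0, 0.0])          # initial velocity (m/s)
        dt = 1.0e-10                             # time step (s)

        trajectory = []
        for _ in range(1000):
            v = v + (q / m) * np.cross(v, B) * dt
            r = r + v * dt
            trajectory.append(r.copy())
        print("final position:", trajectory[-1])  # circular motion in x-y plane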

  9. "WWW.MDTF.ORG": a World Wide Web forum for developing open-architecture, freely distributed, digital teaching file software by participant consensus.

    PubMed

    Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R

    2001-06-01

    To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process include standard web browsers, web servers, forum software, and custom additions to the forum software that potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that incorporates function and interface design decisions from community participation in the website forums.

  10. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support rapid development of Grid/Cloud-aware applications, we have developed an API that abstracts distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures such as Grid, Cloud, and local computing resources. The Universal Grid API (UGAPI), a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.

  11. PCACE- PERSONAL COMPUTER AIDED CABLING ENGINEERING

    NASA Technical Reports Server (NTRS)

    Billitti, J. W.

    1994-01-01

    A computerized interactive harness engineering program has been developed to provide an inexpensive, interactive system which is designed for learning and using an engineering approach to interconnection systems. PCACE is basically a database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. This directly emulates the typical manual engineering methods of data handling, thus making the user interface to the program very natural. Data files can be created, viewed, manipulated, or printed in real time. The printed output is in a form ready for use by fabrication and engineering personnel. PCACE also contains a wide variety of error-checking routines, including connector contact checks during hardcopy generation. The user may edit existing harness data files or create new files. In creating a new file, the user is given the opportunity to insert all the connector and harness boiler plate data which would be part of a normal connector wiring diagram. This data includes the following: 1) connector reference designator, 2) connector part number, 3) backshell part number, 4) cable reference designator, 5) cable part number, 6) drawing revision, 7) relevant notes, 8) standard wire gauge, and 9) maximum circuit count. Any item except the maximum circuit count may be left blank, and any item may be changed at a later time. Once a file is created and organized, the user is directed to the main menu and has access to the file boiler plate, the circuit wiring records, and the wiring records index list. The organization of a file is such that record zero contains the connector/cable boiler plate, and all other records contain circuit wiring data. Each wiring record will handle a circuit with as many as nine wires in the interface. The record stores the circuit name and wire count and the following data for each wire: 1) wire identifier, 2) contact, 3) splice, 4) wire gauge if different from standard, 5) wire/group type, 6) wire destination, and 7) note number. The PCACE record structure allows for a wide variety of wiring forms using splices and shields, yet retains sufficient structure to maintain ease of use. PCACE is written in TURBO Pascal 3.0 and has been implemented on IBM PC, XT, and AT systems under DOS 3.1 with a memory of 512K of 8 bit bytes, two floppy disk drives, an RGB monitor, and a printer with ASCII control characters. PCACE was originally developed in 1983, and the IBM version was released in 1986.
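
    PCACE's record layout, restated as a hedged modern sketch; the field names follow the description above, but the types and the class shape are assumptions, not the original Pascal structures.

        # Circuit wiring record: a circuit name plus up to nine wires, each
        # carrying the seven per-wire fields listed in the abstract.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Wire:
            identifier: str
            contact: str
            splice: Optional[str] = None
            gauge: Optional[str] = None        # only if different from standard
            wire_type: Optional[str] = None
            destination: Optional[str] = None
            note_number: Optional[int] = None

        @dataclass
        class CircuitRecord:
            circuit_name: str
            wires: List[Wire] = field(default_factory=list)

            def add_wire(self, wire: Wire):
                if len(self.wires) >= 9:       # a record holds at most nine wires
                    raise ValueError("a circuit record holds at most nine wires")
                self.wires.append(wire)

        rec = CircuitRecord("28V POWER")
        rec.add_wire(Wire(identifier="W101", contact="A1", destination="P2-3"))
        print(rec)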

  12. BladeCAD: An Interactive Geometric Design Tool for Turbomachinery Blades

    NASA Technical Reports Server (NTRS)

    Miller, Perry L., IV; Oliver, James H.; Miller, David P.; Tweedt, Daniel L.

    1996-01-01

    A new methodology for interactive design of turbomachinery blades is presented. Software implementation of the methods provides a user interface that is intuitive to aero-designers while operating with standardized geometric forms. The primary contribution is that blade sections may be defined with respect to general surfaces of revolution, which may be defined to represent the path of fluid flow through the turbomachine. The completed blade design is represented as a non-uniform rational B-spline (NURBS) surface and is written to a standard IGES file, which is portable to most design, analysis, and manufacturing applications.

  13. An expert system shell for inferring vegetation characteristics: Changes to the historical cover type database (Task F)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    All the options in the NASA VEGetation Workbench (VEG) make use of a database of historical cover types. This database contains results from experiments by scientists on a wide variety of different cover types. The learning system uses the database to provide positive and negative training examples of classes that enable it to learn distinguishing features between classes of vegetation. All the other VEG options use the database to estimate the error bounds involved in the results obtained when various analysis techniques are applied to the sample of cover type data that is being studied. In the previous version of VEG, the historical cover type database was stored as part of the VEG knowledge base. This database was removed from the knowledge base and is now stored as a series of flat files external to VEG. An interface between VEG and these files was provided. The interface allows the user to select which files of historical data to use. The files are then read, and the data are stored in Knowledge Engineering Environment (KEE) units using the same organization of units as in the previous version of VEG. The interface also allows the user to delete some or all of the historical database units from VEG and load new historical data from a file. This report summarizes the use of the historical cover type database in VEG, describes the new interface to the files containing the historical data, and describes the minor changes made to VEG to enable the externally stored database to be used. Test runs verifying the operation of the new interface, and the operation of VEG using historical data loaded from external files, are described. Task F was completed. A Sun cartridge tape containing the KEE and Common Lisp code for the new interface and the modified version of the VEG knowledge base was delivered to the NASA GSFC technical representative.

  14. CGNS Mid-Level Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Poirier, Diane; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: - The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; - The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; - The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and - The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The CGNS Mid-level Library was designed to ease the implementation of CGNS by providing developers with a collection of handy I/O functions. Since knowledge of the ADF core is not required to use this library, it will greatly facilitate the task of interfacing with CGNS. There are currently 48 user callable functions that comprise the Mid-level library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.

  15. Recommendations for a service framework to access astronomical archives

    NASA Technical Reports Server (NTRS)

    Travisano, J. J.; Pollizzi, J.

    1992-01-01

    There are a large number of astronomical archives and catalogs on-line for network access, with many different user interfaces and features. Some systems are moving towards distributed access, supplying users with client software for their home sites that connects to servers at the archive site. Many of the issues involved in defining a standard framework of services that archive/catalog suppliers can use to achieve a basic level of interoperability are described. Such a framework would simplify the development of client and server programs to access the wide variety of astronomical archive systems. The primary services supplied by current systems include catalog browsing, dataset retrieval, name resolution, and data analysis. The following issues (and probably more) need to be considered in establishing a standard set of client/server interfaces and protocols:
    - Archive Access: dataset retrieval, delivery, file formats, data browsing, analysis, etc.;
    - Catalog Access: database management systems, query languages, data formats, synchronous/asynchronous mode of operation, etc.;
    - Interoperability: transaction/message protocols, distributed processing mechanisms (DCE, ONC/SunRPC, etc.), networking protocols, etc.;
    - Security: user registration, authorization/authentication mechanisms, etc.;
    - Service Directory: service registration, lookup, port/task mapping, parameters, etc.;
    - Software: public vs. proprietary software, client/server software, standard interfaces to client/server functions, software distribution, operating system portability, data portability, etc.
    Several archive/catalog groups, notably the Astrophysics Data System (ADS), are already working in many of these areas. In the process of developing StarView, which is the user interface to the Space Telescope Data Archive and Distribution Service (ST-DADS), these issues and the work of others were analyzed. A framework of standard interfaces for accessing services on any archive system, which would benefit archive users and suppliers alike, is proposed.

  16. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Hatziminaoglou, Evanthia; Chéreau, Fabien

    2009-03-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility (SAF) developed in the Virtual Observatory Project Office. VirGO enables astronomers to discover and select data easily from millions of observations in a visual and intuitive way. It allows real-time access and the graphical display of a large number of observations by showing instrumental footprints and image previews, as well as their selection and filtering for subsequent download from the ESO SAF web interface. It also permits the loading of external FITS files or VOTables, as well as the superposition of Digitized Sky Survey images to be used as background. All data interfaces are based on Virtual Observatory (VO) standards that allow access to images and spectra from external data centres, and interaction with the ESO SAF web interface or any other VO applications.

  17. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  18. Understanding and Analyzing Latency of Near Real-time Satellite Data

    NASA Astrophysics Data System (ADS)

    Han, W.; Jochum, M.; Brust, J.

    2016-12-01

    Acquiring and disseminating time-sensitive satellite data in a timely manner is a major concern for researchers and decision makers in weather forecasting, severe weather warning, disaster and emergency response, environmental monitoring, and so on. Understanding and analyzing the latency of near real-time satellite data is useful for exploring the whole data transmission flow, identifying possible issues, and better connecting data providers and users. The STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR) is a central repository that acquires, manipulates, and disseminates various types of near real-time satellite datasets to internal and external users. In this system, important timestamps, including observation beginning/end, processing, uploading, downloading, and ingestion, are retrieved and organized in the database, so the duration of each transmission phase can be determined easily. The open source NoSQL database MongoDB was selected to manage the timestamp information because of its dynamic schema and its aggregation and data-processing features. A user-friendly interface was developed to visualize and characterize the latency interactively. Taking the Himawari-8 HSD (Himawari Standard Data) file as an example, the data transmission phases, including creating the HSD file from satellite observation, uploading the file to HimawariCloud, updating the file link in the webpage, and downloading and ingesting the file into SCDR, are worked out from the above-mentioned timestamps. The latencies can be observed by time period, day of week, or hour of day in chart or table format, and anomalous latencies can be detected and reported through the user interface. Latency analysis gives data providers and users actionable insight into how to improve the transmission of near real-time satellite data and enhance its acquisition and management.
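
    The per-phase latency computation can be sketched as a MongoDB aggregation with pymongo; the database, collection, and field names are hypothetical, and a running MongoDB instance is assumed.

        # Subtract stored timestamps to get latencies, then average by hour of
        # day ($subtract on two dates yields milliseconds).
        from pymongo import MongoClient

        coll = MongoClient()["scdr"]["timestamps"]  # hypothetical names
        pipeline = [
            {"$project": {
                "hour": {"$hour": "$ingest_time"},
                "latency_s": {"$divide": [
                    {"$subtract": ["$ingest_time", "$observation_end"]}, 1000]},
            }},
            {"$group": {"_id": "$hour", "mean_latency_s": {"$avg": "$latency_s"}}},
            {"$sort": {"_id": 1}},
        ]
        for row in coll.aggregate(pipeline):
            print(row)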

  19. Application Program Interface for the Orion Aerodynamics Database

    NASA Technical Reports Server (NTRS)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have included only data tables and a document of the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities of built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to, and verification of, the table lookup routines; this may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The input data files are in standard formatted ASCII, also for improved portability. The API contains its own implementation of multidimensional table reading and lookup routines. The same aerodynamics input file can be used without modification on all implementations. The turnaround time from aerodynamics model release to a working implementation is significantly reduced.

  20. Shuttle-Data-Tape XML Translator

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.

  1. Owgis 2.0: Open Source Java Application that Builds Web GIS Interfaces for Desktop and Mobile Devices

    NASA Astrophysics Data System (ADS)

    Zavala Romero, O.; Chassignet, E.; Zavala-Hidalgo, J.; Pandav, H.; Velissariou, P.; Meyer-Baese, A.

    2016-12-01

    OWGIS is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. The current version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites can be modified using COMPASS, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the websites to display 4D data from NetCDF files. This application is configured by XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS allows for animations; streamlines from vector data; virtual globe display; vertical profiles and vertical transects; different color palettes; the ability to download data; and display of text in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology, and climate fields.

  2. An expert system shell for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1993-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. VEG is described in detail in several references. The first generation version of VEG was extended. In the first year of this contract, an interface to a file of unknown cover type data was constructed. An interface that allowed the results of VEG to be written to a file was also implemented. A learning system that learned class descriptions from a data base of historical cover type data and then used the learned class descriptions to classify an unknown sample was built. This system had an interface that integrated it into the rest of VEG. The VEG subgoal PROPORTION.GROUND.COVER was completed and a number of additional techniques that inferred the proportion ground cover of a sample were implemented. This work was previously described. The work carried out in the second year of the contract is described. The historical cover type database was removed from VEG and stored as a series of flat files that are external to VEG. An interface to the files was provided. The framework and interface for two new VEG subgoals that estimate the atmospheric effect on reflectance data were built. A new interface that allows the scientist to add techniques to VEG without assistance from the developer was designed and implemented. A prototype Help System that allows the user to get more information about each screen in the VEG interface was also added to VEG.

  3. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project [GHRSSTPP]. These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.
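
    The two-level design can be illustrated with a small relational sketch: a collection-level table and a granule-level table joined on a common field. The table and column names below are illustrative, not the MMR's actual MySQL schema.

        # Collection-level DSDs and granule-level FRs linked by a common
        # field; a search joins a granule with its collection description.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE dsd (dataset_id TEXT PRIMARY KEY, description TEXT);
        CREATE TABLE fr  (file_name TEXT PRIMARY KEY,
                          dataset_id TEXT REFERENCES dsd(dataset_id),
                          start_time TEXT);
        """)
        con.execute("INSERT INTO dsd VALUES ('AVHRR_L2P', 'Global SST, L2P product')")
        con.execute("INSERT INTO fr VALUES ('sst_20070101.nc', 'AVHRR_L2P', '2007-01-01')")
        for row in con.execute("""SELECT fr.file_name, dsd.description
                                  FROM fr JOIN dsd USING (dataset_id)"""):
            print(row)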

  4. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
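
    The abstract storage interface amounts to programming against a small key-value contract rather than a concrete store. A minimal Python sketch of that pattern follows; the class and method names are illustrative, not taken from the patent.

        # Metadata servers depend only on a key-value interface; any shared
        # persistent store can be plugged in behind it.
        from abc import ABC, abstractmethod

        class KVMetadataStore(ABC):
            @abstractmethod
            def put(self, key: str, value: bytes) -> None: ...
            @abstractmethod
            def get(self, key: str) -> bytes: ...

        class InMemoryStore(KVMetadataStore):
            """Stand-in for a shared low-latency persistent key-value store."""
            def __init__(self):
                self._d = {}
            def put(self, key, value):
                self._d[key] = value
            def get(self, key):
                return self._d[key]

        class MetadataServer:
            def __init__(self, store: KVMetadataStore):
                self.store = store   # shared by many metadata servers
            def create_inode(self, path: str, attrs: bytes):
                self.store.put("inode:" + path, attrs)   # key-value format

        shared = InMemoryStore()
        MetadataServer(shared).create_inode("/data/run1", b"mode=0644")
        print(shared.get("inode:/data/run1"))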

  5. 78 FR 79434 - Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ...: one that will allow EQR users to file through a web interface on the Commission's Web site, and a... the conference, staff will demonstrate how to make a filing using both the XML and web interface... Calendar of Events on the Commission's Web site, www.ferc.gov . A free webcast of the conference will be...

  6. A mass spectrometry proteomics data management platform.

    PubMed

    Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael

    2012-09-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
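
    The core-plus-extension schema idea can be sketched in a few lines of SQL issued from Python; the table and column names are illustrative, not the platform's actual schema.

        # A pipeline-neutral core table, extended by a table holding scores
        # specific to one search pipeline; cross-experiment queries use the
        # core table and join in pipeline detail as needed.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE search_result (            -- core, shared by all pipelines
            id INTEGER PRIMARY KEY,
            peptide TEXT, charge INTEGER);
        CREATE TABLE sequest_result (           -- extension for one pipeline
            result_id INTEGER REFERENCES search_result(id),
            xcorr REAL);
        """)
        con.execute("INSERT INTO search_result VALUES (1, 'PEPTIDE', 2)")
        con.execute("INSERT INTO sequest_result VALUES (1, 3.41)")
        print(con.execute("""SELECT peptide, xcorr FROM search_result
                             JOIN sequest_result ON id = result_id""").fetchone())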

  7. Profex: a graphical user interface for the Rietveld refinement program BGMN.

    PubMed

    Doebelin, Nicola; Kleeberg, Reinhard

    2015-10-01

    Profex is a graphical user interface for the Rietveld refinement program BGMN . Its interface focuses on preserving BGMN 's powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems.

  8. Profex: a graphical user interface for the Rietveld refinement program BGMN

    PubMed Central

    Doebelin, Nicola; Kleeberg, Reinhard

    2015-01-01

    Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN’s powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems. PMID:26500466

  9. A two-way interface between limited Systems Biology Markup Language and R.

    PubMed

    Radivoyevitch, Tomas

    2004-12-07

    Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
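
    The interface itself is a pair of R functions, but the round trip is easy to mirror in other languages. Below is a hedged Python analogue handling only a tiny subset of the model (a species list with initial concentrations); the SBML Level 2 namespace is the standard one, and everything else is illustrative.

        # Write a minimal SBML Level 2 document and read the species back.
        import xml.etree.ElementTree as ET

        NS = "http://www.sbml.org/sbml/level2"   # SBML Level 2 namespace

        def write_sbml(model, path):
            sbml = ET.Element("sbml", xmlns=NS, level="2", version="1")
            mod = ET.SubElement(sbml, "model", id=model["id"])
            species_list = ET.SubElement(mod, "listOfSpecies")
            for sid, conc in model["species"].items():
                ET.SubElement(species_list, "species", id=sid,
                              initialConcentration=str(conc))
            ET.ElementTree(sbml).write(path)

        def read_sbml(path):
            root = ET.parse(path).getroot()
            return {s.get("id"): float(s.get("initialConcentration"))
                    for s in root.iter(f"{{{NS}}}species")}

        write_sbml({"id": "purine", "species": {"ATP": 3.1, "ADP": 0.4}}, "m.xml")
        print(read_sbml("m.xml"))   # {'ATP': 3.1, 'ADP': 0.4}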

  10. A two-way interface between limited Systems Biology Markup Language and R

    PubMed Central

    Radivoyevitch, Tomas

    2004-01-01

    Background Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. Results A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. Conclusions List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted. PMID:15585059

  11. SIDS-to-ADF File Mapping Manual

    NASA Technical Reports Server (NTRS)

    McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

    2002-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of universal software to read and write the data.

  12. Filtering NetCDF Files by Using the EverVIEW Slice and Dice Tool

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. It was created to provide a common interface between applications and real-time meteorological and other scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need surfaced for additional tools to view and manipulate NetCDF datasets, specifically to filter the files by creating subsets of large NetCDF files. The U.S. Geological Survey (USGS) and the Joint Ecosystem Modeling (JEM) group are working to address these needs with applications like the EverVIEW Slice and Dice Tool, which allows users to filter grid-based NetCDF files, thus targeting those data most important to them. The major functions of this tool are as follows: (1) to create subsets of NetCDF files temporally, spatially, and by data value; (2) to view the NetCDF data in table form; and (3) to export the filtered data to a comma-separated value (CSV) file format. The USGS and JEM will continue to work with scientists and natural resource managers across The Everglades to solve complex restoration problems through technological advances.
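
    The tool's three functions map directly onto operations that can be sketched with the xarray library in Python; the file, variable, and coordinate names below are hypothetical.

        # Subset a gridded NetCDF file in time and space, filter by data
        # value, and export the result as CSV.
        import xarray as xr

        ds = xr.open_dataset("stage.nc")
        subset = ds["water_depth"].sel(
            time=slice("2009-01-01", "2009-12-31"),           # temporal subset
            lat=slice(25.0, 26.5), lon=slice(-81.5, -80.0))   # spatial subset
        wet = subset.where(subset > 0.0)                      # filter by value
        wet.to_dataframe().dropna().to_csv("wet_cells.csv")   # CSV export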

  13. BnmrOffice: A Free Software for β-nmr Data Analysis

    NASA Astrophysics Data System (ADS)

    Saadaoui, Hassan

    A data-analysis framework with a graphical user interface (GUI) is developed to analyze β-nmr spectra in an automated and intuitive way. This program, named BnmrOffice, is written in C++ and employs the Qt libraries and tools for designing the GUI, and CERN's Minuit optimization routines for minimization. The program runs under multiple platforms, and is available for free under the terms of the GNU GPL. The GUI is structured in tabs to search, plot and analyze data, along with other functionalities. The user can tweak the minimization options, and fit multiple data files (or runs) using single or global fitting routines with pre-defined or new models. Currently, BnmrOffice reads TRIUMF's MUD data and ASCII files, and can be extended to other formats.
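
    BnmrOffice performs its fits with Minuit; as an illustrative stand-in, the same kind of single-spectrum fit (a toy exponential spin-relaxation model with synthetic data) can be written with scipy's least-squares fitting.

        import numpy as np
        from scipy.optimize import curve_fit

        def relax(t, amp, rate):
            """Simple exponential polarization decay: amp * exp(-rate * t)."""
            return amp * np.exp(-rate * t)

        t = np.linspace(0.0, 10.0, 200)
        rng = np.random.default_rng(0)
        data = relax(t, 0.10, 0.55) + rng.normal(0.0, 0.005, t.size)

        popt, pcov = curve_fit(relax, t, data, p0=[0.05, 1.0])
        print("amp=%.3f rate=%.3f" % tuple(popt))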

  14. PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC

    USGS Publications Warehouse

    Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.

    1997-01-01

    PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.

  15. MDP: Reliable File Transfer for Space Missions

    NASA Technical Reports Server (NTRS)

    Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid-1990s, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (NACK-Oriented Reliable Multicast) is in the process of becoming an IETF standard.

  16. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak polhemus sensor, two Fastrak polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and doesn't have advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors and a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  17. MDplot: Visualise Molecular Dynamics.

    PubMed

    Margreitter, Christian; Oostenbrink, Chris

    2017-05-10

    The MDplot package provides plotting functions to allow for automated visualisation of molecular dynamics simulation output. It is especially useful in cases where plot generation is rather tedious due to complex file formats or when a large number of plots are generated. The graphs that are supported range from standard ones, such as RMSD/RMSF (root-mean-square deviation and root-mean-square fluctuation, respectively), to less standard ones, such as thermodynamic integration analysis and hydrogen bond monitoring over time. All told, they address many commonly used analyses. In this article, we set out the MDplot package's functions, give examples of the function calls, and show the associated plots. Plotting and data parsing are separated in all cases, i.e. the respective functions can be used independently. Thus, data manipulation and the integration of additional file formats is fairly easy. Currently, the loading functions support GROMOS, GROMACS, and AMBER file formats. Moreover, we also provide a Bash interface that allows simple embedding of MDplot into Bash scripts as the final analysis step. The package can be obtained in the latest major version from CRAN (https://cran.r-project.org/package=MDplot) or in the most recent version from the project's GitHub page at https://github.com/MDplot/MDplot, where feedback is also most welcome. MDplot is published under the GPL-3 license.
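
    The parsing/plotting separation is the key design point and is easy to mirror outside R. A minimal Python analogue for an RMSD-over-time plot follows; the two-column whitespace-separated input layout is assumed for illustration, not a specific GROMOS, GROMACS, or AMBER format.

        # load_rmsd and plot_rmsd are independent, mirroring MDplot's split.
        import matplotlib.pyplot as plt
        import numpy as np

        def load_rmsd(path):                  # parsing, usable on its own
            data = np.loadtxt(path, comments=("#", "@"))
            return data[:, 0], data[:, 1]     # time and RMSD columns

        def plot_rmsd(time, rmsd, out="rmsd.png"):   # plotting, on its own
            plt.plot(time, rmsd)
            plt.xlabel("time (ps)")
            plt.ylabel("RMSD (nm)")
            plt.savefig(out)

        time, rmsd = load_rmsd("rmsd.dat")
        plot_rmsd(time, rmsd)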

  18. A Mass Spectrometry Proteomics Data Management Platform*

    PubMed Central

    Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael

    2012-01-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296

  19. NONROAD2008a Installation and Updates

    EPA Pesticide Factsheets

    NONROAD2008 is the overall set of modeling files including the core model, default data files, graphical user interface (GUI), and reporting utility. NONROAD2008a is essentially the same, but with one correction to the NOx emission factor data file.

  20. Integration experiences and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently, the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
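
    The central mechanism, moving one large file through several parallel streams, can be sketched briefly in Python; plain segment files stand in for the tape targets, and the chunking scheme is illustrative only.

        # Copy one large file as independent segments on a thread pool.
        import os
        from concurrent.futures import ThreadPoolExecutor

        CHUNK = 64 * 1024 * 1024      # bytes per stripe segment

        def copy_segment(src, dst, offset, length):
            with open(src, "rb") as fin, open(dst, "wb") as fout:
                fin.seek(offset)
                fout.write(fin.read(length))

        def parallel_archive(src, n_streams=4):
            size = os.path.getsize(src)
            segs = [(i * CHUNK, min(CHUNK, size - i * CHUNK))
                    for i in range((size + CHUNK - 1) // CHUNK)]
            with ThreadPoolExecutor(max_workers=n_streams) as pool:
                for i, (off, ln) in enumerate(segs):
                    pool.submit(copy_segment, src, f"{src}.seg{i}", off, ln)

        parallel_archive("bigfile.dat")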

  1. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently, the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.

  2. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  3. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases.

    PubMed

    Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning

    2007-10-18

    Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr.
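
    A REST lookup of the kind PICR offers can be sketched as a single HTTP GET; the endpoint path and parameter names below are hypothetical, invented for the sketch, so consult the PICR documentation at http://www.ebi.ac.uk/Tools/picr for the real API.

        # Illustrative REST identifier-mapping call; URL and parameters
        # are invented for the sketch, not PICR's documented interface.
        import requests

        resp = requests.get(
            "https://example.org/picr/rest/getMappings",   # hypothetical
            params={"accession": "P29375",                 # identifier to map
                    "database": "SWISSPROT,ENSEMBL"})      # target databases
        resp.raise_for_status()
        print(resp.text)    # cross-references, e.g. returned as XML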

  4. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    PubMed Central

    Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning

    2007-01-01

    Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr. PMID:17945017

  5. [Computerized processing, using a Macintosh, of ambulatory arterial pressure measurements collected on a Spacelabs monitor].

    PubMed

    Chanudet, X; Bauduceau, B; Leguicher, A; Celton, H; Larroque, P

    1987-06-01

    The use of the Spacelabs blood pressure recorder has given rise to processing programs running on Apple II and IBM PC computers. The authors have written, in MS BASIC (2.1), a program that takes advantage of the graphics capabilities and ease of use of the Macintosh. The software was designed to perform three tasks. (1) Communicating between the Macintosh and the Spacelabs station over a serial interface (RS-232) without requiring a specific interface card. (2) Editing a two-page report: the first page lists the 96 measurements (one every 15 minutes); the second provides patient identification, height, weight, and diagnosis, together with a graphic representation of the measurements, systolic and diastolic blood pressure (BP) distribution histograms for the 24-hour, daytime, and nighttime periods, and the mean and standard deviation of pressure and heart rate (HR) for those periods. A third page optionally gives an hourly chronogram and diagrams of cumulated BP and HR. (3) Creating a file: 550 records can be stored on an 800K floppy disk, holding all data for each patient (except identification). Random access and revision of each parameter are possible, and comparative reports for groups of patients, as well as patient-by-patient analysis with appropriate statistical tests (ANOVA and correlation), can be produced.

  6. phpMs: A PHP-Based Mass Spectrometry Utilities Library.

    PubMed

    Collins, Andrew; Jones, Andrew R

    2018-03-02

    The recent establishment of cloud computing, high-throughput networking, and more versatile web standards and browsers has led to a renewed interest in web-based applications. While traditionally big data has been the domain of optimized desktop and server applications, it is now possible to store vast amounts of data and perform the necessary calculations offsite in cloud storage and computing providers, with the results visualized in a high-quality cross-platform interface via a web browser. There are a number of emerging platforms for cloud-based mass spectrometry data analysis; however, there is limited pre-existing code accessible to web developers, especially for those that are constrained to a shared hosting environment where Java and C applications are often forbidden from use by the hosting provider. To remedy this, we provide an open-source mass spectrometry library for one of the most commonly used web development languages, PHP. Our new library, phpMs, provides objects for storing and manipulating spectra and identification data; utilities for file reading, file writing, calculations, peptide fragmentation, and protein digestion; and a software interface for controlling search engines. We provide a working demonstration of some of the capabilities at http://pgb.liv.ac.uk/phpMs.
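
    Two of the bundled utilities, protein digestion and mass calculation, can be sketched in a few lines (shown here in Python rather than PHP); the residue mass table is truncated for brevity and the values are approximate monoisotopic masses.

        # Tryptic digestion (cleave after K/R, but not before P) and a
        # monoisotopic peptide mass (residue masses plus one water).
        MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "L": 113.08406, "K": 128.09496, "E": 129.04259, "R": 156.10111}
        WATER = 18.010565

        def digest_trypsin(seq):
            peptides, start = [], 0
            for i, aa in enumerate(seq):
                if aa in "KR" and (i + 1 == len(seq) or seq[i + 1] != "P"):
                    peptides.append(seq[start:i + 1])
                    start = i + 1
            if start < len(seq):
                peptides.append(seq[start:])
            return peptides

        def mono_mass(peptide):
            return sum(MONO[aa] for aa in peptide) + WATER

        for pep in digest_trypsin("GASPKLERAGKPLE"):
            print(pep, round(mono_mass(pep), 4))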

  7. CAPRI: Using a Geometric Foundation for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2002-01-01

    CAPRI (Computational Analysis Programming Interface) is a software development tool intended to make computerized design, simulation and analysis faster and more efficient. The computational steps traditionally taken for most engineering analysis (Computational Fluid Dynamics (CFD), structural analysis, etc.) are: Surface Generation, usually by employing a Computer Aided Design (CAD) system; Grid Generation, preparing the volume for the simulation; Flow Solver, producing the results at the specified operational point; and Post-processing Visualization, interactively attempting to understand the results. It should be noted that the structures problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go into IGES files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding data (i.e., the PLOT3D data format and the upcoming CGNS). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Instead of the serial approach to analysis, CAPRI takes a geometry-centric approach. CAPRI is a software building tool-kit that refers to two ideas: (1) a simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. The connection to the geometry is made through an Application Programming Interface (API) and not a file system.

  8. CytometryML: a markup language for analytical cytology

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text-based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list-mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema-based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special-purpose interfaces for analytical cytology data, thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.

  9. Interfacing a high performance disk array file server to a Gigabit LAN

    NASA Technical Reports Server (NTRS)

    Seshan, Srinivasan; Katz, Randy H.

    1993-01-01

    Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important bottleneck was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, the Xbus board, that provides a 40 MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low-latency access to control the various interfaces. To provide a high data rate to clients on the network, we were forced to carefully and efficiently design the network software. A block diagram of the system hardware architecture is given. In the following subsections, we describe pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.

  10. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  11. Virtual Observatory Interfaces to the Chandra Data Archive

    NASA Astrophysics Data System (ADS)

    Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.

    2014-05-01

    The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission-specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
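
    Because the services follow VO standards, a client needs no Chandra-specific code; a Simple Cone Search, for example, is a single HTTP GET whose RA, DEC, and SR parameters (in decimal degrees) are fixed by the SCS standard. The service URL below is a placeholder.

        # Standard SCS query; the response is a VOTable of matching records.
        import requests

        resp = requests.get(
            "https://example.org/vo/conesearch",   # placeholder endpoint
            params={"RA": 83.633, "DEC": 22.014,   # search position (deg)
                    "SR": 0.25})                   # search radius (deg)
        resp.raise_for_status()
        print(resp.text[:200])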

  12. Designing a data portal for synthesis modeling

    NASA Astrophysics Data System (ADS)

    Holmes, M. A.

    2006-12-01

    Processing of field and model data in multi-disciplinary integrated science studies is a vital part of synthesis modeling. Collection and storage techniques for field data vary greatly between the participating scientific disciplines due to the nature of the data being collected, whether it be in situ, remotely sensed, or recorded by automated data logging equipment. Spreadsheets, personal databases, text files and binary files are used in the initial storage and processing of the raw data. In order to be useful to scientists, engineers and modelers, the data need to be stored in a format that is easily identifiable, accessible and transparent to a variety of computing environments. The Model Operations and Synthesis (MOAS) database and associated web portal were created to provide such capabilities. The industry-standard relational database comprises spatial and temporal data tables, shape files and supporting metadata accessible over the network, through a menu-driven web-based portal or spatially through ArcSDE connections from the user's local GIS desktop software. A separate server provides public access to spatial data and model output in the form of attributed shape files through an ArcIMS web-based graphical user interface.

  13. IAC - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 
3) System dynamics - The DISCOS simulation program, which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general-purpose library of over 600 subroutines.

  14. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access of geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  15. FLASH Interface; a GUI for managing runtime parameters in FLASH simulations

    NASA Astrophysics Data System (ADS)

    Walker, Christopher; Tzeferacos, Petros; Weide, Klaus; Lamb, Donald; Flocke, Norbert; Feister, Scott

    2017-10-01

    We present FLASH Interface, a novel graphical user interface (GUI) for managing runtime parameters in simulations performed with the FLASH code. FLASH Interface supports full text search of available parameters; provides descriptions of each parameter's role and function; allows for the filtering of parameters based on categories; performs input validation; and maintains all comments and non-parameter information already present in existing parameter files. The GUI can be used to edit existing parameter files or generate new ones. FLASH Interface is open source and was implemented with the Electron framework, making it available on Mac OSX, Windows, and Linux operating systems. The new interface lowers the entry barrier for new FLASH users and provides an easy-to-use tool for experienced FLASH simulators. U.S. Department of Energy (DOE), NNSA ASC/Alliances Center for Astrophysical Thermonuclear Flashes, U.S. DOE NNSA ASC through the Argonne Institute for Computing in Science, U.S. National Science Foundation.
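
    A minimal sketch of reading the kind of runtime parameter file FLASH Interface manages is shown below, assuming the common name = value layout with #-comments; it keeps every line verbatim so comments and non-parameter information survive a round trip.

        def read_par(path):
            params, lines = {}, []
            with open(path) as fh:
                for raw in fh:
                    lines.append(raw.rstrip("\n"))     # preserved verbatim
                    stripped = raw.split("#", 1)[0].strip()
                    if "=" in stripped:
                        name, value = (s.strip() for s in stripped.split("=", 1))
                        params[name] = value
            return params, lines

        params, lines = read_par("flash.par")
        print(params.get("basenm"))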

  16. Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher

    NASA Technical Reports Server (NTRS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.

    2017-01-01

    Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogenous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software to allow them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use and easy-to-implement software tool that will allow researchers to create and publish database templates and related data. The end product will facilitate both human-readable interfaces (web-based with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com), which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer where individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that will set a compatibility standard for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research. Research: As part of this effort, a number of ongoing pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine what standards are viable for sharing these types of data, ranging from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.tdwg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.

  17. Interfaces between statistical analysis packages and the ESRI geographic information system

    NASA Technical Reports Server (NTRS)

    Masuoka, E.

    1980-01-01

    Interfaces between ESRI's geographic information system (GIS) data files and real valued data files written to facilitate statistical analysis and display of spatially referenced multivariable data are described. An example of data analysis which utilized the GIS and the statistical analysis system is presented to illustrate the utility of combining the analytic capability of a statistical package with the data management and display features of the GIS.

  18. IPeak: An open source tool to combine results from multiple MS/MS search engines.

    PubMed

    Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun

    2015-09-01

    Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and a multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, is implemented in Java, and works on all three major operating system platforms: Windows, Linux/Unix, and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output format, and has also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
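
    The multi-search idea, combining evidence from several engines per spectrum, can be caricatured as follows. This is a toy sketch, not IPeak's algorithm (which rescores with Percolator); the engines, scores, and agreement bonus are all invented for illustration.

      from collections import defaultdict

      # Hypothetical per-engine results: spectrum id -> (peptide, score),
      # with scores assumed already calibrated to a common scale.
      engine_results = {
          "EngineA": {"spec1": ("PEPTIDER", 0.92), "spec2": ("LESSLIKELY", 0.40)},
          "EngineB": {"spec1": ("PEPTIDER", 0.88), "spec2": ("ALTPEPTIDE", 0.71)},
      }

      votes = defaultdict(list)
      for engine, hits in engine_results.items():
          for spectrum, (peptide, score) in hits.items():
              votes[(spectrum, peptide)].append(score)

      # Keep, per spectrum, the peptide with the best combined support:
      # mean score plus a crude bonus when engines agree.
      best = {}
      for (spectrum, peptide), scores in votes.items():
          combined = sum(scores) / len(scores) + 0.1 * (len(scores) - 1)
          if spectrum not in best or combined > best[spectrum][1]:
              best[spectrum] = (peptide, combined)

      print(best)  # spec1 -> ('PEPTIDER', 1.0); spec2 -> ('ALTPEPTIDE', 0.71)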

  19. Software for Managing Personal Files.

    ERIC Educational Resources Information Center

    Lundeen, Gerald

    1989-01-01

    Discusses the special characteristics of personal file management software and compares four microcomputer software packages: Notebook II with Bibliography and Convert, Pro-Cite with Biblio-Links, askSam, and Reference Manager. Each package is evaluated in terms of the user interface, file maintenance, retrieval capabilities, output, and…

  20. Performance of the Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  1. An OpenEarth Framework (OEF) for Integrating and Visualizing Earth Science Data

    NASA Astrophysics Data System (ADS)

    Moreland, J. L.; Nadeau, D. R.; Baru, C.; Crosby, C. J.

    2009-12-01

    The integration of data is essential to make transformative progress in understanding the complex processes operating at the Earth’s surface and within its interior. While our current ability to collect massive amounts of data, develop structural models, and generate high-resolution dynamics models is well developed, our ability to quantitatively integrate these data and models into holistic interpretations of Earth systems is poorly developed. We lack the basic tools to realize a first-order goal in Earth science of developing integrated 4D models of Earth structure and processes using a complete range of available constraints, at a time when the research agenda of major efforts such as EarthScope demands such a capability. Among the challenges to 3D data integration are data that may be in different coordinate spaces, units, value ranges, file formats, and data structures. While several file format standards exist, they are infrequently or incorrectly used. Metadata is often missing, misleading, or relegated to README text files alongside the data. This leaves much of the data-integration work bogged down in simple data management tasks. The OpenEarth Framework (OEF) being developed by GEON addresses these data management difficulties. The software incorporates file format parsers, data interpretation heuristics, user interfaces to prompt for missing information, and visualization techniques to merge data into a common visual model. The OEF’s data access libraries parse formal and de facto standard file formats and map their data into a common data model. The software handles file format quirks, storage details, caching, local and remote file access, and web service protocol handling. Heuristics are used to determine coordinate spaces, units, and other key data features. Where multiple data structure, naming, and file organization conventions exist, those heuristics check for each convention’s use to find a high-confidence interpretation of the data. When no convention or embedded data yields a suitable answer, the user is prompted to fill in the blanks. The OEF’s interaction libraries assist in the construction of user interfaces for data management. These libraries support data import, data prompting, data introspection, the management of the contents of a common data model, and the creation of derived data to support visualization. Finally, visualization libraries provide interactive visualization using an extended version of NASA WorldWind. The OEF viewer supports visualization of terrains, point clouds, 3D volumes, imagery, cutting planes, isosurfaces, and more. Data may be color coded, shaded, and displayed above or below the terrain, always registered into a common coordinate space. The OEF architecture is open, and its cross-platform software libraries are available separately for use with other software projects, while modules from other projects may be integrated into the OEF to extend its features. The OEF is currently being used to visualize data from EarthScope-related research in the Western US.
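
    The parser-plus-heuristics chain described above can be sketched as a fallback loop: try each known format, apply a heuristic default for missing metadata, and fall back to prompting the user. The parsers and the common-model dict below are invented stand-ins, not the OEF API.

      from typing import Callable, List, Optional

      def parse_netcdf(path: str) -> Optional[dict]:
          # Stand-in: a real parser would inspect magic bytes, not the suffix.
          return {"format": "netcdf", "values": []} if path.endswith(".nc") else None

      def parse_geotiff(path: str) -> Optional[dict]:
          return {"format": "geotiff", "values": []} if path.endswith(".tif") else None

      PARSERS: List[Callable[[str], Optional[dict]]] = [parse_netcdf, parse_geotiff]

      def load_into_common_model(path: str) -> dict:
          """Try each known format in turn; when no convention matches,
          fall back to asking the user (here, just an exception)."""
          for parser in PARSERS:
              data = parser(path)
              if data is not None:
                  data.setdefault("crs", "EPSG:4326")  # heuristic default
                  return data
          raise ValueError(f"unknown format, prompt the user for: {path}")

      print(load_into_common_model("dem.tif")["format"])  # geotiff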

  2. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    NASA Technical Reports Server (NTRS)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

    Sally Ride EarthKAM is an educational program funded by NASA that aims to give the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus unfeasible for a large number of images. The standard Google Earth program allows the importing of previously created KML (Keyhole Markup Language) files. These KML file-based overlays could then be manually manipulated as image overlays, saved, and uploaded to the project server, where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and to validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and to update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of similar techniques is being investigated to place ground-based observations in a Google Mars environment, giving the MSL (Mars Science Laboratory) Science Team a means to visualize the rover and its environment.
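
    For orientation, the kind of KML the workflow manipulates is a GroundOverlay that pins an image to a latitude/longitude box. The sketch below generates a minimal one; the coordinates, frame name, and URL are made up, and the real EarthKAM KML is richer.

      def ground_overlay_kml(name: str, image_url: str,
                             north: float, south: float,
                             east: float, west: float) -> str:
          # Build a minimal KML GroundOverlay for an image whose corrected
          # footprint is the given bounding box.
          return f"""<?xml version="1.0" encoding="UTF-8"?>
      <kml xmlns="http://www.opengis.net/kml/2.2">
        <GroundOverlay>
          <name>{name}</name>
          <Icon><href>{image_url}</href></Icon>
          <LatLonBox>
            <north>{north}</north><south>{south}</south>
            <east>{east}</east><west>{west}</west>
          </LatLonBox>
        </GroundOverlay>
      </kml>"""

      print(ground_overlay_kml("frame_0042", "https://example.org/frame_0042.jpg",
                               35.2, 34.1, -117.0, -118.4))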

  3. Integrated Autonomous Network Management (IANM) Multi-Topology Route Manager and Analyzer

    DTIC Science & Technology

    2008-02-01

    [Extraction fragment: component listing and caption of Figure 6-2, "Internal software organization," naming zebra, tmg, mtrcli, xinetd (tftp), mysql, the configuration files mtrrm.conf and mtrrmAggregator.properties, /tftpboot NetFlow PDU files, configuration upload/download over snmp and telnet, OSPFv2, and the user interface.]

  4. Mass storage system reference model, Version 4

    NASA Technical Reports Server (NTRS)

    Coleman, Sam (Editor); Miller, Steve (Editor)

    1993-01-01

    The high-level abstractions that underlie modern storage systems are identified. The information to generate the model was collected from major practitioners who have built and operated large storage facilities, and represents a distillation of the wisdom they have acquired over the years. The model provides a common terminology and set of concepts to allow existing systems to be examined and new systems to be discussed and built. It is intended that the model and the interfaces identified from it will allow and encourage vendors to develop mutually compatible storage components that can be combined to form integrated storage systems and services. The reference model presents an abstract view of the concepts and organization of storage systems. From this abstraction will come the identification of the interfaces and modules that will be used in IEEE storage system standards. The model is not yet suitable as a standard: it does not contain implementation decisions, such as how abstract objects should be broken up into software modules or how software modules should be mapped to hosts; it does not give policy specifications, such as when files should be migrated; it does not describe how the abstract objects should be used or connected; and it does not refer to specific hardware components. In particular, it does not fully specify the interfaces.

  5. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system, and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.

  6. MDplot: Visualise Molecular Dynamics

    PubMed Central

    Margreitter, Christian; Oostenbrink, Chris

    2017-01-01

    The MDplot package provides plotting functions to allow for automated visualisation of molecular dynamics simulation output. It is especially useful in cases where plot generation is rather tedious due to complex file formats or when a large number of plots are generated. The supported graphs range from the standard, such as RMSD/RMSF (root-mean-square deviation and root-mean-square fluctuation, respectively), to the less standard, such as thermodynamic integration analysis and hydrogen bond monitoring over time. All told, they address many commonly used analyses. In this article, we set out the MDplot package's functions, give examples of the function calls, and show the associated plots. Plotting and data parsing are separated in all cases, i.e. the respective functions can be used independently. Thus, data manipulation and the integration of additional file formats are fairly easy. Currently, the loading functions support GROMOS, GROMACS, and AMBER file formats. Moreover, we also provide a Bash interface that allows simple embedding of MDplot into Bash scripts as the final analysis step. Availability: The package can be obtained in the latest major version from CRAN (https://cran.r-project.org/package=MDplot) or in the most recent version from the project's GitHub page at https://github.com/MDplot/MDplot, where feedback is also most welcome. MDplot is published under the GPL-3 license. PMID:28845302

  7. IAC - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 
3) System dynamics - The DISCOS simulation program, which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear materials. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general purpose library of over 600 subroutines for numerical analysis. 5) Graphics - The graphics package IPLOT is included in IAC. IPLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc. Either DI3000 or PLOT-10 graphics software is required for full graphic capability. In addition to these analysis tools, IAC 2.5 contains an IGES interface which allows the user to read arbitrary IGES files into an IAC database and to edit and output new IGES files. IAC is available by license for a period of 10 years to approved U.S. licensees. The licensed program product includes one set of supporting documentation. Additional copies may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The program is structured to allow users to easily delete those program capabilities and "how to" examples they do not want in order to reduce the size of the package. The basic central memory requirement for IAC is approximately 750KB. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. The development of level 2.5 of IAC was completed in 1989.

  8. Optical mass memory system (AMM-13). AMM/DBMS interface control document

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The baseline for external interfaces of a 10 to the 13th power bit, optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; AMM-13, Data Base Management System, NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.

  9. Enabling the democratization of the genomics revolution with a fully integrated web-based bioinformatics platform, Version 1.5 and 1.x.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    EDGE bioinformatics was developed to help biologists process Next Generation Sequencing data (in the form of raw FASTQ files), even if they have little to no bioinformatics expertise. EDGE is a highly integrated and interactive web-based platform that is capable of running many of the standard analyses that biologists require for viral, bacterial/archaeal, and metagenomic samples. EDGE provides the following analytical workflows: quality trimming and host removal, assembly and annotation, comparisons against known references, taxonomy classification of reads and contigs, whole genome SNP-based phylogenetic analysis, and PCR analysis. EDGE provides an intuitive web-based interface for user input, allows users to visualize and interact with selected results (e.g. JBrowse genome browser), and generates a final detailed PDF report. Results in the form of tables, text files, graphic files, and PDFs can be downloaded. A user management system allows tracking of an individual’s EDGE runs, along with the ability to share, post publicly, delete, or archive their results.

  10. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is a powerful application suite for image processing of electron micrographs. Ordinarily, Eos works only through character user interfaces (CUI) under operating systems (OS) such as OS X or Linux, which is not user-friendly: users of Eos need to be expert at image processing of electron micrographs and to have some knowledge of computer science as well. However, not everyone who needs Eos is an expert with a CUI. We therefore extended Eos into an OS-independent web system with graphical user interfaces (GUI) by integrating a web browser. The advantage of using a web browser is not only that it extends Eos with a GUI, but also that it lets Eos work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interface through an interface definition file called "OptionControlFile", written in CSV (Comma-Separated Value) format; each command has an "OptionControlFile", which holds the information needed to generate its interface and describe its usage. The developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also reads the "OptionControlFile" and produces a web user interface automatically, because this mechanism is mature and convenient. The basic actions of the client-side system were implemented properly and support auto-generation of web forms, with functions for execution, image preview, and file upload to a web server. The system can thus execute Eos commands with the options unique to each command and carry out image analysis. Two problems remained, concerning the image file format used for visualization and the workspace for analysis: image file format information is useful for checking whether an input/output file is correct, and a common workspace for analysis is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two types of system. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a web browser, and manipulates local files with a GUI in the web browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), our platform under development that works in heterogeneous distributed environments. Users can put resources such as microscopic images and text files into the server-side environment supported by PIONE, and experts can write PIONE rule definitions, which define workflows of image processing. PIONE runs each image-processing step on a suitable computer, following the defined rules. PIONE supports interactive manipulation, so a user can try a command with various setting values; here, too, we contribute auto-generation of a GUI for a PIONE workflow. As an advanced function, we have developed a module that logs user actions. The logs include information such as the setting values used in image processing and the sequence of commands. Used effectively, these logs offer many advantages: for example, when an expert discovers some know-how of image processing, other users can share the logs containing that know-how, and by analyzing the logs we may obtain recommended workflows for image analysis. To implement a social platform of image processing for electron microscopists, we have developed the system infrastructure as well. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
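
    The OptionControlFile mechanism, one CSV per command describing its options, lends itself to exactly this kind of form auto-generation. A minimal sketch follows; the CSV column layout and the command shown are invented, since the abstract does not give Eos's actual definitions.

      import csv, io

      # Invented layout: command, option name, type, default, help text.
      option_control = ("exampleCmd,inFile,filename,,input image file\n"
                        "exampleCmd,mode,int,0,processing mode\n")

      def html_form(csv_text: str) -> str:
          rows = list(csv.reader(io.StringIO(csv_text)))
          command = rows[0][0]
          inputs = "\n".join(
              f'  <label>{name} ({help_text}): '
              f'<input name="{name}" value="{default}"></label>'
              for _cmd, name, _dtype, default, help_text in rows)
          return f'<form action="/run/{command}" method="post">\n{inputs}\n</form>'

      print(html_form(option_control))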

  11. Plotting and Analyzing Data Trends in Ternary Diagrams Made Easy

    NASA Astrophysics Data System (ADS)

    John, Cédric M.

    2004-04-01

    Ternary plots are used in many fields of science to characterize a system based on three components. Triangular plotting is thus useful to a broad audience in the Earth sciences and beyond. Unfortunately, it is typically the most expensive commercial software packages that offer the option to plot data in ternary diagrams, and they lack features that are paramount to the geosciences, such as the ability to plot data directly into a standardized diagram and the possibility to analyze temporal and stratigraphic trends within this diagram. To address these issues, δPlot was developed with a strong emphasis on ease of use, community orientation, and availability free of charge. This "freeware" supports a fully graphical user interface where data can be imported as text files or by copying and pasting. A plot is automatically generated, and any standard diagram can be selected for plotting in the background using a simple pull-down menu. Standard diagrams are stored in an external database of PDF files that currently holds some 30 diagrams covering different fields of the Earth sciences. Using any drawing software supporting PDF, one can easily produce new standard diagrams to be used with δPlot by simply adding them to the library folder. An independent column of values, commonly stratigraphic depths or ages, can be used to sort the data sets.
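
    The core of any such tool is the standard barycentric mapping from three components to a point in the triangle; a small self-contained sketch (the general formula, not δPlot code):

      import math

      def ternary_xy(a: float, b: float, c: float) -> tuple:
          """Project composition (a, b, c) onto the unit triangle with
          vertices A=(0, 0), B=(1, 0), C=(0.5, sqrt(3)/2)."""
          total = a + b + c
          a, b, c = a / total, b / total, c / total
          return (b + 0.5 * c, math.sqrt(3) / 2 * c)

      # A 20/30/50 composition plots at roughly (0.55, 0.43).
      print(ternary_xy(20, 30, 50))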

  12. Aided generation of search interfaces to astronomical archives

    NASA Astrophysics Data System (ADS)

    Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    Astrophysical data provider organizations that host web-based interfaces for access to data resources have to cope with changes in data management that imply partial rewrites of their web applications. To avoid doing this manually, it was decided to develop a dynamically configurable Java EE web application that can set itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited through a graphical interface. When the configuration steps are done, the tool builds a WAR file to allow easy deployment of the application.
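
    Because the exposed tables are described in TAP_SCHEMA itself, a client can discover them with an ordinary ADQL query against the service's synchronous endpoint. A hedged example follows; the base URL is a placeholder, and the query assumes a standard-conforming TAP service.

      import requests

      # List the tables a TAP service exposes by querying TAP_SCHEMA.tables.
      resp = requests.get("https://archive.example.org/tap/sync", params={
          "REQUEST": "doQuery",
          "LANG": "ADQL",
          "FORMAT": "csv",
          "QUERY": "SELECT table_name, description FROM TAP_SCHEMA.tables",
      })
      print(resp.text)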

  13. PAMLX: a graphical user interface for PAML.

    PubMed

    Xu, Bo; Yang, Ziheng

    2013-12-01

    This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.
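
    The control files that pamlX edits are plain key = value text; a minimal sketch of writing one programmatically is below. The option values are illustrative placeholders, not recommended settings.

      # Write a minimal codeml-style control file (key = value lines).
      options = {
          "seqfile": "alignment.phy",   # input alignment (placeholder name)
          "treefile": "tree.nwk",       # input tree (placeholder name)
          "outfile": "mlc.txt",         # main results file
          "model": 0,                   # branch model
          "NSsites": 0,                 # site model
      }

      with open("codeml.ctl", "w") as ctl:
          for key, value in options.items():
              ctl.write(f"{key} = {value}\n")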

  14. Exploiting the Multi-Service Domain Protecting Interface

    DTIC Science & Technology

    2012-10-17

    Linux OpenVPN and IPSec VLAN services subsystems. Essentially, MSDPI becomes the transport mechanism for these subsystems. [Extraction fragment: the record goes on to list files needed to build a complete LiveCD system, e.g. configuration files such as ifcfg-eth?, ifcfg-ib?, and OpenVPN files, specific files in the etc/sysconfig directory, and RPM spec boilerplate (%prep, %build, %install).]

  15. The successful implementation of a licensed data management interface between a Sunquest(®) laboratory information system and an AB SCIEX™ mass spectrometer.

    PubMed

    French, Deborah; Terrazas, Enrique

    2013-01-01

    Interfacing complex laboratory equipment to laboratory information systems (LIS) has become a more commonly encountered problem in clinical laboratories, especially for instruments that do not have an interface provided by the vendor. Liquid chromatography-tandem mass spectrometry is a great example of such complex equipment, and has become a frequent addition to clinical laboratories. As the testing volume on such instruments can be significant, manual data entry will also be considerable and the potential for concomitant transcription errors arises. Due to this potential issue, our aim was to interface an AB SCIEX™ mass spectrometer to our Sunquest(®) LIS. We licensed software for the data management interface from the University of Pittsburgh, but extended this work as follows: the interface was designed so that it would accept a text file exported from the AB SCIEX™ 5500 QTrap(®) mass spectrometer, pre-process the file (using newly written code) into the correct format, and upload it into Sunquest(®) via file transfer protocol. The licensed software handled the majority of the interface tasks, with the exception of converting the output from the Analyst(®) software to the required Sunquest(®) import format. This required the writing of a "pre-processor" by one of the authors, which was easily integrated with the supplied software. We successfully implemented the data management interface licensed from the University of Pittsburgh. Given the coding that was required to write the pre-processor, and the alterations to the source code that were performed when debugging the software, we would suggest that before a laboratory decides to implement such an interface, it would be necessary to have a competent computer programmer available.
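
    The abstract does not give either file layout, but the general shape of such a pre-processor, reformatting a tab-delimited instrument export and pushing it to the LIS over FTP, might look like this sketch. The column layout, delimiters, hostname, and credentials are all invented.

      import csv, io
      from ftplib import FTP

      def preprocess(export_text: str) -> str:
          # Hypothetical: tab-delimited (sample id, analyte, concentration)
          # converted to a pipe-delimited upload format.
          out = io.StringIO()
          for sample_id, analyte, conc in csv.reader(
                  io.StringIO(export_text), delimiter="\t"):
              out.write(f"{sample_id}|{analyte}|{conc}\n")
          return out.getvalue()

      upload = preprocess("S001\t25OHD3\t32.5\nS002\t25OHD3\t18.9\n")

      with FTP("lis.example.org") as ftp:      # placeholder host
          ftp.login("interface", "secret")     # placeholder credentials
          ftp.storbinary("STOR results.txt", io.BytesIO(upload.encode()))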

  16. Ceramic material life prediction: A program to translate ANSYS results to CARES/LIFE reliability analysis

    NASA Technical Reports Server (NTRS)

    Vonhermann, Pieter; Pintz, Adam

    1994-01-01

    This manual describes the use of the ANSCARES program to prepare a neutral file of FEM stress results taken from ANSYS Release 5.0, in the format needed by CARES/LIFE ceramics reliability program. It is intended for use by experienced users of ANSYS and CARES. Knowledge of compiling and linking FORTRAN programs is also required. Maximum use is made of existing routines (from other CARES interface programs and ANSYS routines) to extract the finite element results and prepare the neutral file for input to the reliability analysis. FORTRAN and machine language routines as described are used to read the ANSYS results file. Sub-element stresses are computed and written to a neutral file using FORTRAN subroutines which are nearly identical to those used in the NASCARES (MSC/NASTRAN to CARES) interface.

  17. XML Translator for Interface Descriptions

    NASA Technical Reports Server (NTRS)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
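
    A toy version of such a translator, emitting C-style #defines from an XML register description, conveys the idea. The XML layout below is invented, not the program's actual schema.

      import xml.etree.ElementTree as ET

      xml_src = ('<interface name="FPGA0">'
                 '<register name="CTRL" offset="0x00">'
                 '<field name="ENABLE" bit="0"/>'
                 '<field name="RESET" bit="1"/>'
                 '</register>'
                 '</interface>')

      root = ET.fromstring(xml_src)
      lines = [f"/* auto-generated from {root.get('name')} description */"]
      for reg in root.findall("register"):
          lines.append(f"#define {reg.get('name')}_OFFSET {reg.get('offset')}")
          for fld in reg.findall("field"):
              lines.append(
                  f"#define {reg.get('name')}_{fld.get('name')}_BIT {fld.get('bit')}")
      print("\n".join(lines))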

  18. NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX

    NASA Technical Reports Server (NTRS)

    Scott, P. J.

    1994-01-01

    The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to ensure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities hold the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE- to transfer the main buffer to duplicate magnetic tapes, 2) REPORT- to determine when the main buffer is full enough to archive, 3) INCREMENT- to back up the partially filled main buffer, and 4) FULLBACKUP- to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.

  19. A New Data Access Mechanism for HDFS

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Sun, Zhenyu; Wei, Zhanchen; Sun, Gongxing

    2017-10-01

    With the era of big data emerging, Hadoop has become the de facto standard big data processing platform. However, it is still difficult to get legacy applications, such as High Energy Physics (HEP) applications, to run efficiently on the Hadoop platform. There are two reasons for this difficulty: first, random access is not supported by the Hadoop File System (HDFS); second, it is hard to make legacy applications adapt to HDFS's streaming data processing mode. In order to address these two issues, a new read and write mechanism for HDFS is proposed. With this mechanism, data access is done on the local file system instead of through HDFS streaming interfaces. To enable users to modify files, three attributes (permissions, owner, and group) are imposed on Block objects: blocks stored on Datanodes carry the same attributes as the file they belong to. Users can modify blocks when the Map task runs locally, and HDFS is responsible for updating the remaining replicas after the block modification finishes. To further improve the performance of the Hadoop system, a complete localized task execution mechanism is implemented for I/O-intensive jobs. Test results show that average CPU utilization is improved by 10% with the new task selection strategy, and data read and write performance is improved by about 10% and 30%, respectively.
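
    The paper's mechanism lives inside HDFS itself; purely to illustrate the local-versus-streaming distinction it exploits, here is a sketch that prefers a direct local block read and falls back to the stock streaming client. The local block path logic is hypothetical; "hdfs dfs -cat" is the standard Hadoop CLI.

      import os, subprocess

      def read_data(hdfs_path: str, local_block_path: str) -> bytes:
          # Prefer a direct local read: random access (seek) works here,
          # unlike the HDFS streaming interfaces.
          if os.path.exists(local_block_path):
              with open(local_block_path, "rb") as f:
                  f.seek(0)
                  return f.read()
          # Otherwise fall back to streaming through the standard CLI.
          return subprocess.run(["hdfs", "dfs", "-cat", hdfs_path],
                                check=True, capture_output=True).stdout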

  20. A bidirectional ACR-NEMA interface between the VA's DHCP Integrated Imaging System and the Siemens-Loral PACS.

    PubMed Central

    Kuzmak, P. M.; Dayhoff, R. E.

    1992-01-01

    There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner. PMID:1482906

  1. A bidirectional ACR-NEMA interface between the VA's DHCP Integrated Imaging System and the Siemens-Loral PACS.

    PubMed

    Kuzmak, P M; Dayhoff, R E

    1992-01-01

    There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner.

  2. A computer program for detailed analysis of the takeoff and approach performance capabilities of transport category aircraft

    NASA Technical Reports Server (NTRS)

    Foss, W. E., Jr.

    1979-01-01

    The takeoff and approach performance of an aircraft is calculated in accordance with the airworthiness standards of the Federal Aviation Regulations. The aircraft and flight constraints are represented in sufficient detail to permit realistic sensitivity studies in terms of either configuration modifications or changes in operational procedures. The program may be used to investigate advanced operational procedures for noise alleviation such as programmed throttle and flap controls. Extensive profile time history data are generated and are placed on an interface file which can be input directly to the NASA aircraft noise prediction program (ANOPP).

  3. Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-05

    Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and to provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table, for either later submission or sending via email to a collaborator who has access to the desired database.

  4. Integrating UniTree with the data migration API

    NASA Technical Reports Server (NTRS)

    Schrodel, David G.

    1994-01-01

    The Data Migration Application Programming Interface (DMAPI) has the potential to allow developers of open systems Hierarchical Storage Management (HSM) products to virtualize native file systems without the requirement to make changes to the underlying operating system. This paper describes advantages of virtualizing native file systems in hierarchical storage management systems, the DMAPI at a high level, what the goals are for the interface, and the integration of the Convex UniTree+HSM with DMAPI along with some of the benefits derived in the resulting product.

  5. Data File Standard for Flow Cytometry, version FCS 3.1.

    PubMed

    Spidlen, Josef; Moore, Wayne; Parks, David; Goldberg, Michael; Bray, Chris; Bierre, Pierre; Gorombey, Peter; Hyun, Bill; Hubbard, Mark; Lange, Simon; Lefebvre, Ray; Leif, Robert; Novo, David; Ostruszka, Leo; Treister, Adam; Wood, James; Murphy, Robert F; Roederer, Mario; Sudar, Damir; Zigon, Robert; Brinkman, Ryan R

    2010-01-01

    The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high throughput, plate based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
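
    For orientation, an FCS file opens with a fixed ASCII header of segment offsets, followed by a TEXT segment of delimited keyword/value pairs. A minimal reader sketch, with no error handling and the byte layout recalled from the published standard:

      def read_fcs_text_segment(path: str) -> dict:
          with open(path, "rb") as f:
              header = f.read(58)
              version = header[0:6].decode("ascii")   # e.g. "FCS3.1"
              text_begin = int(header[10:18])         # space-padded ASCII offsets
              text_end = int(header[18:26])
              f.seek(text_begin)
              text = f.read(text_end - text_begin + 1).decode("utf-8")
          delim = text[0]                             # first byte is the delimiter
          items = text[1:].split(delim)
          keywords = dict(zip(items[::2], items[1::2]))
          keywords["__version__"] = version
          return keywords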

  6. Data File Standard for Flow Cytometry, Version FCS 3.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spidlen, Josef; Moore, Wayne; Parks, David

    2009-11-10

    The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high throughput, plate based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.

  7. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready. External users are provided with RDA server-generated scripts to download the resulting request output. Similarly they can download native dataset collection files or partial files using Wget or cURL based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.

  8. Using the K-25 C TD Common File System: A guide to CFSI (CFS Interface)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-12-01

    A CFS (Common File System) is a large, centralized file management and storage facility based on software developed at Los Alamos National Laboratory. This manual is a guide to use of the CFS available to users of the Cray UNICOS system at Martin Marietta Energy Systems, Inc., in Oak Ridge, Tennessee.

  9. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. 
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. IAC was developed in 1985.

  10. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. 
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. IAC was developed in 1985.

  11. COMPASS: a suite of pre- and post-search proteomics software tools for OMSSA

    PubMed Central

    Wenger, Craig D.; Phanstiel, Douglas H.; Lee, M. Violet; Bailey, Derek J.; Coon, Joshua J.

    2011-01-01

    Here we present the Coon OMSSA Proteomic Analysis Software Suite (COMPASS): a free and open-source software pipeline for high-throughput analysis of proteomics data, designed around the Open Mass Spectrometry Search Algorithm. We detail a synergistic set of tools for protein database generation, spectral reduction, peptide false discovery rate analysis, peptide quantitation via isobaric labeling, protein parsimony and protein false discovery rate analysis, and protein quantitation. We strive for maximum ease of use, utilizing graphical user interfaces and working with data files in the original instrument vendor format. Results are stored in plain text comma-separated values files, which are easy to view and manipulate with a text editor or spreadsheet program. We illustrate the operation and efficacy of COMPASS through the use of two LC–MS/MS datasets. The first is a dataset of a highly annotated mixture of standard proteins and manually validated contaminants that exhibits the identification workflow. The second is a dataset of yeast peptides, labeled with isobaric stable isotope tags and mixed in known ratios, to demonstrate the quantitative workflow. For these two datasets, COMPASS performs equivalently or better than the current de facto standard, the Trans-Proteomic Pipeline. PMID:21298793
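
    Because the results are stored as plain CSV, they can be post-processed with a few lines of scripting; a minimal sketch using the Python standard library follows, where the column names ("Peptide", "E-value") are hypothetical illustrations, not the actual COMPASS output schema.

        import csv

        # Read the plain-text results file; each row becomes an ordinary dict,
        # so results can be filtered or reshaped without special-purpose software.
        with open("results.csv", newline="") as f:
            for row in csv.DictReader(f):
                if float(row["E-value"]) < 0.01:      # hypothetical column names
                    print(row["Peptide"], row["E-value"])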

  12. CAPRI (Computational Analysis PRogramming Interface): A Solid Modeling Based Infra-Structure for Engineering Analysis and Design Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Follen, Gregory J.

    1998-01-01

    CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.

  13. MarFS-Requirements-Design-Configuration-Admin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kettering, Brett Michael; Grider, Gary Alan

    This document will be organized into sections that are defined by the requirements for a file system that presents a near-POSIX (Portable Operating System Interface) interface to the user, but whose data is stored in whatever form is most efficient for the type of data being stored. After defining each requirement, the design for meeting it will be explained. Finally, there will be sections on configuring and administering this file system. More and more, data dominates the computing world. There is a “sea” of data out there in many different formats that needs to be managed and used. “Mar” means “sea” in Spanish. Thus, this product is dubbed MarFS, a file system for a sea of data.

  14. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, Fabien

    2012-04-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a `real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system.
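
    Outside VirGO itself, the two file types it loads can be opened from Python with the astropy library; the file names in this sketch are placeholders.

        from astropy.io import fits
        from astropy.io.votable import parse_single_table

        # Open a FITS image preview and inspect its header.
        with fits.open("preview_image.fits") as hdul:
            print(hdul[0].header.get("OBJECT"), hdul[0].data.shape)

        # Read a VOTable query result and show the first few rows.
        table = parse_single_table("eso_query_result.xml")
        print(table.array[:5])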

  15. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    NASA Astrophysics Data System (ADS)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.
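
    A minimal, self-contained sketch of what such an embedded specification can look like in practice; the class and parameter names below are illustrative stand-ins, not the published OOMMF/Python interface.

        from dataclasses import dataclass

        @dataclass
        class Exchange:
            A: float                 # exchange constant (J/m)

        @dataclass
        class Zeeman:
            H: tuple                 # applied field vector (A/m)

        @dataclass
        class System:
            name: str
            energies: list           # energy terms composed as ordinary objects

        # A standard-problem-4 style specification, expressed in plain Python
        # rather than a bespoke configuration-file syntax.
        system = System(name="stdprob4",
                        energies=[Exchange(A=1.3e-11),
                                  Zeeman(H=(-24.6e3, 4.3e3, 0.0))])

        # A computational backend would translate this object graph into input
        # for the simulator (for OOMMF, a MIF file) and execute it.
        print([type(term).__name__ for term in system.energies])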

  16. AXAF user interfaces for heterogeneous analysis environments

    NASA Technical Reports Server (NTRS)

    Mandel, Eric; Roll, John; Ackerman, Mark S.

    1992-01-01

    The AXAF Science Center (ASC) will develop software to support all facets of data center activities and user research for the AXAF X-ray Observatory, scheduled for launch in 1999. The goal is to provide astronomers with the ability to utilize heterogeneous data analysis packages, that is, to allow astronomers to pick the best packages for doing their scientific analysis. For example, ASC software will be based on IRAF, but non-IRAF programs will be incorporated into the data system where appropriate. Additionally, it is desired to allow AXAF users to mix ASC software with their own local software. The need to support heterogeneous analysis environments is not special to the AXAF project, and therefore finding mechanisms for coordinating heterogeneous programs is an important problem for astronomical software today. The approach to solving this problem has been to develop two interfaces that allow the scientific user to run heterogeneous programs together. The first is an IRAF-compatible parameter interface that provides non-IRAF programs with IRAF's parameter handling capabilities. Included in the interface is an application programming interface to manipulate parameters from within programs, and also a set of host programs to manipulate parameters at the command line or from within scripts. The parameter interface has been implemented to support parameter storage formats other than IRAF parameter files, allowing one, for example, to access parameters that are stored in data bases. An X Windows graphical user interface called 'agcl' has been developed, layered on top of the IRAF-compatible parameter interface, that provides a standard graphical mechanism for interacting with IRAF and non-IRAF programs. Users can edit parameters and run programs for both non-IRAF programs and IRAF tasks. The agcl interface allows one to communicate with any command line environment in a transparent manner and without any changes to the original environment. For example, the authors routinely layer the GUI on top of IRAF, ksh, SMongo, and IDL. The agcl, based on the facilities of a system called Answer Garden, also has sophisticated support for examining documentation and help files, asking questions of experts, and developing a knowledge base of frequently required information. Thus, the GUI becomes a total environment for running programs, accessing information, examining documents, and finding human assistance. Because the agcl can communicate with any command-line environment, most projects can make use of it easily. New applications are continually being found for these interfaces. It is the authors' intention to evolve the GUI and its underlying parameter interface in response to these needs - from users as well as developers - throughout the astronomy community. This presentation describes the capabilities and technology of the above user interface mechanisms and tools. It also discusses the design philosophies guiding the work, as well as hopes for the future.

  17. VSO For Dummies

    NASA Astrophysics Data System (ADS)

    Schwartz, Richard A.; Zarro, D.; Csillaghy, A.; Dennis, B.; Tolbert, A. K.; Etesi, L.

    2009-05-01

    We report on our activities to integrate VSO search and retrieval capabilities into standard data access, display, and analysis tools. In addition to its standard Web-based search form, the VSO provides an Interactive Data Language (IDL) client (vso_search) that is available through the Solar Software (SSW) package. We have incorporated this client into an IDL-widget interface program (show_synop) that allows for more simplified searching and downloading of VSO datasets directly into a user's IDL data analysis environment. In particular, we have provided the capability to read VSO datasets into a general purpose IDL package (plotman) that can display different datatypes (lightcurves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. Currently, the show_synop tool supports access to ground-based and space-based (SOHO, STEREO, and Hinode) observations, and has the capability to include new datasets as they become available. A user encounters two major hurdles when using the VSO: (1) Instrument-specific software (such as level-0 file readers and data-prepping procedures) may not be available in the user's local SSW distribution. (2) Recent calibration files (such as flat-fields) are not automatically distributed with the analysis software. To address these issues, we have developed a dedicated server (prepserver) that incorporates all the latest instrument-specific software libraries and calibration files. The prepserver uses an IDL-Java bridge to read and implement data processing requests from a client and return a processed data file that can be readily displayed with the show_synop/plotman package. The advantage of the prepserver is that the user is only required to install the general branch (gen) of the SSW tree, and is freed from the more onerous task of installing instrument-specific libraries and calibration files. We will demonstrate how the prepserver can be used to read, process, and overlay SOHO/EIT, TRACE, SECCHI/EUVI, and RHESSI images.
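
    For comparison only, the same kind of VSO search and retrieval can be expressed in Python with the SunPy library, a different and later toolchain than the IDL tools described here.

        from sunpy.net import Fido, attrs as a

        # Search the VSO for an hour of EIT data and download the matches.
        result = Fido.search(a.Time("2009-05-01 00:00", "2009-05-01 01:00"),
                             a.Instrument("EIT"))
        print(result)                 # tabular summary of matching VSO records
        files = Fido.fetch(result)    # retrieve the datasets locally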

  18. Advancing Collaboration through Hydrologic Data and Model Sharing

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.

    2015-12-01

    HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
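
    A brief sketch of what calling such a RESTful interface looks like from Python; the endpoint path, query parameter, and JSON field names below are assumptions for illustration, not a documented HydroShare contract.

        import requests

        # Query a resource listing endpoint and print id/title pairs.
        resp = requests.get("https://www.hydroshare.org/hsapi/resource/",
                            params={"subject": "streamflow"}, timeout=30)
        resp.raise_for_status()
        for res in resp.json().get("results", []):    # assumed paginated layout
            print(res.get("resource_id"), res.get("resource_title"))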

  19. An Interface for Specifying Rigid-Body Motions for CFD Applications

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Chan, William; Aftosmis, Michael; Meakin, Robert L.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    An interface for specifying rigid-body motions for CFD applications is presented. This interface provides a means of describing a component hierarchy in a geometric configuration, as well as the motion (prescribed or six-degree-of-freedom) associated with any component. The interface consists of a general set of datatypes, along with rules for their interaction, and is designed to be flexible in order to evolve as future needs dictate. The specification is currently implemented with an XML file format which is portable across platforms and applications. The motion specification is capable of describing general rigid body motions, and eliminates the need to write and compile new code within the application software for each dynamic configuration, allowing client software to automate dynamic simulations. The interface is integrated with a GUI tool which allows rigid body motions to be prescribed and verified interactively, promoting access to non-expert users. Illustrative examples, as well as the raw XML source of the file specifications, are included.
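
    Because the specification is carried in portable XML, client software can inspect it with any standard parser. The element and attribute names in this sketch are invented for illustration; the actual schema is defined by the interface specification itself.

        import xml.etree.ElementTree as ET

        # An invented fragment in the spirit of the specification: a component
        # hierarchy with a prescribed motion attached to one component.
        doc = """<Configuration>
          <Component name="store" parent="wing">
            <Prescribed><Translate axis="0 0 -1" rate="2.5"/></Prescribed>
          </Component>
        </Configuration>"""

        root = ET.fromstring(doc)
        for comp in root.iter("Component"):
            print(comp.get("name"), "moves relative to", comp.get("parent"))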

  20. EQS Goes R: Simulations for SEM Using the Package REQS

    ERIC Educational Resources Information Center

    Mair, Patrick; Wu, Eric; Bentler, Peter M.

    2010-01-01

    The REQS package is an interface between the R environment of statistical computing and the EQS software for structural equation modeling. The package consists of 3 main functions that read EQS script files and import the results into R, call EQS script files from R, and run EQS script files from R and import the results after EQS computations.…

  1. 78 FR 11129 - Office of Engineering and Technology Seeks Comment on Updated OET-69 Software

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... cell level, and also speeds calculations since the study grid only needs to be established one time... who choose to file by paper must file an original and one copy of each filing. If more than one docket... interface (implemented in Java), used to establish the parameters of the study and which draws data from...

  2. 78 FR 34370 - Revisions to Electric Quarterly Report Filing Process; Notice of Availability of Video Showing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM12-3-000] Revisions to Electric Quarterly Report Filing Process; Notice of Availability of Video Showing How To File Electric Quarterly Reports Using the Web Interface Take notice that the Federal Energy Regulatory Commission (Commission) is making available on its Web site ...

  3. 76 FR 48925 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-09

    ...-based market making interface on its options trading platform (``NOM''). Market makers use this... in order to offer an additional market making interface choice to NASDAQ market makers. The proposed bulk-quoting market making interface will be used by market makers to submit and update their...

  4. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
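
    Any WSDL-described service of this kind can also be called from Python with a generic SOAP client such as zeep; the WSDL URL and operation name below are placeholders, not the actual SPDF service definition.

        from zeep import Client

        # zeep reads the WSDL and exposes one callable per declared operation,
        # so no hand-written SOAP message formatting is needed.
        client = Client("https://example.gov/spdf/services.wsdl")
        observatories = client.service.getObservatories()   # placeholder op
        print(observatories)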

  5. Developing a Graphical User Interface for the ALSS Crop Planning Tool

    NASA Technical Reports Server (NTRS)

    Koehlert, Erik

    1997-01-01

    The goal of my project was to create a graphical user interface for a prototype crop scheduler. The crop scheduler was developed by Dr. Jorge Leon and Laura Whitaker for the ALSS (Advanced Life Support System) program. The addition of a system-independent graphical user interface to the crop planning tool will make the application more accessible to a wider range of users and enhance its value as an analysis, design, and planning tool. My presentation will demonstrate the form and functionality of this interface. This graphical user interface allows users to edit system parameters stored in the file system. Data on the interaction of the crew, crops, and waste processing system with the available system resources is organized and labeled. Program output, which is stored in the file system, is also presented to the user in performance-time plots and organized charts. The menu system is designed to guide the user through analysis and decision making tasks, providing some help if necessary. The Java programming language was used to develop this interface in hopes of providing portability and remote operation.

  6. Development of a User Interface for a Regression Analysis Software Tool

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen, which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and graphical user interface are discussed in order to illustrate the user interface's overall design approach.

  7. Multiplexer/Demultiplexer Loading Tool (MDMLT)

    NASA Technical Reports Server (NTRS)

    Brewer, Lenox Allen; Hale, Elizabeth; Martella, Robert; Gyorfi, Ryan

    2012-01-01

    The purpose of the MDMLT is to improve the reliability and speed of loading multiplexers/demultiplexers (MDMs) in the Software Development and Integration Laboratory (SDIL) by automating the configuration management (CM) of the loads in the MDMs, automating the loading procedure, and providing the capability to load multiple or all MDMs concurrently. This loading may be accomplished in parallel or on single (remote) MDMs. The MDMLT is a Web-based tool that is capable of loading the entire International Space Station (ISS) MDM configuration in parallel. It is able to load Flight Equivalent Units (FEUs), enhanced, standard, and prototype MDMs as well as both EEPROM (Electrically Erasable Programmable Read-Only Memory) and SSMMU (Solid State Mass Memory Unit) (MASS Memory). This software has extensive configuration management to track loading history, and provides the performance improvement of loading the entire ISS MDM configuration of 49 MDMs in approximately 30 minutes, as opposed to the 36 hours previously required using the flight method of S-Band uplink. The laptop version recently added to the MDMLT suite allows remote lab loading, with the CM information entered into a common database when it is reconnected to the network. This allows the program to reconfigure the test rigs quickly between shifts, allowing the lab to support a variety of onboard configurations during a single day, based on upcoming or current missions. The MDMLT Computer Software Configuration Item (CSCI) supports a Web-based command and control interface to the user. An interface to the SDIL File Transfer Protocol (FTP) server is supported to import Integrated Flight Loads (IFLs) and Internal Product Release Notes (IPRNs) into the database. An interface to the Monitor and Control System (MCS) is supported to control the power state, and to enable or disable the debug port of the MDMs to be loaded. Two direct interfaces to the MDM are supported: a serial interface (debug port) to receive MDM memory dump data and the calculated checksum, and the Small Computer System Interface (SCSI) to transfer load files to MDMs with hard disks. File transfer from the MDM Loading Tool to EEPROM within the MDM is performed via the MIL-STD-1553 bus, making use of the Real-Time Input/Output Processors (RTIOP) when using the rig-based MDMLT, and via a bus box when using the laptop MDMLT. The bus box is a cost-effective alternative to PC-1553 cards for the laptop. It is noted that this system can be modified and adapted to any avionics laboratory for spacecraft computer loading, ship avionics, or aircraft avionics where multiple configurations and strong configuration management of software/firmware loads are required.

  8. NIMBUS 7 Earth Radiation Budget (ERB) Matrix User's Guide. Volume 2: Tape Specifications

    NASA Technical Reports Server (NTRS)

    Ray, S. N.; Vasanth, K. L.

    1984-01-01

    The ERB MATRIX tape is generated by an IBM 3081 computer program and is a 9-track, 1600-BPI tape. The gross format of the tape, given on page 1, shows an initial standard header file followed by data files. The standard header file contains two standard header records. A trailing documentation file (TDF) is the last file on the tape. Pages 9 through 17 describe, in detail, the standard header file and the TDF. The data files contain data for 37 different ERB parameters. Each file has data based on either a daily, 6-day cyclic, or monthly time interval. There are three types of physical records in the data files; namely, the world grid physical record, the documentation mercator/polar map projection physical record, and the monthly calibration physical record. The manner in which the data for the 37 ERB parameters are stored in the physical records comprising the data files is given in the gross format section.

  9. Portable system to luminaries characterization

    NASA Astrophysics Data System (ADS)

    Tecpoyotl-Torres, M.; Vera-Dimas, J. G.; Koshevaya, S.; Escobedo-Alatorre, J.; Cisneros-Villalobos, L.; Sanchez-Mondragon, J.

    2014-09-01

    For illumination source designers it is important to know the illumination distribution of their products. They can use several viewers of IES files (a standard file format defined by the Illuminating Engineering Society). These files are necessary not only to know the distribution of illumination, but also to plan the construction of buildings by means of specialized software such as Autodesk Revit. In this paper, a complete portable system for luminaries' characterization is given. The core component of the system is an irradiance profile meter, which can generate photometry of luminaries of small sizes, covering indoor illumination requirements and luminaries for general areas. One of the meter's attributes is the color sensor implemented, which allows knowing the color temperature of the luminary under analysis. The graphical user interface (GUI) has several characteristics: it can control the meter, acquire the data obtained by the sensor, and graph them in 2D under Cartesian and polar formats, or in 3D in Cartesian format. The graph can be exported to png, jpg, or bmp formats if necessary. These remarkable characteristics differentiate this GUI. This proposal can be considered a viable option for enterprises engaged in illumination design and manufacturing, due to the relatively low investment level required and the complete illumination characterization provided.

  10. A standards-based clinical information system for HIV/AIDS.

    PubMed

    Stitt, F W

    1995-01-01

    To create a clinical data repository to interface the Veterans Administration (VA) Decentralized Hospital Computer Program (DHCP) and a departmental clinical information system for the management of HIV patients. This system supports record-keeping, decision-making, reporting, and analysis. The database development was designed to overcome two impediments to successful implementations of clinical databases: (i) lack of a standard reference data model; and (ii) lack of a universal standard for medical concept representation. Health Level Seven (HL7) is a standard protocol that specifies the implementation of interfaces between two computer applications (sender and receiver) from different vendors or sources of electronic data exchange in the health care environment. This eliminates or substantially reduces the custom interface programming and program maintenance that would otherwise be required. HL7 defines the data to be exchanged, the timing of the interchange, and the communication of errors to the application. The formats are generic in nature and must be configured to meet the needs of the two applications involved. The standard conceptually operates at the seventh level of the ISO model for Open Systems Interconnection (OSI). The OSI simply defines the data elements that are exchanged as abstract messages, and does not prescribe the exact bit stream of the messages that flow over the network. Lower level network software developed according to the OSI model may be used to encode and decode the actual bit stream. The OSI protocols are not universally implemented and, therefore, a set of encoding rules for defining the exact representation of a message must be specified. The VA has created an HL7 module to assist DHCP applications in exchanging health care information with other applications using the HL7 protocol. The DHCP HL7 module consists of a set of utility routines and files that provide a generic interface to the HL7 protocol for all DHCP applications. The VA's DHCP core modules are in standard use at 169 hospitals, and the role of the VA system in health care delivery has been discussed elsewhere. This development was performed at the Miami VA Medical Center Special Immunology Unit, where a database was created for an HIV patient registry in 1987. Over 2,300 patients have been entered into a database that supports a problem-oriented summary of the patient's clinical record. The interface to the VA DHCP was designed and implemented to capture information from the patient treatment file, pharmacy, laboratory, radiology, and other modules. We obtained a suite of programs for implementing the HL7 encoding rules from Columbia-Presbyterian Medical Center in New York, written in ANSI C. This toolkit isolates our application programs from the details of the HL7 encoding rules, and allows them to deal with abstract messages at the programming level. While HL7 has become a standard for healthcare message exchange, SQL (Structured Query Language) is the standard for database definition, data manipulation, and query. The target database (Stitt F.W. The Problem-Oriented Medical Synopsis: a patient-centered clinical information system. Proc 17 SCAMC. 1993:88-93) provides clinical workstation functionality. Medical concepts are encoded using a preferred terminology derived from over 15 sources that include the Unified Medical Language System and SNOMed International (Stitt F.W. The Problem-Oriented Medical Synopsis: coding, indexing, and classification sub-model.
Proc 18 SCAMC, 1994: in press). The databases were modeled using the Information Engineering CASE tools, and were written using relational database utilities, including embedded SQL in C (ESQL/C). We linked ESQL/C programs to the HL7 toolkit to allow data to be inserted, deleted, or updated, under transaction control. A graphical format will be used to display the entity-rel
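
    For background, HL7 v2.x abstract messages are pipe-delimited segments, so the encoding rules such a toolkit hides can be illustrated in a few lines of Python; the sample message below is fabricated for the sketch.

        # A fabricated two-segment HL7 v2 message: segments separated by
        # carriage returns, fields by "|", components by "^".
        msg = ("MSH|^~\\&|DHCP|MIAMI-VA|SYNOPSIS|IMMUNOLOGY|199501011200||ADT^A01|123|P|2.2\r"
               "PID|1||555-44-3333||EVERYMAN^ADAM")

        for segment in msg.split("\r"):
            fields = segment.split("|")
            print(fields[0], "->", fields[1:4])   # segment ID and first fields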

  11. WEPP FuME Analysis for a North Idaho Site

    Treesearch

    William Elliot; Ina Sue Miller; David Hall

    2007-01-01

    A computer interface has been developed to assist with analyzing soil erosion rates associated with fuel management activities. This interface uses the Water Erosion Prediction Project (WEPP) model to predict sediment yields from hillslopes and road segments to the stream network. The simple interface has a large database of climates, vegetation files and forest soil...

  12. 78 FR 48738 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-09

    ... depend upon the Application Programming Interface (``API'') a Permit Holder is using.\\4\\ Currently, the Exchange offers two APIs: CBOE Market Interface (``CMi'') API and Financial Information eXchange (``FIX... available APIs, and if applicable, which version, it would like to use. \\4\\ An API is a computer interface...

  13. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability

    NASA Astrophysics Data System (ADS)

    Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.

    2015-09-01

    The data access and interoperability module connects the observation proposals, data, virtual machines, and software. According to the unique identifier of the PI (principal investigator), an email address, or an internal ID, data can be collected by the PI's proposals or through the search interfaces, e.g. cone search. Files associated with the search results can easily be transported to cloud storage, including the storage attached to virtual machines, or to several commercial platforms such as Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. Future work will integrate more data and connect archives and other astronomical resources.
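
    A sketch of the kind of IVOA Simple Cone Search request mentioned above; the service URL is a placeholder, while RA, DEC, and SR are the standard cone-search parameters (in degrees).

        import requests

        # Ask the service for everything within 0.5 degrees of (RA, DEC);
        # a conforming service replies with a VOTable document.
        resp = requests.get("http://example.org/scs",
                            params={"RA": 180.0, "DEC": -30.0, "SR": 0.5},
                            timeout=30)
        print(resp.headers.get("Content-Type"), len(resp.content))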

  14. High Performance Data Transfer for Distributed Data Intensive Sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R 'Les' A.; Hanushevsky, Andrew B.

    We report on the development of ZX software providing high performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing lots of small files and very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces and flash memory, we achieved 155Gbps memory-to-memory over a 2x100Gbps link aggregated channel and 70Gbps file-to-file with encryption over a 5000 mile 100Gbps link.

  15. MarFS, a Near-POSIX Interface to Cloud Objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inman, Jeffrey Thornton; Vining, William Flynn; Ransom, Garrett Wilson

    The engineering forces driving development of “cloud” storage have produced resilient, cost-effective storage systems that can scale to 100s of petabytes, with good parallel access and bandwidth. These features would make a good match for the vast storage needs of High-Performance Computing datacenters, but cloud storage gains some of its capability from its use of HTTP-style Representational State Transfer (REST) semantics, whereas most large datacenters have legacy applications that rely on POSIX file-system semantics. MarFS is an open-source project at Los Alamos National Laboratory that allows us to present cloud-style object-storage as a scalable near-POSIX file system. We have also developed a new storage architecture to improve bandwidth and scalability beyond what’s available in commodity object stores, while retaining their resilience and economy. Additionally, we present a scheme for scaling the POSIX interface to allow billions of files in a single directory and trillions of files in total.

  16. MarFS, a Near-POSIX Interface to Cloud Objects

    DOE PAGES

    Inman, Jeffrey Thornton; Vining, William Flynn; Ransom, Garrett Wilson; ...

    2017-01-01

    The engineering forces driving development of “cloud” storage have produced resilient, cost-effective storage systems that can scale to 100s of petabytes, with good parallel access and bandwidth. These features would make a good match for the vast storage needs of High-Performance Computing datacenters, but cloud storage gains some of its capability from its use of HTTP-style Representational State Transfer (REST) semantics, whereas most large datacenters have legacy applications that rely on POSIX file-system semantics. MarFS is an open-source project at Los Alamos National Laboratory that allows us to present cloud-style object-storage as a scalable near-POSIX file system. We have also developed a new storage architecture to improve bandwidth and scalability beyond what’s available in commodity object stores, while retaining their resilience and economy. Additionally, we present a scheme for scaling the POSIX interface to allow billions of files in a single directory and trillions of files in total.

  17. A general UNIX interface for biocomputing and network information retrieval software.

    PubMed

    Kiong, B K; Tan, T W

    1993-10-01

    We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).

  18. An Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun

    Fish and marine mammals suffer a range of potential effects from intense sound sources generated by anthropogenic underwater processes such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording devices (USRs) were built to monitor the acoustic sound pressure waves generated by these anthropogenic underwater activities, so relevant processing software becomes indispensable for analyzing the audio files recorded by these USRs. However, existing software packages did not meet performance and flexibility requirements. In this paper, we provide a detailed description of a new software package, named Aquatic Acoustic Metrics Interface (AAMI), which is a Graphical User Interface (GUI) designed for underwater sound monitoring and analysis. In addition to general functions, such as loading and editing audio files recorded by USRs, the software can compute a series of acoustic metrics in physical units, monitor the sound's influence on fish hearing according to audiograms from different species of fish and marine mammals, and batch process the sound files. The detailed applications of the software AAMI will be discussed along with several test case scenarios to illustrate its functionality.
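
    As an illustration of one acoustic metric of the kind such software reports, sound pressure level in dB re 1 µPa can be computed from a calibrated pressure time series; this is a generic textbook computation, not code from the AAMI package.

        import numpy as np

        # Stand-in for a calibrated hydrophone pressure trace, in pascals.
        pressure_pa = np.random.default_rng(0).normal(0.0, 2.0, 48000)

        p_ref = 1e-6                                  # reference pressure: 1 uPa
        rms = np.sqrt(np.mean(pressure_pa ** 2))      # root-mean-square pressure
        spl_db = 20.0 * np.log10(rms / p_ref)         # sound pressure level
        print(f"SPL = {spl_db:.1f} dB re 1 uPa")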

  19. XAFS Data Interchange: A single spectrum XAFS data file format.

    PubMed

    Ravel, B; Newville, M

    We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.

  20. XAFS Data Interchange: A single spectrum XAFS data file format

    NASA Astrophysics Data System (ADS)

    Ravel, B.; Newville, M.

    2016-05-01

    We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.
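
    A minimal Python sketch of reading a header-plus-columns file laid out as described above; the real XDI grammar has additional rules, so treat this as an illustration of the layout rather than a conforming parser, and the file name is a placeholder.

        metadata, rows = {}, []
        with open("spectrum.xdi") as f:
            for line in f:
                if line.startswith("#"):
                    if ":" in line:                   # "# Name: value" header line
                        name, _, value = line.lstrip("# ").partition(":")
                        metadata[name.strip()] = value.strip()
                elif line.strip():                    # whitespace-separated numbers
                    rows.append([float(x) for x in line.split()])
        print(len(metadata), "metadata items;", len(rows), "data rows")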

  1. The mzqLibrary – An open source Java library supporting the HUPO‐PSI quantitative proteomics standard

    PubMed Central

    Zhang, Huaizhong; Fan, Jun; Perkins, Simon; Pisconti, Addolorata; Simpson, Deborah M.; Bessant, Conrad; Hubbard, Simon; Jones, Andrew R.

    2015-01-01

    The mzQuantML standard has been developed by the Proteomics Standards Initiative for capturing, archiving and exchanging quantitative proteomic data, derived from mass spectrometry. It is a rich XML‐based format, capable of representing data about two‐dimensional features from LC‐MS data, and peptides, proteins or groups of proteins that have been quantified from multiple samples. In this article we report the development of an open source Java‐based library of routines for mzQuantML, called the mzqLibrary, and associated software for visualising data called the mzqViewer. The mzqLibrary contains routines for mapping (peptide) identifications on quantified features, inference of protein (group)‐level quantification values from peptide‐level values, normalisation and basic statistics for differential expression. These routines can be accessed via the command line, via a Java programming interface, or via a basic graphical user interface. The mzqLibrary also contains several file format converters, including import converters (to mzQuantML) from OpenMS, Progenesis LC‐MS and MaxQuant, and exporters (from mzQuantML) to other standards or useful formats (mzTab, HTML, csv). The mzqViewer contains in‐built routines for viewing the tables of data (about features, peptides or proteins), and connects to the R statistical library for more advanced plotting options. The mzqLibrary and mzqViewer packages are available from https://code.google.com/p/mzq‐lib/. PMID:26037908

  2. The mzqLibrary--An open source Java library supporting the HUPO-PSI quantitative proteomics standard.

    PubMed

    Qi, Da; Zhang, Huaizhong; Fan, Jun; Perkins, Simon; Pisconti, Addolorata; Simpson, Deborah M; Bessant, Conrad; Hubbard, Simon; Jones, Andrew R

    2015-09-01

    The mzQuantML standard has been developed by the Proteomics Standards Initiative for capturing, archiving and exchanging quantitative proteomic data, derived from mass spectrometry. It is a rich XML-based format, capable of representing data about two-dimensional features from LC-MS data, and peptides, proteins or groups of proteins that have been quantified from multiple samples. In this article we report the development of an open source Java-based library of routines for mzQuantML, called the mzqLibrary, and associated software for visualising data called the mzqViewer. The mzqLibrary contains routines for mapping (peptide) identifications on quantified features, inference of protein (group)-level quantification values from peptide-level values, normalisation and basic statistics for differential expression. These routines can be accessed via the command line, via a Java programming interface, or via a basic graphical user interface. The mzqLibrary also contains several file format converters, including import converters (to mzQuantML) from OpenMS, Progenesis LC-MS and MaxQuant, and exporters (from mzQuantML) to other standards or useful formats (mzTab, HTML, csv). The mzqViewer contains in-built routines for viewing the tables of data (about features, peptides or proteins), and connects to the R statistical library for more advanced plotting options. The mzqLibrary and mzqViewer packages are available from https://code.google.com/p/mzq-lib/. © 2015 The Authors. PROTEOMICS Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. CruiseViewer: SIOExplorer Graphical Interface to Metadata and Archives.

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J. J.; Miller, S. P.; Chase, A.; Clark, D.

    2002-12-01

    We are introducing "CruiseViewer" as a prototype graphical interface for the SIOExplorer digital library project, part of the overall NSF National Science Digital Library (NSDL) effort. When complete, CruiseViewer will provide access to nearly 800 cruises, as well as 100 years of documents and images from the archives of the Scripps Institution of Oceanography (SIO). The project emphasizes data object accessibility, a rich metadata format, efficient uploading methods and interoperability with other digital libraries. The primary function of CruiseViewer is to provide a human interface to the metadata database and to storage systems filled with archival data. The system schema is based on the concept of an "arbitrary digital object" (ADO). Arbitrary in that if the object can be stored on a computer system then SIOExplore can manage it. Common examples are a multibeam swath bathymetry file, a .pdf cruise report, or a tar file containing all the processing scripts used on a cruise. We require a metadata file for every ADO in an ascii "metadata interchange format" (MIF), which has proven to be highly useful for operability and extensibility. Bulk ADO storage is managed using the Storage Resource Broker, SRB, data handling middleware developed at the San Diego Supercomputer Center that centralizes management and access to distributed storage devices. MIF metadata are harvested from several sources and housed in a relational (Oracle) database. For CruiseViewer, cgi scripts resident on an Apache server are the primary communication and service request handling tools. Along with the CruiseViewer java application, users can query, access and download objects via a separate method that operates through standard web browsers, http://sioexplorer.ucsd.edu. Both provide the functionability to query and view object metadata, and select and download ADOs. For the CruiseViewer application Java 2D is used to add a geo-referencing feature that allows users to select basemap images and have vector shapes representing query results mapped over the basemap in the image panel. The two methods together address a wide range of user access needs and will allow for widespread use of SIOExplorer.

  4. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  5. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  6. An integrated genetic data environment (GDE)-based LINUX interface for analysis of HIV-1 and other microbial sequences.

    PubMed

    De Oliveira, T; Miller, R; Tarin, M; Cassol, S

    2003-01-01

    Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based LINUX interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains Genbank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).

  7. An expert system shell for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1992-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The report describes the extensions that have been made to the first generation version of VEG. An interface to a file of unknown cover type data has been constructed. An interface that allows the results of VEG to be written to a file has been implemented. A learning system that learns class descriptions from a data base of historical cover type data and then uses the learned class descriptions to classify an unknown sample has been built. This system has an interface that integrates it into the rest of VEG. The VEG subgoal PROPORTION.GROUND.COVER has been completed and a number of additional techniques that infer the proportion ground cover of a sample have been implemented.

  8. Rosetta: Ensuring the Preservation and Usability of ASCII-based Data into the Future

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Arms, S. C.

    2015-12-01

    Field data obtained from dataloggers often take the form of comma separated value (CSV) ASCII text files. While ASCII based data formats have positive aspects, such as the ease of accessing the data from disk and the wide variety of tools available for data analysis, there are some drawbacks, especially when viewing the situation through the lens of data interoperability and stewardship. The Unidata data translation tool, Rosetta, is a web-based service that provides an easy, wizard-based interface for data collectors to transform their datalogger generated ASCII output into Climate and Forecast (CF) compliant netCDF files following the CF-1.6 discrete sampling geometries. These files are complete with metadata describing what data are contained in the file, the instruments used to collect the data, and other critical information that otherwise may be lost in one of many README files. The choice of the machine readable netCDF data format and data model, coupled with the CF conventions, ensures long-term preservation and interoperability, and that future users will have enough information to responsibly use the data. However, with the understanding that the observational community appreciates the ease of use of ASCII files, methods for transforming the netCDF back into a CSV or spreadsheet format are also built-in. One benefit of translating ASCII data into a machine readable format that follows open community-driven standards is that they are instantly able to take advantage of data services provided by the many open-source data server tools, such as the THREDDS Data Server (TDS). While Rosetta is currently a stand-alone service, this talk will also highlight efforts to couple Rosetta with the TDS, thus allowing self-publishing of thoroughly documented datasets by the data producers themselves.
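
    A hand-written sketch of the transformation Rosetta automates, using the netCDF4 Python library to wrap a short time series in a self-describing file with CF-style attributes; the variable names and values are illustrative only.

        from netCDF4 import Dataset

        with Dataset("station.nc", "w") as nc:
            nc.createDimension("time", None)           # unlimited record dimension
            t = nc.createVariable("time", "f8", ("time",))
            t.units = "seconds since 2015-01-01 00:00:00"
            temp = nc.createVariable("air_temperature", "f4", ("time",))
            temp.units = "degC"
            temp.standard_name = "air_temperature"     # a CF standard name
            t[:] = [0.0, 60.0, 120.0]                  # values from the CSV
            temp[:] = [21.3, 21.4, 21.6]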

  9. 40 CFR 60.288a - Reporting.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... test to generate a submission package file, which documents performance test data. You must then submit the file generated by the ERT through the EPA's Compliance and Emissions Data Reporting Interface (CEDRI), which can be accessed by logging in to the EPA's Central Data Exchange (CDX) (https://cdx.epa...

  10. A generic interface between COSMIC/NASTRAN and PATRAN (R)

    NASA Technical Reports Server (NTRS)

    Roschke, Paul N.; Premthamkorn, Prakit; Maxwell, James C.

    1990-01-01

    Despite its powerful analytical capabilities, COSMIC/NASTRAN lacks adequate post-processing adroitness. PATRAN, on the other hand, is widely accepted for its graphical capabilities. A nonproprietary, public domain code mnemonically titled CPI (for COSMIC/NASTRAN-PATRAN Interface) is designed to manipulate a large number of files rapidly and efficiently between the two parent codes. In addition to PATRAN's results file preparation, CPI also prepares PATRAN's P/PLOT data files for xy plotting. The user is prompted for necessary information during an interactive session. The current implementation supports NASTRAN's displacement approach, including the following rigid formats: (1) static analysis, (2) normal modal analysis, (3) direct transient response, and (4) modal transient response. A wide variety of data blocks are also supported. Error trapping is given special consideration. A sample session with CPI illustrates its simplicity and ease of use.

  11. Public-domain-software solution to data-access problems for numerical modelers

    USGS Publications Warehouse

    Jenter, Harry; Signell, Richard

    1992-01-01

    Unidata's network Common Data Form, netCDF, provides users with an efficient set of software for scientific-data storage, retrieval, and manipulation. The netCDF file format is machine-independent, direct-access, self-describing, and in the public domain, thereby alleviating many problems associated with accessing output from large hydrodynamic models. NetCDF has programming interfaces in both the Fortran and C computer languages, with an interface to C++ planned for release in the future. NetCDF also has an abstract data type that relieves users from understanding details of the binary file structure; data are written and retrieved by an intuitive, user-supplied name rather than by file position. Users are aided further by Unidata's inclusion of the Common Data Language, CDL, a printable text-equivalent of the contents of a netCDF file. Unidata provides numerous operators and utilities for processing netCDF files. In addition, a number of public-domain and proprietary netCDF utilities from other sources are available at this time or will be available later this year. The U.S. Geological Survey has produced and is producing a number of public-domain netCDF utilities.
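
    The name-based access model described above can be shown in a short sketch using the modern Python binding (the record predates it; the Fortran and C calls are analogous). The file and variable names are placeholders.

        from netCDF4 import Dataset

        with Dataset("model_output.nc") as nc:
            print(list(nc.variables))               # self-describing: enumerate contents
            temp = nc.variables["air_temperature"]  # retrieve by name, not file position
            print(temp.units, temp[:5])             # attributes travel with the data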

  12. ParamAP: Standardized Parameterization of Sinoatrial Node Myocyte Action Potentials.

    PubMed

    Rickert, Christian; Proenza, Catherine

    2017-08-22

    Sinoatrial node myocytes act as cardiac pacemaker cells by generating spontaneous action potentials (APs). Much information is encoded in sinoatrial AP waveforms, but both the analysis and the comparison of AP parameters between studies are hindered by the lack of standardized parameter definitions and the absence of automated analysis tools. Here we introduce ParamAP, a standalone cross-platform computational tool that uses a template-free detection algorithm to automatically identify and parameterize APs from text input files. ParamAP employs a graphical user interface with automatic and user-customizable input modes, and it outputs data files in text and PDF formats. ParamAP returns a total of 16 AP waveform parameters including time intervals such as the AP duration, membrane potentials such as the maximum diastolic potential, and rates of change of the membrane potential such as the diastolic depolarization rate. ParamAP provides a robust AP detection algorithm in combination with a standardized AP parameter analysis over a wide range of AP waveforms and firing rates, owing in part to the use of an iterative algorithm for the determination of the threshold potential and the diastolic depolarization rate that is independent of the maximum upstroke velocity, a parameter that can vary significantly among sinoatrial APs. Because ParamAP is implemented in Python 3, it is also highly customizable and extensible. In conclusion, ParamAP is a powerful computational tool that facilitates quantitative analysis and enables comparison of sinoatrial APs by standardizing parameter definitions and providing an automated work flow.

  13. 11 CFR 100.19 - File, filed or filing (2 U.S.C. 434(a)).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Time on the filing date, except that pre-election reports must have a postmark dated no later than 11:59 p.m. Eastern Standard/Daylight Time on the fifteenth day before the date of the election. (2... Standard/Daylight Time on the filing date. (d) 48-hour and 24-hour reports of independent expenditures—(1...

  14. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
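
    A toy sketch of the two namespace ideas described above, written in Python purely for illustration (none of the class or method names come from the paper): a per-user logical name space whose directories can have several underlying file systems mounted at once, and which can itself mount other logical name spaces.

        class LogicalNameSpace:
            """Per-user logical name space (illustrative, not Jade's actual design)."""
            def __init__(self):
                self.mounts = {}                      # logical prefix -> list of backends

            def mount(self, prefix, backend):
                # Several backends (NFS, AFS, FTP, or another LogicalNameSpace)
                # may be mounted under the same logical directory.
                self.mounts.setdefault(prefix, []).append(backend)

            def resolve(self, path):
                # Longest-prefix match, then try each mounted backend in order.
                for prefix in sorted(self.mounts, key=len, reverse=True):
                    if path.startswith(prefix):
                        rest = path[len(prefix):].lstrip("/")
                        for backend in self.mounts[prefix]:
                            hit = (backend.resolve("/" + rest)
                                   if isinstance(backend, LogicalNameSpace)
                                   else backend.lookup(rest))  # hypothetical protocol call
                            if hit is not None:
                                return hit
                return None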

  15. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  16. 77 FR 15445 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-15

    ... Effectiveness of Proposed Rule Change To Amend Online Form NMA, the Standardized Membership Application Form... online Form NMA, the standardized membership application form applicants must file pursuant to NASD Rule...), each applicant for FINRA membership must complete and electronically file the standardized online Form...

  17. Cinfony – combining Open Source cheminformatics toolkits behind a common interface

    PubMed Central

    O'Boyle, Noel M; Hutchison, Geoffrey R

    2008-01-01

    Background Open Source cheminformatics toolkits such as OpenBabel, the CDK and the RDKit share the same core functionality but support different sets of file formats and forcefields, and calculate different fingerprints and descriptors. Despite their complementary features, using these toolkits in the same program is difficult as they are implemented in different languages (C++ versus Java), have different underlying chemical models and have different application programming interfaces (APIs). Results We describe Cinfony, a Python module that presents a common interface to all three of these toolkits, allowing the user to easily combine methods and results from any of the toolkits. In general, the run time of the Cinfony modules is almost as fast as accessing the underlying toolkits directly from C++ or Java, but Cinfony makes it much easier to carry out common tasks in cheminformatics such as reading file formats and calculating descriptors. Conclusion By providing a simplified interface and improving interoperability, Cinfony makes it easy to combine complementary features of OpenBabel, the CDK and the RDKit. PMID:19055766
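
    A short illustrative session shows the kind of cross-toolkit call Cinfony enables (module and method names as published for Cinfony; treat the exact calls as illustrative):

        from cinfony import pybel, rdk

        mol = pybel.readstring("smi", "CCC")   # parse SMILES with OpenBabel
        print(mol.molwt)                       # OpenBabel-derived molecular weight
        rdmol = rdk.Molecule(mol)              # hand the same molecule to the RDKit
        print(rdmol.calcfp().bits)             # RDKit fingerprint of the same structure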

  18. Minimum Hamiltonian ascent trajectory evaluation (MASTRE) program (update to automatic flight trajectory design, performance prediction, and vehicle sizing for support of shuttle and shuttle derived vehicles) users manual

    NASA Technical Reports Server (NTRS)

    Lyons, J. T.; Borchers, William R.

    1993-01-01

    Documentation for the User Interface Program for the Minimum Hamiltonian Ascent Trajectory Evaluation (MASTRE) is provided. The User Interface Program is a separate software package designed to ease the user input requirements when using the MASTRE Trajectory Program. This document supplements documentation on the MASTRE Program that consists of the MASTRE Engineering Manual and the MASTRE Programmers Guide. The User Interface Program provides a series of menus and tables using the VAX Screen Management Guideline (SMG) software. These menus and tables allow the user to modify the MASTRE Program input without the need for learning the various program dependent mnemonics. In addition, the User Interface Program allows the user to modify and/or review additional input Namelist and data files, to build and review command files, to formulate and calculate mass properties related data, and to have a plotting capability.

  19. Vascular surgical data registries for small computers.

    PubMed

    Kaufman, J L; Rosenberg, N

    1984-08-01

    Recent designs for computer-based vascular surgical registries and clinical databases have employed large centralized systems with formal programming and mass storage. Small computers, of the types created for office use or for word processing, now contain sufficient speed and memory storage capacity to allow construction of decentralized office-based registries. Using a standardized dictionary of terms and a method of data organization adapted to word processing, we have created a new vascular surgery data registry, "VASREG." Data files are organized without programming, and a limited number of powerful logical statements in English are used for sorting. The capacity is 25,000 records with current inexpensive memory technology. VASREG is adaptable to computers made by a variety of manufacturers, and interface programs are available for conversion of the word-processor-formatted registry data into forms suitable for analysis by programs written in a standard programming language. This is a low-cost clinical data registry available to any physician. With a standardized dictionary, preparation of regional and national statistical summaries may be facilitated.

  20. WADeG Cell Phone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-09-01

    The on-phone software captures images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is "the" user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process terminates when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. The received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value and the image is saved in a designated directory in a custom BMP file that includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertsmartdevice code is created as a process. Covertsmartdevice gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
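
    The transfer step is a simple collect-connect-send loop. A hedged sketch of that logic follows (the original is Windows Mobile C++; the host, port, directory, and length-prefix framing here are invented for illustration):

        import os
        import socket
        import struct

        OUTBOX = "/captures"                             # designated image directory
        with socket.create_connection(("server.example.org", 5000)) as sock:
            for name in sorted(os.listdir(OUTBOX)):
                with open(os.path.join(OUTBOX, name), "rb") as f:
                    data = f.read()
                sock.sendall(struct.pack("!I", len(data)))  # length prefix, then payload
                sock.sendall(data)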

  1. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, F.

    2008-08-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO lets astronomers easily discover and select data from millions of observations in a new, visual, and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a `real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system. The main website for VirGO is at http://archive.eso.org/cms/virgo.

  2. GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species

    PubMed Central

    Kumar, Sujai; Stevens, Lewis; Blaxter, Mark

    2017-01-01

    Abstract As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with results of standard analyses and functional annotations and support export to flat files, including EMBL format for submission of assemblies and annotations to International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774

  3. CAGI: Computer Aided Grid Interface. A work in progress

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.; Yu, Tzu-Yi; Vaughn, David

    1992-01-01

    Progress realized in the development of a Computer Aided Grid Interface (CAGI) software system in integrating CAD/CAM geometric system output and/or Initial Graphics Exchange Specification (IGES) files, geometry manipulations associated with grid generation, and robust grid generation methodologies is presented. CAGI is being developed in a modular fashion and will offer a fast, efficient, and economical response to geometry/grid preparation, allowing basic geometry to be upgraded step by step, interactively and under permanent visual control, while minimizing the differences between the actual hardware surface descriptions and the corresponding numerical analog. The computer code GENIE is used as a basis. The Non-Uniform Rational B-Splines (NURBS) representation of sculptured surfaces is utilized for surface grid redistribution. The computer aided analysis system, PATRAN, is adapted as a CAD/CAM system. The progress realized in NURBS surface grid generation, the development of the IGES transformer, and geometry adaptation using PATRAN will be presented along with their applicability to grid generation associated with rocket propulsion applications.

  4. UUI: Reusable Spatial Data Services in Unified User Interface at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Petrenko, Maksym; Hegde, Mahabaleshwa; Bryant, Keith; Pham, Long B.

    2016-01-01

    Unified User Interface (UUI) is a next-generation operational data access tool that has been developed at the Goddard Earth Sciences Data and Information Services Center (GES DISC) to provide a simple, unified, and intuitive one-stop-shop experience for the key data services available at GES DISC, including subsetting (Simple Subset Wizard, SSW), granule file search (Mirador), plotting (Giovanni), and other legacy spatial data services. UUI has been built on a flexible infrastructure of reusable, self-contained web-service building blocks that can easily be plugged into spatial applications, including third-party clients or services, to enable new functionality as new datasets and services become available. In this presentation, we will discuss our experience in designing UUI services based on open industry standards. We will also explain how the resulting framework can be used for rapid development, deployment, and integration of spatial data services, facilitating efficient access and dissemination of spatial data sets.

  5. BrainIACS: a system for web-based medical image processing

    NASA Astrophysics Data System (ADS)

    Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.

    2009-02-01

    We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is implemented entirely with freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as addition and deletion of elements, are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and finally JavaScript after compilation of the front-end code), it is platform independent, with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.
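
    The XML-to-widget step might look like the following sketch, which parses a hypothetical interface description into widget specifications. The element and attribute names are invented for illustration; the abstract does not publish the actual BrainIACS schema.

        import xml.etree.ElementTree as ET

        SPEC = """
        <interface algorithm="skull-strip">
          <checkbox name="smooth" label="Smooth output" default="true"/>
          <input name="iterations" label="Iterations" type="int" min="1" max="50"/>
          <dropdown name="mode" label="Mode">
            <option>fast</option><option>accurate</option>
          </dropdown>
        </interface>
        """

        for element in ET.fromstring(SPEC):
            # A real system would emit the matching GWT widget here.
            print("create", element.tag, dict(element.attrib))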

  6. 77 FR 47831 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-10

    ... to Reliability Standard CIP-002-4--Critical Cyber Asset Identification. Filed Date: 8/1/12. Accession... Corporation for Approval of an Interpretation to Reliability Standard CIP-004-4--Personnel and Training. Filed...

  7. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
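
    The core idea of routing each sub-file's metadata record to a partition can be sketched in a few lines; the key format and hashing rule below are illustrative only, not the patented design or the MDHIM API.

        import hashlib

        class PartitionedMetadataStore:
            """Illustrative hash-partitioned store for sub-file metadata."""
            def __init__(self, n_partitions):
                self.partitions = [dict() for _ in range(n_partitions)]

            def _route(self, key):
                # Hash the key so every node routes a given key to the same partition.
                return int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)

            def put(self, key, value):
                self.partitions[self._route(key)][key] = value

            def get(self, key):
                return self.partitions[self._route(key)].get(key)

        store = PartitionedMetadataStore(4)
        store.put("shared.dat/node03/offset=1048576", {"length": 65536, "object_server": "oss-7"})
        print(store.get("shared.dat/node03/offset=1048576"))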

  8. A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.

    PubMed

    Smelter, Andrey; Moseley, Hunter N B

    2018-01-01

    The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study is deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package, we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework and a continuous integration service to run the unit tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
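
    A minimal session with the library interface, following the package's published documentation (the accession file name is a placeholder):

        import mwtab

        # read_files yields MWTabFile objects from paths, directories, or archives.
        mwfile = next(mwtab.read_files("ST000017.txt"))
        print(mwfile.study_id, mwfile.analysis_id)

        # Write the JSONized equivalent for reuse from other languages.
        with open("ST000017.json", "w") as outfile:
            mwfile.write(outfile, file_format="json")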

  9. Providing Access to a Diverse Set of Global Reanalysis Dataset Collections

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2015-12-01

    The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA, http://rda.ucar.edu) provides open access to a variety of global reanalysis dataset collections to support atmospheric and related sciences research worldwide. These include products from the European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP), National Oceanic and Atmospheric Administration (NOAA), and NCAR. All RDA-hosted reanalysis collections are freely accessible to registered users through a variety of methods. Standard access methods include traditional browser and scripted HTTP file download. Enhanced downloads are available through the Globus GridFTP "fire and forget" data transfer service, which provides an efficient, reliable, and preferred alternative to traditional HTTP-based methods. For those who favor interoperable access using compatible tools, the Unidata THREDDS Data Server provides remote access to complete reanalysis collections by virtual dataset aggregation "files". Finally, users can request data subsets and format conversions to be prepared for them through web interface form requests or web service API batch requests. This approach uses NCAR HPC and central file systems to effectively prepare products from the high-resolution and very large reanalysis archives. The presentation will include a detailed inventory of all RDA reanalysis dataset collection holdings, and highlight access capabilities to these collections through use case examples.

  10. Development of the TACOM (Tank Automotive Command) Thermal Imaging Model (TTIM). Volume 1. Technical Guide and User’s Manual.

    DTIC Science & Technology

    1984-12-01

    BLOCK DATA: Default values for variables input by menus. LIBR: Interface with frame I/O routines. SNSR: Interface with sensor routines. ATMOS: Interface with ... Routines included in the frame I/O interface: LIBR: Selects options for input or output to a data library. FRREAD: Reads frame from file and/or ...

  11. 17 CFR 232.13 - Date of filing; adjustment of filing date.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed filed on.... Eastern Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed... Daylight Savings Time, whichever is currently in effect, shall be deemed filed on the same business day. (4...

  12. 17 CFR 232.13 - Date of filing; adjustment of filing date.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed filed on.... Eastern Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed... Daylight Savings Time, whichever is currently in effect, shall be deemed filed on the same business day. (4...

  13. 17 CFR 232.13 - Date of filing; adjustment of filing date.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed filed on.... Eastern Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed... Daylight Savings Time, whichever is currently in effect, shall be deemed filed on the same business day. (4...

  14. 17 CFR 232.13 - Date of filing; adjustment of filing date.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed filed on.... Eastern Standard Time or Eastern Daylight Saving Time, whichever is currently in effect, shall be deemed... Daylight Savings Time, whichever is currently in effect, shall be deemed filed on the same business day. (4...

  15. Executing medical logic modules expressed in ArdenML using Drools.

    PubMed

    Jung, Chai Young; Sward, Katherine A; Haug, Peter J

    2012-01-01

    The Arden Syntax is an HL7 standard language for representing medical knowledge as logic statements. Despite nearly two decades of availability, Arden Syntax has not been widely used. This has been attributed to the lack of a generally available compiler to implement the logic, to Arden's complex syntax, to the challenges of mapping local data to data references in the Medical Logic Modules (MLMs), or, more globally, to the general absence of decision support in healthcare computing. An XML representation (ArdenML) may partially address the technical challenges. MLMs created in ArdenML can be converted into executable files using standard transforms written in the Extensible Stylesheet Language Transformation (XSLT) language. As an example, we have demonstrated an approach to executing MLMs written in ArdenML using the Drools business rule management system. Extensions to ArdenML make it possible to generate a user interface through which an MLM developer can test for logical errors.
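
    The conversion step amounts to applying an XSLT stylesheet to an MLM document. A sketch with a generic XSLT processor (lxml) follows; the stylesheet and file names are placeholders, not the actual transforms distributed with ArdenML.

        from lxml import etree

        transform = etree.XSLT(etree.parse("ardenml_to_drools.xslt"))  # placeholder name
        drools_tree = transform(etree.parse("my_mlm.xml"))             # MLM in ArdenML
        with open("my_mlm.drl", "wb") as out:
            out.write(etree.tostring(drools_tree))                     # executable rules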

  16. Capabilities of NASA's Space Physics Data Facility as Resources to Enable the Heliophysics Virtual discipline Observatories (VxOs)

    NASA Technical Reports Server (NTRS)

    McGuire, Robert E.; Candey, Robert M.

    2007-01-01

    SPDF now supports a broad range of data, user services, and other activities. These include: CDAWeb current multi-mission data graphics, listings, file subsetting and supersetting by time and parameters; SSCWeb and 3-D Java client orbit graphics, listings and conjunction queries; OMNIWeb 1/5/60 minute interplanetary parameters at Earth; product-level SPASE descriptions of data including holdings of nssdcftp; VSPO SPASE-based heliophysics-wide product site finding and data use; standard Data format Translation Webservices (DTWS); metrics software; and others. These data and services are available through standard user and application webservices interfaces, so middleware services such as the Heliophysics VxOs, and externally-developed clients or services, can readily leverage our data and capabilities. Beyond a short summary of the above, we will conduct the talk as a conversation about evolving VxO needs and the planned approach to leveraging such existing and ongoing services.

  17. EnergyPlus™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EnergyPlus™ was originally developed in 1999; an updated version, 8.8.0, with bug fixes was released on September 30, 2017. EnergyPlus™ is a whole building energy simulation program that engineers, architects, and researchers use to model both energy consumption—for heating, cooling, ventilation, lighting and plug and process loads—and water use in buildings. EnergyPlus is a console-based program that reads input and writes output to text files. It ships with a number of utilities including IDF-Editor for creating input files using a simple spreadsheet-like interface, EP-Launch for managing input and output files and performing batch simulations, and EP-Compare for graphically comparing the results of two or more simulations. Several comprehensive graphical interfaces for EnergyPlus are also available. DOE does most of its work with EnergyPlus using the OpenStudio® software development kit and suite of applications. DOE releases major updates to EnergyPlus twice annually.

  18. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    NASA Astrophysics Data System (ADS)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof, and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as local magnetization data selection for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronous excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing for automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.
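
    The kind of sweep expansion MAGE automates can be sketched as template substitution, emitting one Mif file per swept value. The fragment below is schematic (a real Mif needs a complete problem specification) and the field values are arbitrary.

        # Schematic MIF 2.1 fragment; doubled braces become literal braces after .format().
        MIF_TEMPLATE = """# MIF 2.1
        Specify Oxs_BoxAtlas:atlas {{ xrange {{0 100e-9}} yrange {{0 50e-9}} zrange {{0 5e-9}} }}
        Specify Oxs_FixedZeeman:applied {{ field {{ {bx} 0 0 }} }}
        """

        for i, bx in enumerate([1e4, 2e4, 5e4]):     # sweep the applied field (A/m)
            with open(f"sweep_{i}.mif", "w") as f:
                f.write(MIF_TEMPLATE.format(bx=bx))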

  19. LTCP 2D Graphical User Interface. Application Description and User's Guide

    NASA Technical Reports Server (NTRS)

    Ball, Robert; Navaz, Homayun K.

    1996-01-01

    A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.

  20. An editor for pathway drawing and data visualization in the Biopathways Workbench.

    PubMed

    Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar

    2009-10-02

    Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.

  1. O-Plan2 VS Sipe-2 - A General Comparison

    DTIC Science & Technology

    1994-07-01

    on them that are satisfied by the planner. SIPE-2 has been extended to interface with General Electric's Tachyon system for temporal reasoning... Temporal constraints in SIPE-2 are written to a file that is read by Tachyon. Tachyon processes these constraints and writes a file of narrowed time

  2. Sesame IO Library User Manual Version 8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, Hilary; Young, Ginger Ann

    This document is a user manual for SES_IO, a low-level library for reading and writing sesame files. The purpose of the SES_IO library is to provide a simple user interface for accessing and creating sesame files that does not change across sesame format types (such as binary, ASCII, and XML).

  3. Object-oriented microcomputer software for earthquake seismology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroeger, G.C.

    1993-02-01

    A suite of graphically interactive applications for the retrieval, editing, and modeling of earthquake seismograms has been developed using object-oriented programming methodology and the C++ language. Retriever is an application which allows the user to search for, browse, and extract seismic data from CD-ROMs produced by the National Earthquake Information Center (NEIC). The user can restrict the date, size, location and depth of desired earthquakes and extract selected data into a variety of common seismic file formats. Reformer is an application that allows the user to edit seismic data and data headers, and perform a variety of signal processing operations on that data. Synthesizer is a program for the generation and analysis of teleseismic P and SH synthetic seismograms. The program provides graphical manipulation of source parameters, crustal structures and seismograms, as well as near real-time response in generating synthetics for arbitrary flat-layered crustal structures. All three applications use class libraries developed for implementing geologic and seismic objects and views. Standard seismogram view objects and objects that encapsulate the reading and writing of different seismic data file formats are shared by all three applications. The focal mechanism views in Synthesizer are based on a generic stereonet view object. Interaction with the native graphical user interface is encapsulated in a class library in order to simplify the porting of the software to different operating systems and application programming interfaces. The software was developed on the Apple Macintosh and is being ported to UNIX/X-Window platforms.

  4. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive errors whose magnitudes prevent further improvement of the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
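
    For reference, the discrete-time form of a first-order Gauss-Markov process with correlation time $\tau$ (a standard textbook result, stated here for context rather than drawn from the report) is

        x_{k+1} = e^{-\Delta t/\tau}\, x_k + w_k, \qquad
        w_k \sim \mathcal{N}\!\left(0,\ \sigma^2\left(1 - e^{-2\Delta t/\tau}\right)\right),

    and the random-walk model used for the tropospheric refraction correction is the limiting case $\tau \to \infty$, in which $x_{k+1} = x_k + w_k$ with independent, constant-variance increments.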

  5. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
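
    The essence of a standard query interface over alternative models can be sketched as a registry that dispatches each point query to the first registered regional model covering it, falling back to a 1D background model otherwise. The class and method names below are illustrative, not the actual UCVM API.

        class VelocityModelRegistry:
            """Illustrative dispatcher over alternative seismic velocity models."""
            def __init__(self, background):
                self.models = []              # regional 3D models, in priority order
                self.background = background  # 1D background model

            def register(self, model):
                self.models.append(model)

            def query(self, lat, lon, depth_m):
                # Return (vp, vs, density) from the first model covering the point.
                for model in self.models:
                    if model.covers(lat, lon, depth_m):
                        return model.material_properties(lat, lon, depth_m)
                return self.background.material_properties(lat, lon, depth_m)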

  6. Reasoning about Users' Actions in a Graphical User Interface.

    ERIC Educational Resources Information Center

    Virvou, Maria; Kabassi, Katerina

    2002-01-01

    Describes a graphical user interface called IFM (Intelligent File Manipulator) that provides intelligent help to users. Explains two underlying reasoning mechanisms, one an adaptation of human plausible reasoning and one that performs goal recognition based on the effects of users' commands; and presents results of an empirical study that…

  7. Library Databases as Unexamined Classroom Technologies

    ERIC Educational Resources Information Center

    Faix, Allison

    2014-01-01

    In their 1994 article, "The Politics of the Interface: Power and its Exercise in Electronic Contact Zones," compositionists Cynthia Selfe and Richard Selfe give examples of how certain features of word processing software and other programs used in writing classrooms (including their icons, clip art, interfaces, and file structures) can…

  8. FERMI/GLAST Integrated Trending and Plotting System Release 5.0

    NASA Technical Reports Server (NTRS)

    Ritter, Sheila; Brumer, Haim; Reitan, Denise

    2012-01-01

    An Integrated Trending and Plotting System (ITPS) is a trending, analysis, and plotting system used by space missions to determine the performance and status of a spacecraft and its instruments. ITPS supports several NASA mission operational control centers, providing engineers, ground controllers, and scientists with access to the entire spacecraft telemetry data archive for the life of the mission, and includes a secure Web component for remote access. FERMI/GLAST ITPS Release 5.0 features include the option to display dates (yyyy/ddd) instead of orbit numbers along the orbital Long-Term Trend (LTT) plot axis, the ability to save statistics from daily production plots as image files, and removal of redundant edit/create Input Definition File (IDF) screens. Other features are a fix to address invalid packet lengths, a change in the naming convention of image files in order to allow use in scripts, the ability to save all ITPS plot images (from Windows or the Web) in GIF or PNG format, the ability to specify ymin and ymax on plots where previously only the desired range could be specified, Web interface capability to plot IDFs that contain out-of-order page and plot numbers, and a fix to change all default file names to show yyyydddhhmmss time stamps instead of hhmmssdddyyyy. A Web interface capability sorts files based on modification date (with the newest at the top), and the statistics block can be displayed via the Web interface. Via the Web, users can graphically view the volume of telemetry data from each day contained in the ITPS archive in the Web digest. The ITPS could also be used in non-space fields that need to plot or trend data, including financial and banking systems, aviation and transportation systems, healthcare and educational systems, sales and marketing, and housing and construction.

  9. TMS communications software. Volume 1: Computer interfaces

    NASA Technical Reports Server (NTRS)

    Brown, J. S.; Lenker, M. D.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) as well as for evaluation of the bus concept is considered. Hardware and software interfaces to the MODCOMP and NOVA minicomputers are included. The system software required to drive the interfaces in each TMS computer is described. Documentation of other software for bus statistics monitoring and for transferring files across the bus is also included.

  10. Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.

    PubMed

    Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J

    1998-01-01

    Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.

  11. An overview of the National Space Science data Center Standard Information Retrieval System (SIRS)

    NASA Technical Reports Server (NTRS)

    Shapiro, A.; Blecher, S.; Verson, E. E.; King, M. L. (Editor)

    1974-01-01

    A general overview is given of the National Space Science Data Center (NSSDC) Standard Information Retrieval System. It describes, in general terms, the information system that contains the data files and the software system that processes and manipulates the files maintained at the Data Center. Emphasis is placed on providing users with an overview of the capabilities and uses of the NSSDC Standard Information Retrieval System (SIRS). The examples given are taken from the files at the Data Center. Detailed information about NSSDC data files is documented in a set of File Users Guides, with one user's guide prepared for each file processed by SIRS. Detailed information about SIRS is presented in the SIRS Users Guide.

  12. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory (ORNL)-developed Image Transmission Network (ITN) prototype card-scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a data base of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for the image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  13. [Population dynamics of ground carabid beetles and spiders in a wheat field along the wheat-alfalfa interface and their response to alfalfa mowing].

    PubMed

    Liu, Wen-Hui; Hu, Yi-Jun; Hu, Wen-Chao; Hong, Bo; Guan, Xiao-Qing; Ma, Shi-Yu; He, Da-Han

    2014-09-01

    Taking the wheat-alfalfa and wheat-wheat interfaces as model systems, sampling points were set by the method of pitfall trapping in the wheat field at distances of 3 m, 6 m, 9 m, 12 m, 15 m, 18 m, 21 m, 24 m, and 27 m from the interface. The species composition and abundance of ground carabid beetles and spiders captured in pitfalls were investigated. The results showed that, to some extent, there was an edge effect on species diversity and abundance of ground carabid beetles and spiders along the two interfaces. A marked edge effect was observed between 15 m and 18 m along the alfalfa-wheat interface, while no edge effect was found at distances over 20 m. The edge effect along the wheat-wheat interface was weaker in comparison with the alfalfa-wheat interface. Alfalfa mowing resulted in the migration of a large number of ground carabid beetles and spiders to the adjacent wheat field. During the ten days after mowing, both the species number and the abundance of ground carabid beetles and spiders increased in the wheat field within a distance of 20 m along the alfalfa-wheat interface. The spatial distribution of species diversity of ground beetles and spiders, together with the population abundance of the dominant Chlaenius pallipes and Pardosa astrigera, were depicted, which could directly indicate the process of natural-enemy migration from alfalfa to the wheat field.

  14. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  15. Web Standard: PDF - When to Use, Document Metadata, PDF Sections

    EPA Pesticide Factsheets

    PDF files provide some benefits when used appropriately. PDF files should not be used for short documents (under 5 pages) unless retaining the format for printing is important. PDFs should have internal file metadata and meet Section 508 standards.

  16. BAOBAB: a Java editor for large phylogenetic trees.

    PubMed

    Dutheil, J; Galtier, N

    2002-06-01

    BAOBAB is a Java user interface dedicated to viewing and editing large phylogenetic trees. Original features include: (i) a colour-mediated overview of magnified subtrees; (ii) copy/cut/paste of (sub)trees within or between windows; (iii) compressing/uncompressing subtrees; and (iv) managing sequence files together with tree files. http://www.univ-montp2.fr/~genetix/.

  17. Putting Home Data Management into Perspective

    DTIC Science & Technology

    2009-12-01

    ...approaches. However, users of home and personal storage live it. Popular interfaces (e.g., iTunes, iPhoto, and even drop-down lists of recently-opened Word documents) allow users to navigate file...

  18. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  19. Globus Identity, Access, and Data Management: Platform Services for Collaborative Science

    NASA Astrophysics Data System (ADS)

    Ananthakrishnan, R.; Foster, I.; Wagner, R.

    2016-12-01

    Globus is software-as-a-service for research data management, developed at, and operated by, the University of Chicago. Globus, accessible at www.globus.org, provides high speed, secure file transfer; file sharing directly from existing storage systems; and data publication to institutional repositories. 40,000 registered users have used Globus to transfer tens of billions of files totaling hundreds of petabytes between more than 10,000 storage systems within campuses and national laboratories in the US and internationally. Web, command line, and REST interfaces support both interactive use and integration into applications and infrastructures. An important component of the Globus system is its foundational identity and access management (IAM) platform service, Globus Auth. Both Globus research data management and other applications use Globus Auth for brokering authentication and authorization interactions between end-users, identity providers, resource servers (services), and a range of clients, including web, mobile, and desktop applications, and other services. Compliant with important standards such as OAuth, OpenID, and SAML, Globus Auth provides mechanisms required for an extensible, integrated ecosystem of services and clients for the research and education community. It underpins projects such as the US National Science Foundation's XSEDE system, NCAR's Research Data Archive, and the DOE Systems Biology Knowledge Base. Current work is extending Globus services to be compliant with FedRAMP standards for security assessment, authorization, and monitoring for cloud services. We will present Globus IAM solutions and give examples of Globus use in various projects for federated access to resources. We will also describe how Globus Auth and Globus research data management capabilities enable rapid development and low-cost operations of secure data sharing platforms that leverage Globus services and integrate them with local policy and security.
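
    As context for the OAuth-based brokering described above, the following generic OAuth 2.0 client-credentials token request shows the kind of exchange such a platform mediates. The token endpoint URL and scope string are placeholders, not documented Globus Auth values.

      # Generic OAuth 2.0 client-credentials flow; endpoint and scope are placeholders.
      import requests

      TOKEN_URL = "https://auth.example.org/v2/oauth2/token"   # placeholder URL

      def get_access_token(client_id: str, client_secret: str, scope: str) -> str:
          resp = requests.post(
              TOKEN_URL,
              data={"grant_type": "client_credentials", "scope": scope},
              auth=(client_id, client_secret),   # HTTP Basic client authentication
              timeout=30,
          )
          resp.raise_for_status()
          return resp.json()["access_token"]

      # token = get_access_token("my-client-id", "my-secret", "transfer:read")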

  20. ARC SDK: A toolbox for distributed computing and data applications

    NASA Astrophysics Data System (ADS)

    Skou Andersen, M.; Cameron, D.; Lindemann, J.

    2014-06-01

    Grid middleware suites provide tools to perform the basic tasks of job submission and retrieval and data access; however, these tools tend to be low-level, operating on individual jobs or files and lacking higher-level concepts. User communities therefore generally develop their own application-layer software, catering to their specific communities' needs, on top of the Grid middleware. It is thus important for the Grid middleware to provide a friendly, well documented, and simple-to-use interface for the applications to build upon. The Advanced Resource Connector (ARC), developed by NorduGrid, provides a Software Development Kit (SDK) which enables applications to use the middleware for job and data management. This paper presents the architecture and functionality of the ARC SDK along with an example graphical application developed with the SDK. The SDK consists of a set of libraries accessible through Application Programming Interfaces (APIs) in several languages. It contains extensive documentation and example code and is available on multiple platforms. The libraries provide generic interfaces and rely on plugins to support a given technology or protocol, and this modular design makes it easy to add a new plugin if the application requires support for additional technologies. The ARC Graphical Clients package is a graphical user interface built on top of the ARC SDK and the Qt toolkit, and it is presented here as a fully functional example of an application. It provides a graphical interface to enable job submission and management at the click of a button, and allows data on any Grid storage system to be manipulated using a visual file system hierarchy, as if it were a regular file system.

  1. 7 CFR 27.14 - Filing of classification and Micronaire determination requests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Filing of classification and Micronaire determination... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.14 Filing of classification and Micronaire determination requests...

  2. 7 CFR 27.14 - Filing of classification and Micronaire determination requests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Filing of classification and Micronaire determination... STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.14 Filing of classification and Micronaire determination requests...

  3. LiPD and CSciBox: A Case Study in Why Data Standards are Important for Paleoscience

    NASA Astrophysics Data System (ADS)

    Weiss, I.; Bradley, E.; McKay, N.; Emile-Geay, J.; de Vesine, L. R.; Anderson, K. A.; White, J. W. C.; Marchitto, T. M., Jr.

    2016-12-01

    CSciBox [1] is an integrated software system that helps geoscientists build and evaluate age models. Its user chooses from a number of built-in analysis tools, composing them into an analysis workflow and applying it to paleoclimate proxy datasets. CSciBox employs modern database technology to store both the data and the analysis results in an easily accessible and searchable form, and offers the user access to the computational toolbox, the data, and the results via a graphical user interface and a sophisticated plotter. Standards are a staple of modern life, and underlie any form of automation. Without data standards, it is difficult, if not impossible, to construct effective computer tools for paleoscience analysis. The LiPD (Linked Paleo Data) framework [2] enables the storage of both data and metadata in systematic, meaningful, machine-readable ways. LiPD has been a primary enabler of CSciBox's goals of usability, interoperability, and reproducibility. Building LiPD capabilities into CSciBox's importer, for instance, eliminated the need to ask the user about file formats, variable names, relationships between columns in the input file, etc. Building LiPD capabilities into the exporter facilitated the storage of complete details about the input data (provenance, preprocessing steps, etc.) as well as full descriptions of any analyses that were performed using the CSciBox tool, along with citations to appropriate references. This comprehensive collection of data and metadata, which is all linked together in a semantically meaningful, machine-readable way, not only completely documents the analyses and makes them reproducible; it also enables interoperability with any other software system that employs the LiPD standard. [1] www.cs.colorado.edu/~lizb/cscience.html [2] McKay & Emile-Geay, Climate of the Past 12:1093 (2016)
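
    Because LiPD metadata are machine-readable, an importer can discover variable names and units instead of asking the user. The fragment below is simplified for illustration (real LiPD files nest measurement tables and carry far more metadata than shown here).

      # Reading a LiPD-like JSON metadata fragment; structure simplified for the sketch.
      import json

      FRAGMENT = """{
        "dataSetName": "ExampleLake.2016",
        "paleoData": [{"columns": [
            {"variableName": "depth", "units": "cm"},
            {"variableName": "d18O", "units": "permil"}]}]
      }"""

      meta = json.loads(FRAGMENT)
      for table in meta["paleoData"]:
          for col in table["columns"]:
              print(meta["dataSetName"], "->", col["variableName"], f"({col['units']})")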

  4. 77 FR 67424 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ... available to Participants various application programming interfaces (``APIs''),\\4\\ such as CBOE Market... certain order and trade data to the Exchange, which data the Exchange uses to conduct surveillances of its markets and Participants. \\4\\ APIs are computer programs that allow Participants to interface with the...

  5. 76 FR 10628 - Self-Regulatory Organizations; the Depository Trust Company; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-25

    ... Change Regarding Providing Participants With a New Optional Settlement Web Interface February 22, 2011... Rule Change The proposed rule change will establish a new browser-based interface, the ``Settlement Web... Browser System (``PBS'').\\4\\ Based on request from its Participants, DTC has created a more user-friendly...

  6. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  7. ISMRM Raw Data Format: A Proposed Standard for MRI Raw Datasets

    PubMed Central

    Inati, Souheil J.; Naegele, Joseph D.; Zwart, Nicholas R.; Roopchansingh, Vinai; Lizak, Martin J.; Hansen, David C.; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E.; Sørensen, Thomas S.; Hansen, Michael S.

    2015-01-01

    Purpose: This work proposes the ISMRM Raw Data (ISMRMRD) format as a common MR raw data format, which promotes algorithm and data sharing. Methods: A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using MRI scanners from four vendors, converted to ISMRMRD format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Results: Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. Conclusion: The proposed raw data format solves a practical problem for the MRI community. It may serve as a foundation for reproducible research and collaborations. The ISMRMRD format is a completely open and community-driven format, and the scientific community is invited (including commercial vendors) to participate either as users or developers. PMID:26822475

  8. Computer Program Development Specification for Ada Integrated Environment: KAPSE (Kernel Ada Programming Support Environment)/Database, Type B5, B5-AIE(1).KAPSE(1).

    DTIC Science & Technology

    1982-11-12

    [Figure 3: file I/O, program invocation, and other access and control services layered over the KAPSE/Host interface to the host operating system, peripherals, and networks.] 3.2.4.3.8.5 Transitory Windows: the TRANSITORY flag is used to prevent permanent dependence on temporary windows created simply for focusing on a part of the... KAPSE/Tool interfaces in terms of these low-level host-independent interfaces. In addition, the KAPSE/Host interface packages prevent the application...

  9. A Braille Interface to the Texas Instruments SR-52 Programmable Calculator.

    DTIC Science & Technology

    1976-09-21

    A Braille Interface to the Texas Instruments SR-52 Programmable Calculator, C. P. Janota, Technical Memorandum File No. TM 76-244, Applied Research Laboratory, The Pennsylvania State University. Keywords: aids to handicapped; Braille interface.

  10. Ground Software Maintenance Facility (GSMF) system manual

    NASA Technical Reports Server (NTRS)

    Derrig, D.; Griffith, G.

    1986-01-01

    The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (Host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modes of software structure, including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, the Perkin Elmer host, and the MITRA ATE.

  11. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

    In the following narrative, metadata required to locate a file on tape or a collection of tapes will be referred to as file-level metadata. This paper describes the rationale for and the history of the effort to define a standard for this metadata.

  12. fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages

    PubMed Central

    Hoffmann, Thomas J.; Laird, Nan M.

    2009-01-01

    The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further facilitates the developer by loading in the help files from the command line functions to provide context sensitive help to the user with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291
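
    The idea of generating an interface from a function's argument list can be sketched outside R as well. Below is a rough Python analogue (fgui itself is R/Tcl-Tk): inspect the signature, build one entry field per argument, and wire a button to the call. All names are illustrative.

      # A Python analogue of the auto-GUI idea, not fgui's actual mechanism.
      import inspect
      import tkinter as tk

      def make_gui(func):
          root = tk.Tk()
          root.title(func.__name__)
          entries = {}
          for row, name in enumerate(inspect.signature(func).parameters):
              tk.Label(root, text=name).grid(row=row, column=0)
              entry = tk.Entry(root)
              entry.grid(row=row, column=1)
              entries[name] = entry
          def run():
              print(func(**{k: e.get() for k, e in entries.items()}))
          tk.Button(root, text="Run", command=run).grid(columnspan=2)
          root.mainloop()

      def greet(name, greeting="Hello"):
          return f"{greeting}, {name}!"

      # make_gui(greet)   # opens a two-field window driving greet()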

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, R.A.

    A major goal of the Analysis of Large Data Sets (ALDS) research project at Pacific Northwest Laboratory (PNL) is to provide efficient data organization, storage, and access capabilities for statistical applications involving large amounts of data. As part of the effort to achieve this goal, a self-describing binary (SDB) data file structure has been designed and implemented together with a set of basic data manipulation functions and supporting SDB data access routines. Logical and physical data descriptors are stored in SDB files preceding the data values. SDB files thus provide a common data representation for interfacing diverse software components. This paper describes the various types of data descriptors and data structures permitted by the file design. Data buffering, file segmentation, and a segment overflow handler are also discussed.
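
    The self-describing idea can be illustrated with a toy byte layout in which each column's name, type code, and count precede its values, so a reader needs no outside documentation. The descriptor layout below is invented; the actual SDB design is only summarized in the record.

      # Toy self-describing binary file; byte layout assumed for illustration.
      import struct

      def write_sdb(path, columns):
          """columns: list of (name, typecode, values), typecode 'd' (float) or 'i' (int)."""
          with open(path, "wb") as f:
              f.write(struct.pack("<I", len(columns)))                 # number of columns
              for name, tc, values in columns:
                  encoded = name.encode()
                  f.write(struct.pack("<B", len(encoded)) + encoded)   # name descriptor
                  f.write(tc.encode())                                 # type descriptor
                  f.write(struct.pack("<I", len(values)))              # value count
                  f.write(struct.pack(f"<{len(values)}{tc}", *values))

      def read_sdb(path):
          with open(path, "rb") as f:
              (ncols,) = struct.unpack("<I", f.read(4))
              data = {}
              for _ in range(ncols):
                  (nlen,) = struct.unpack("<B", f.read(1))
                  name = f.read(nlen).decode()
                  tc = f.read(1).decode()
                  (count,) = struct.unpack("<I", f.read(4))
                  raw = f.read(count * struct.calcsize(tc))
                  data[name] = struct.unpack(f"<{count}{tc}", raw)
              return data

      write_sdb("demo.sdb", [("weight", "d", [1.5, 2.5]), ("n", "i", [3, 4])])
      print(read_sdb("demo.sdb"))   # {'weight': (1.5, 2.5), 'n': (3, 4)}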

  14. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan

    PubMed Central

    Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki

    2010-01-01

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081
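
    The XPath-to-relation mapping can be sketched briefly: walk an XML tree, record the path to each leaf value, and derive one table name per distinct path. The document and naming rule below are invented for illustration; real PDBMLplus schemas and the generated table definitions are far richer.

      # Toy XPath-to-table mapping; XML content and naming convention assumed.
      import xml.etree.ElementTree as ET
      from collections import defaultdict

      DOC = """<datablock><entity><id>1</id><type>polymer</type></entity>
      <entity><id>2</id><type>water</type></entity></datablock>"""

      def collect_paths(elem, prefix, rows):
          path = f"{prefix}/{elem.tag}"
          if elem.text and elem.text.strip():
              rows[path].append(elem.text.strip())
          for child in elem:
              collect_paths(child, path, rows)

      rows = defaultdict(list)
      collect_paths(ET.fromstring(DOC), "", rows)
      for path, values in rows.items():
          table = path.strip("/").replace("/", "_")   # e.g. datablock_entity_id
          print(f"table {table}: {values}")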

  15. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    PubMed

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/

  16. The Design of an Interface to a Programming System and MENUNIX: A Menu-Based Interface to UNIX (User Manual). Two Papers in Cognitive Engineering. Technical Report No. 8105.

    ERIC Educational Resources Information Center

    Perlman, Gary

    This report consists of two papers on MENUNIX, an experimental interface to the approximately 300 programs and files available on the Berkeley UNIX 4.0 version of the UNIX operating system. The first paper discusses some of the psychological concerns involved in the design of MENUNIX; the second is a tutorial user manual for MENUNIX, in which the…

  17. 29 CFR 402.7 - Effect of acknowledgment and filing by the Office of Labor-Management Standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-Management Standards. 402.7 Section 402.7 Labor Regulations Relating to Labor OFFICE OF LABOR-MANAGEMENT STANDARDS, DEPARTMENT OF LABOR LABOR-MANAGEMENT STANDARDS LABOR ORGANIZATION INFORMATION REPORTS § 402.7 Effect of acknowledgment and filing by the Office of Labor-Management Standards. Acknowledgment by the...

  18. 29 CFR 402.7 - Effect of acknowledgment and filing by the Office of Labor-Management Standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-Management Standards. 402.7 Section 402.7 Labor Regulations Relating to Labor OFFICE OF LABOR-MANAGEMENT STANDARDS, DEPARTMENT OF LABOR LABOR-MANAGEMENT STANDARDS LABOR ORGANIZATION INFORMATION REPORTS § 402.7 Effect of acknowledgment and filing by the Office of Labor-Management Standards. Acknowledgment by the...

  19. Pulser: user-friendly, graphical user-interface based software for controlling stimuli during data acquisition with Spike2 for Windows.

    PubMed

    Lidierth, Malcolm

    2005-02-15

    This paper describes software that runs in the Spike2 for Windows environment and provides a versatile tool for generating stimuli during data acquisition from the 1401 family of interfaces (CED, UK). A graphical user interface (GUI) is used to provide dynamic control of stimulus timing. Both single stimuli and trains of stimuli can be generated. The pulse generation routines make use of programmable variables within the interface and allow these to be rapidly changed during an experiment. The routines therefore provide the ease-of-use associated with external, stand-alone pulse generators. Complex stimulus protocols can be loaded from an external text file, and facilities are included to create these files through the GUI. The software consists of a Spike2 script that runs in the host PC and accompanying routines, written in the 1401 sequencer control code, that run in the 1401 interface. Handshaking between the PC and the interface card is built into the routines and provides for full integration of sampling, analysis, and stimulus generation during an experiment. Control of the 1401 digital-to-analogue converters is also provided; this allows control of stimulus amplitude as well as timing, and also provides a sample-and-hold feature that may be used to remove DC offsets and drift from recorded data.

  20. New capabilities in the HENP grand challenge storage access systemand its application at RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, L.; Gibbard, B.; Malon, D.

    2000-04-25

    The High Energy and Nuclear Physics Data Access Grand Challenge project has developed an optimizing storage access software system that was prototyped at RHIC. It is currently undergoing integration with the STAR experiment in preparation for data taking that starts in mid-2000. The behavior and lessons learned in the RHIC Mock Data Challenge exercises are described, as well as the observed performance under conditions designed to characterize scalability. Up to 250 simultaneous queries were tested, and up to 10 million events across 7 event components were involved in these queries. The system coordinates the staging of "bundles" of files from the HPSS tape system, so that all the needed components of each event are in disk cache when accessed by the application software. The caching policy algorithm for the coordinated bundle staging is described in the paper. The initial prototype implementation interfaced to Objectivity/DB. In this latest version, it evolved to work with arbitrary files and use CORBA interfaces to the tag database and file catalog services. The interface to the tag database and the MySQL-based file catalog services used by STAR are described, along with the planned usage scenarios.

  1. DICOM: a standard for medical imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Bidgood, W. Dean

    1993-01-01

    Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard which met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985 and revised in 1988. It is increasingly available from equipment manufacturers. The current work of the ACR-NEMA Committee has been to extend the standard to incorporate direct network connection features, and to build on standards work done by the International Standards Organization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communication in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.

  2. Standard Populations (Millions) for Age-Adjustment - SEER Population Datasets

    Cancer.gov

    Download files containing standard population data for use in statistical software. The files contain the same data distributed with SEER*Stat software. You can also view the standard populations, either 19 age groups or single ages.

  3. User Interface Design in Medical Distributed Web Applications.

    PubMed

    Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara

    2016-01-01

    User interfaces are important to facilitate easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation. The technology in the background is an important tool to accomplish this. The present work aims to create a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed, and its structure is designed using specific tools, on source files. In the second phase, a new graphic interface adaptable to different mobile terminals is proposed (using the HTML table design (TD) and CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at once. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared with the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, using HTML table design, resulting in a simpler-to-learn and easier-to-use interface suitable for healthcare services.

  4. The IEO Data Center Management System: Tools for quality control, analysis and access marine data

    NASA Astrophysics Data System (ADS)

    Casas, Antonia; Garcia, Maria Jesus; Nikouline, Andrei

    2010-05-01

    Since 1994 the Data Centre of the Spanish Oceanographic Institute (IEO) has developed systems for archiving and quality control of oceanographic data. The work started in the frame of the European Marine Science & Technology Programme (MAST) when a consortium of several Mediterranean Data Centres began to work on the MEDATLAS project. Over the years, old software modules for MS-DOS were rewritten, improved, and migrated to the Windows environment. Oceanographic data quality control now covers not only vertical profiles (mainly CTD and bottle observations) but also time series of currents and sea level observations. New powerful routines for analysis and for graphic visualization were added. Data presented originally in ASCII format were recently organized in an open source MySQL database. Nowadays, the IEO, as part of the SeaDataNet infrastructure, has designed and developed a new information system, consistent with the ISO 19115 and SeaDataNet standards, in order to manage the large and diverse marine data and information originated in Spain by different sources, and to interoperate with SeaDataNet. The system works with data stored in ASCII files (MEDATLAS, ODV) as well as data stored within the relational database. The components of the system are:
    1. MEDATLAS format and quality control. QCDAMAR (Quality Control of Marine Data) is the main set of tools for working with data presented as text files. It includes extended quality control (searching for duplicated cruises and profiles; checking date, position, ship velocity, constant profiles, spikes, density inversion, sounding, acceptable data, and impossible regional values) and input/output filters. QCMareas is a set of procedures for the quality control of tide gauge data according to the standard international Sea Level Observing System; these procedures include checking for unexpected anomalies in the time series, interpolation, filtering, and computation of basic statistics and residuals.
    2. DAMAR: a relational database (MySQL) designed to manage the wide variety of marine information, such as common vocabularies, catalogues (CSR and EDIOS), data, and metadata.
    3. Other tools for analysis and data management. Import_DB is a script to import data and metadata from the MEDATLAS ASCII files into the database. SelDamar/Selavi is an interface with the database for local and web access; it allows selective retrievals applying criteria introduced by the user, such as geographical bounds, data responsible, cruises, platform, and time periods, and also includes calculation of statistical reference values and plotting of original and mean profiles together with vertical interpolation. ExtractDAMAR is a script to extract data archived in ASCII files that meet the criteria of a user request made through the SelDamar interface and export them in ODV format, also performing unit conversion.
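
    One of the profile checks listed above, the spike test, fits in a few lines. The criterion below is a common neighbour-based formula; the threshold value and flagging convention are assumptions, not the MEDATLAS rules.

      # Neighbour-based spike test; threshold chosen arbitrarily for the demo.
      def flag_spikes(values, threshold):
          flags = [False] * len(values)
          for i in range(1, len(values) - 1):
              test = (abs(values[i] - (values[i - 1] + values[i + 1]) / 2.0)
                      - abs((values[i + 1] - values[i - 1]) / 2.0))
              flags[i] = test > threshold
          return flags

      temperature = [13.1, 13.0, 18.9, 12.8, 12.7]    # deg C; one obvious spike
      print(flag_spikes(temperature, threshold=2.0))  # [False, False, True, False, False]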

  5. The Binding Database: data management and interface design.

    PubMed

    Chen, Xi; Lin, Yuhmei; Liu, Ming; Gilson, Michael K

    2002-01-01

    The large and growing body of experimental data on biomolecular binding is of enormous value in developing a deeper understanding of molecular biology, in developing new therapeutics, and in various molecular design applications. However, most of these data are found only in the published literature and are therefore difficult to access and use. No existing public database has focused on measured binding affinities and has provided query capabilities that include chemical structure and sequence homology searches. We have created Binding DataBase (BindingDB), a public, web-accessible database of measured binding affinities. BindingDB is based upon a relational data specification for describing binding measurements via Isothermal Titration Calorimetry (ITC) and enzyme inhibition. A corresponding XML Document Type Definition (DTD) is used to create and parse intermediate files during the on-line deposition process and will also be used for data interchange, including collection of data from other sources. The on-line query interface, which is constructed with Java Servlet technology, supports standard SQL queries as well as searches for molecules by chemical structure and sequence homology. The on-line deposition interface uses Java Server Pages and JavaBean objects to generate dynamic HTML and to store intermediate results. The resulting data resource provides a range of functionality with brisk response-times, and lends itself well to continued development and enhancement.

  6. Jefferson Lab Mass Storage and File Replication Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Bird; Ying Chen; Bryan Hess

    Jefferson Lab has implemented a scalable, distributed, high performance mass storage system - JASMine. The system is entirely implemented in Java, provides access to robotic tape storage, and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day in total. This paper will describe the architecture of JASMine, discuss the rationale for building the system, and present a transparent 3rd-party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.

  7. Aquatic Acoustic Metrics Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures, such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.
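
    Computing "parameters in physical units" from a recording reduces to applying the calibration before any metric. The sketch below converts raw samples to pascals with a counts-per-pascal sensitivity, then reports an RMS sound pressure level re 1 µPa; the calibration factor and samples are invented.

      # Calibrated RMS sound pressure level; sensitivity value is a stand-in.
      import math

      def spl_db_re_1upa(samples, counts_per_pascal):
          """samples: raw integer samples; counts_per_pascal: from calibration data."""
          pascals = [s / counts_per_pascal for s in samples]
          rms = math.sqrt(sum(p * p for p in pascals) / len(pascals))
          return 20.0 * math.log10(rms / 1e-6)   # reference pressure 1 uPa

      fake_samples = [1200, -900, 1500, -1100, 800]
      print(round(spl_db_re_1upa(fake_samples, counts_per_pascal=50.0), 1), "dB re 1 uPa")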

  8. 76 FR 52032 - Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... purpose of making markets in options contracts traded on the Exchange and that is vested with the rights... Exchange currently offers an order-based market making interface for the BATS Options trading platform... to introduce a bulk-quoting interface for market makers in order to offer an additional market making...

  9. Guide to the Stand-Damage Model interface management system

    Treesearch

    George Racin; J. J. Colbert

    1995-01-01

    This programmer's support document describes the Gypsy Moth Stand-Damage Model interface management system. Management of stand-damage data made it necessary to define structures to store data and provide the mechanisms to manipulate these data. The software provides a user-friendly means to manipulate files, graph and manage outputs, and edit input data. The...

  10. Common Database Interface for Heterogeneous Software Engineering Tools.

    DTIC Science & Technology

    1987-12-01

    Keywords: database management systems; programming (computers); computer files; information transfer; interfaces. Master of Science thesis in Information Systems, Air Force Institute of Technology, Air University. Contents include a review of the literature, the System 690 configuration, database functions, software engineering environments, and the data manager.

  11. 75 FR 30081 - Self-Regulatory Organizations; NASDAQ OMX PHLX, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... Rule Change Relating to the Risk Management Interface May 24, 2010. Pursuant to Section 19(b)(1) of the... proposes to effect an information-related enhancement to the current Risk Management Feed Interface in Phlx... trade updates through a Risk Management Feed known as the ``RMP''.\\3\\ The updates include the members...

  12. SEQ-REVIEW: A tool for reviewing and checking spacecraft sequences

    NASA Astrophysics Data System (ADS)

    Maldague, Pierre F.; El-Boushi, Mekki; Starbird, Thomas J.; Zawacki, Steven J.

    1994-11-01

    A key component of JPL's strategy to make space missions faster, better, and cheaper is the Advanced Multi-Mission Operations System (AMMOS), a ground-software-intensive system currently in use and in further development. AMMOS intends to eliminate the cost of re-engineering a ground system for each new JPL mission. This paper discusses SEQ-REVIEW, a component of AMMOS that was designed to facilitate and automate the task of reviewing and checking spacecraft sequences. SEQ-REVIEW is a smart browser for inspecting files created by other sequence generation tools in the AMMOS system. It can parse sequence-related files according to a computer-readable version of a 'Software Interface Specification' (SIS), which is a standard document for defining file formats. It lets users display one or several linked files and check simple constraints using a Basic-like 'Little Language'. SEQ-REVIEW represents the first application of the Quality Function Deployment (QFD) method to sequence software development at JPL. The paper will show how the requirements for SEQ-REVIEW were defined and converted into a design based on object-oriented principles. The process starts with interviews of potential users, a small but diverse group that spans multiple disciplines and 'cultures'. It continues with the development of QFD matrices that relate product functions and characteristics to user-demanded qualities. These matrices are then turned into a formal Software Requirements Document (SRD). The process concludes with the design phase, in which the CRC (Class, Responsibility, Collaboration) approach was used to convert requirements into a blueprint for the final product.
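
    The "simple constraints" idea can be illustrated with a toy checker: parse commands into records, then test a timing rule over adjacent pairs. The record layout and the minimum-gap rule below are invented; real SIS formats and the Little Language are mission-specific.

      # Toy sequence constraint check; command layout and rule are assumptions.
      from dataclasses import dataclass

      @dataclass
      class Command:
          time: float    # seconds from sequence epoch
          name: str

      MIN_GAP = 1.0      # hypothetical rule: at least 1 s between commands

      def check_min_gap(sequence):
          violations = []
          for prev, cur in zip(sequence, sequence[1:]):
              gap = cur.time - prev.time
              if gap < MIN_GAP:
                  violations.append(f"{prev.name} -> {cur.name}: gap {gap:.2f} s")
          return violations

      seq = [Command(0.0, "HEATER_ON"), Command(0.4, "CAMERA_ON"), Command(5.0, "SLEW")]
      print(check_min_gap(seq))   # flags HEATER_ON -> CAMERA_ON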

  13. SEQ-REVIEW: A tool for reviewing and checking spacecraft sequences

    NASA Technical Reports Server (NTRS)

    Maldague, Pierre F.; El-Boushi, Mekki; Starbird, Thomas J.; Zawacki, Steven J.

    1994-01-01

    A key component of JPL's strategy to make space missions faster, better, and cheaper is the Advanced Multi-Mission Operations System (AMMOS), a ground-software-intensive system currently in use and in further development. AMMOS intends to eliminate the cost of re-engineering a ground system for each new JPL mission. This paper discusses SEQ-REVIEW, a component of AMMOS that was designed to facilitate and automate the task of reviewing and checking spacecraft sequences. SEQ-REVIEW is a smart browser for inspecting files created by other sequence generation tools in the AMMOS system. It can parse sequence-related files according to a computer-readable version of a 'Software Interface Specification' (SIS), which is a standard document for defining file formats. It lets users display one or several linked files and check simple constraints using a Basic-like 'Little Language'. SEQ-REVIEW represents the first application of the Quality Function Deployment (QFD) method to sequence software development at JPL. The paper will show how the requirements for SEQ-REVIEW were defined and converted into a design based on object-oriented principles. The process starts with interviews of potential users, a small but diverse group that spans multiple disciplines and 'cultures'. It continues with the development of QFD matrices that relate product functions and characteristics to user-demanded qualities. These matrices are then turned into a formal Software Requirements Document (SRD). The process concludes with the design phase, in which the CRC (Class, Responsibility, Collaboration) approach was used to convert requirements into a blueprint for the final product.

  14. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    The computational steps traditionally taken for most engineering analysis suites (computational fluid dynamics (CFD), structural analysis, heat transfer, etc.) are: (1) Surface Generation -- usually by employing a Computer Assisted Design (CAD) system; (2) Grid Generation -- preparing the volume for the simulation; (3) Flow Solver -- producing the results at the specified operational point; (4) Post-processing Visualization -- interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. These vendors couple directly to a number of CAD systems and are executed from within the CAD Graphical User Interface (GUI). It should be noted that the structural analysis problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to Initial Graphics Exchange Specification (IGES) or Standard Exchange Program (STEP) files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding (i.e., PLOT3D data formats). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Specifically, the problems with this procedure are: (1) File based -- Information flows from one step to the next via data files with formats specified for that procedure. File standards, when they exist, are wholly inadequate. For example, geometry from CAD systems (transmitted via IGES files) is defined as disjoint surfaces and curves (as well as masses of other information of no interest to the Grid Generator). This is particularly onerous for modern CAD systems based on solid modeling. The part was a proper solid, and in the translation to IGES it has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry -- A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million-node steady-state solution can be computed on the order of hours (using current high performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth closed solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication -- All information travels on from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase).
Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis described above, CAPRI takes a geometry-centric approach. This makes the actual geometry (not a discretized version) accessible to all phases of the analysis. The connection to the geometry is made through an Application Programming Interface (API) and NOT a file system. This API isolates the top-level applications (grid generators, solvers, and visualization components) from the geometry engine. It also allows the replacement of one geometry kernel with another without affecting these top-level applications. For example, if UniGraphics is used as the CAD package, then Parasolid (UG's own geometry engine) can be used for all geometric queries, so that no solid geometry information is lost in a translation. This is much better than STEP because, when the data is queried, the same software is executed as is used in the CAD system. Therefore, one analyzes the exact part that is in the CAD system. CAPRI uses the same idea as the commercial structural analysis codes but does not specify control. Software components of the CAD system are used, but the analysis suite, not the CAD operator, specifies the control of the software session. This also means that the license issues may be minimized and individuals need not know how to operate a CAD system in order to run the suite.
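
    The geometry-centric contrast with file exchange can be made concrete: analysis tools call a live geometry kernel through an API, so an adapted grid node can be re-snapped to the exact surface at any time. The interface below is hypothetical, not the CAPRI API.

      # Schematic of API-mediated geometry access; GeometryKernel is invented.
      from abc import ABC, abstractmethod

      class GeometryKernel(ABC):
          @abstractmethod
          def point_on_surface(self, face_id: int, u: float, v: float) -> tuple:
              """Evaluate the exact CAD surface at parametric (u, v)."""

      class ToyKernel(GeometryKernel):
          def point_on_surface(self, face_id, u, v):
              return (u, v, u * u + v * v)   # stand-in for a real CAD evaluation

      def snap_node_to_surface(kernel: GeometryKernel, face_id, u, v):
          # The geometry stays live, so node adaptation re-queries the true
          # surface instead of relying on data lost after grid generation.
          return kernel.point_on_surface(face_id, u, v)

      print(snap_node_to_surface(ToyKernel(), face_id=1, u=0.25, v=0.5))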

  15. Phast4Windows: A 3D graphical user interface for the reactive-transport simulator PHAST

    USGS Publications Warehouse

    Charlton, Scott R.; Parkhurst, David L.

    2013-01-01

    Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties—the porous media properties, the initial head and chemistry conditions, boundary conditions, and locations of wells, rivers, drains, and accounting zones—and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single phase, constant density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport.

  16. Technology Benefit Estimator (T/BEST): User's Manual

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Chamis, Christos C.; Abumeri, Galib

    1994-01-01

    The Technology Benefit Estimator (T/BEST) system is a formal method to assess advanced technologies and quantify the benefit contributions for prioritization. T/BEST may be used to provide guidelines to identify and prioritize high-payoff research areas, help manage research and limited resources, show the link between advanced concepts and the bottom line, i.e., accrued benefit and value, and to communicate credibly the benefits of research. The T/BEST software computer program is specifically designed to estimate benefits, and benefit sensitivities, of introducing new technologies into existing propulsion systems. Key engine cycle, structural, fluid, mission, and cost analysis modules are used to provide a framework for interfacing with advanced technologies. An open-ended, modular approach is used to allow for modification and addition of both key and advanced technology modules. T/BEST has a hierarchical framework that yields varying levels of benefit estimation accuracy that are dependent on the degree of input detail available. This hierarchical feature permits rapid estimation of technology benefits even when the technology is at the conceptual stage. As knowledge of the technology details increases, the accuracy of the benefit analysis increases. Included in T/BEST's framework are correlations developed from a statistical database that is relied upon if there is insufficient information given in a particular area, e.g., fuel capacity or aircraft landing weight. Statistical predictions are not required if these data are specified in the mission requirements. The engine cycle, structural, fluid, cost, noise, and emissions analyses interact with the default or user material and component libraries to yield estimates of specific global benefits: range, speed, thrust, capacity, component life, noise, emissions, specific fuel consumption, component and engine weights, pre-certification test, mission performance, engine cost, direct operating cost, life cycle cost, manufacturing cost, development cost, risk, and development time. Currently, T/BEST operates on stand-alone or networked workstations and uses a UNIX shell or script to control the operation of interfaced FORTRAN-based analyses. T/BEST's interface structure works equally well with non-FORTRAN or mixed software analyses. This interface structure is designed to maintain the integrity of the expert's analyses by interfacing with the expert's existing input and output files. Parameter input and output data (e.g., number of blades, hub diameters, etc.) are passed via T/BEST's neutral file, while copious data (e.g., finite element models, profiles, etc.) are passed via file pointers that point to the expert's analyses output files. In order to make the communications between T/BEST's neutral file and attached analysis codes simple, only two software commands, PUT and GET, are required. This simplicity permits easy access to all input and output variables contained within the neutral file. Both public domain and proprietary analysis codes may be attached with a minimal amount of effort, while maintaining full data and analysis integrity and security. T/BEST's software framework, status, beginner-to-expert operation, interface architecture, analysis module addition, and key analysis modules are discussed. Representative examples of T/BEST benefit analyses are shown.
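
    The two-command neutral-file exchange lends itself to a compact sketch. The real PUT and GET are part of a FORTRAN system whose signatures are not given in the record; this key-value stand-in only illustrates how modules share parameters through a neutral file rather than through each other's formats.

      # Stand-in for the PUT/GET neutral-file idea; storage format assumed.
      import json, os

      NEUTRAL_FILE = "tbest_neutral.json"   # placeholder for the real neutral file

      def put(name, value):
          data = {}
          if os.path.exists(NEUTRAL_FILE):
              with open(NEUTRAL_FILE) as f:
                  data = json.load(f)
          data[name] = value
          with open(NEUTRAL_FILE, "w") as f:
              json.dump(data, f)

      def get(name):
          with open(NEUTRAL_FILE) as f:
              return json.load(f)[name]

      put("number_of_blades", 22)       # e.g. written by the engine-cycle module
      print(get("number_of_blades"))    # read back by a structural module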

  17. Technology Benefit Estimator (T/BEST): User's manual

    NASA Astrophysics Data System (ADS)

    Generazio, Edward R.; Chamis, Christos C.; Abumeri, Galib

    1994-12-01

    The Technology Benefit Estimator (T/BEST) system is a formal method to assess advanced technologies and quantify the benefit contributions for prioritization. T/BEST may be used to provide guidelines to identify and prioritize high-payoff research areas, help manage research and limited resources, show the link between advanced concepts and the bottom line, i.e., accrued benefit and value, and to communicate credibly the benefits of research. The T/BEST software computer program is specifically designed to estimate benefits, and benefit sensitivities, of introducing new technologies into existing propulsion systems. Key engine cycle, structural, fluid, mission, and cost analysis modules are used to provide a framework for interfacing with advanced technologies. An open-ended, modular approach is used to allow for modification and addition of both key and advanced technology modules. T/BEST has a hierarchical framework that yields varying levels of benefit estimation accuracy that are dependent on the degree of input detail available. This hierarchical feature permits rapid estimation of technology benefits even when the technology is at the conceptual stage. As knowledge of the technology details increases, the accuracy of the benefit analysis increases. Included in T/BEST's framework are correlations developed from a statistical database that is relied upon if there is insufficient information given in a particular area, e.g., fuel capacity or aircraft landing weight. Statistical predictions are not required if these data are specified in the mission requirements. The engine cycle, structural, fluid, cost, noise, and emissions analyses interact with the default or user material and component libraries to yield estimates of specific global benefits: range, speed, thrust, capacity, component life, noise, emissions, specific fuel consumption, component and engine weights, pre-certification test, mission performance, engine cost, direct operating cost, life cycle cost, manufacturing cost, development cost, risk, and development time. Currently, T/BEST operates on stand-alone or networked workstations and uses a UNIX shell or script to control the operation of interfaced FORTRAN-based analyses. T/BEST's interface structure works equally well with non-FORTRAN or mixed software analyses. This interface structure is designed to maintain the integrity of the expert's analyses by interfacing with the expert's existing input and output files. Parameter input and output data (e.g., number of blades, hub diameters, etc.) are passed via T/BEST's neutral file, while copious data (e.g., finite element models, profiles, etc.) are passed via file pointers that point to the expert's analyses output files. In order to make the communications between T/BEST's neutral file and attached analysis codes simple, only two software commands, PUT and GET, are required. This simplicity permits easy access to all input and output variables contained within the neutral file. Both public domain and proprietary analysis codes may be attached with a minimal amount of effort, while maintaining full data and analysis integrity and security.

  18. Media independent interface. Interface control document

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A Media Independent Interface (MII) is specified, using current standards in the industry. The MII is described in hierarchical fashion. At the base are IEEE/International Standards Organization (ISO) documents (standards) which describe the functionality of the software modules or layers and their interconnection. These documents describe primitives which are to transcent the MII. The intent of the MII is to provide a universal interface to one or more Media Access Contols (MACs) for the Logical Link Controller and Station Manager. This interface includes both a standardized electrical and mechanical interface and a standardized functional specification which defines the services expected from the MAC.

  19. 78 FR 44557 - Revision to Transmission Vegetation Management Reliability Standard; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-24

    ... Transmission Vegetation Management Reliability Standard; Notice of Compliance Filing Take notice that on July 12, 2013, the North American Electric Reliability Corporation (NERC), pursuant to Order No. 777 \\1... Reliability Standard FAC-003-2 to its Web site. \\1\\ Revisions to Reliability Standard for Transmission...

  20. 78 FR 55249 - Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-10

    ...; RM11-16-000] Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing Take.... \\1\\ Transmission Relay Loadability Reliability Standard, Order No. 733, 130 FERC ¶ 61,221 (2010..., Order No. 733-B, 136 FERC ¶ 61,185 (2011). \\2\\ Transmission Relay Loadability Reliability Standard, 138...

  1. 78 FR 21929 - Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... Relay Loadability Reliability Standard; Notice of Compliance Filing Take notice that on February 19... Relay Loadability Reliability Standard, Order No. 733, 130 FERC ¶ 61,221 (2010) (Order No. 733); order..., 136 FERC ¶ 61,185 (2011). \\2\\ Transmission Relay Loadability Reliability Standard, 138 FERC ¶ 61,197...

  2. SEGY to ASCII Conversion and Plotting Program 2.0

    USGS Publications Warehouse

    Goldman, Mark R.

    2005-01-01

    INTRODUCTION: SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a PostScript file of the seismic data using a general plotting package (GMT; Wessel and Smith, 1995). The resulting PostScript file may be plotted by any standard PostScript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces, and each trace must have the same sample interval and number of samples. SAC will read several common variants of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are provided to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters, including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a PostScript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large PostScript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator® can manipulate the image more efficiently.
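
    The IBM-versus-IEEE distinction mentioned above is the classic portability trap in SEGY data: legacy files carry base-16 IBM System/360 floats rather than IEEE 754. The conversion is mechanical; the sketch below is an illustration, not code from SAC.

        import struct

        def ibm32_to_float(b: bytes) -> float:
            """Convert a 4-byte IBM single-precision float to a Python float.
            IBM format: 1 sign bit, 7-bit base-16 exponent (excess 64),
            24-bit fraction."""
            (word,) = struct.unpack(">I", b)
            sign = -1.0 if word >> 31 else 1.0
            exponent = (word >> 24) & 0x7F
            fraction = (word & 0x00FFFFFF) / float(1 << 24)
            return sign * fraction * 16.0 ** (exponent - 64)

        # 0x42640000 encodes 100.0: 16**(0x42-64) = 256, 0x640000/2**24 = 0.390625
        print(ibm32_to_float(bytes.fromhex("42640000")))  # 100.0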

  3. The new version 2.12 of BKG Ntrip Client (BNC)

    NASA Astrophysics Data System (ADS)

    Stürze, Andrea; Mervart, Leos; Weber, Georg; Rülke, Axel; Wiesensarter, Erwin; Neumaier, Peter

    2016-04-01

    A new version of the BKG Ntrip Client (BNC) has been released. Originally developed in a cooperation between the Federal Agency for Cartography and Geodesy (BKG) and the Czech Technical University (CTU) with a focus on multi-stream real-time access to GPS observations, the software has once again been substantially extended. Promoting Open Standards as recommended by the Radio Technical Commission for Maritime Services (RTCM) remains the prime subject. Besides its Graphical User Interface (GUI), the real-time software for Windows, Linux, and Mac platforms now comes with a complete Command Line Interface (CLI) and considerable post-processing functionality. RINEX Version 3 file editing & Quality Check (QC) with full support of Galileo, BeiDou, and SBAS - besides GPS and GLONASS - is part of the new features. Comparison of satellite orbit/clock files in SP3 format is another new capability of BNC. Simultaneous multi-station Precise Point Positioning (PPP) for real-time displacement monitoring of entire reference station networks is one more recent addition to BNC. Implemented RTCM messages for PPP (under development) comprise satellite orbit and clock corrections, code and phase observation biases, and the Vertical Total Electron Content (VTEC) of the ionosphere. The well-established, mature codebase is mostly written in the C++ language. Its publication under the GNU GPL is thought to be well-suited for test, validation, and demonstration of new approaches in precise real-time satellite navigation when IP streaming is involved. The poster highlights BNC features which are new in version 2.12 and beneficial to IAG institutions and services such as IGS/RT-IGS and to the interested public in general.

  4. Mars Express Forward Link Capabilities for the Mars Relay Operations Service (MaROS)

    NASA Technical Reports Server (NTRS)

    Allard, Daniel A.; Wallick, Michael N.; Gladden, Roy E.; Wang, Paul

    2012-01-01

    This software provides a new capability for landed Mars assets to perform forward-link relay through the Mars Express (MEX) orbiter of the European Space Agency (ESA). It solves the problem of standardizing the relay interface between lander missions and MEX. The Mars Relay Operations Service (MaROS) is intended as a central point for relay planning and post-pass analysis for all Mars landed and orbital assets. Through the first two phases of implementation, MaROS supports relay coordination through the Odyssey orbiter and the Mars Reconnaissance Orbiter (MRO). With this new software, MaROS now fully integrates the Mars Express spacecraft into the relay picture. The software generates and manages a new set of file formats that allows relay requests to MEX for forward- and return-link relay, including the parameters specific to MEX. Previously, MEX relay planning interactions were performed via email exchanges and point-to-point file transfers. By integrating MEX into MaROS, all transactions are managed by a centralized service for tracking and analysis. Additionally, all lander missions have a single, shared interface with MEX and do not have to integrate on a mission-by-mission basis. Relay is a critical element of Mars lander data management. Landed assets depend largely upon orbital relay for data delivery, which can be impacted by the availability and health of each orbiter in the network. At any time, an issue may arise that prevents relay. For this reason, it is imperative that all possible orbital assets be integrated into the overall relay picture.

  5. High-performance web viewer for cardiac images

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of conventional film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is to display dynamic images with the appropriate color palette and frame rate used in the acquisition process by Cath, Angio, and Echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser, and provides a set of resources that allows the viewing of still and dynamic images. It can access image files from local disks or over a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, export of DICOM images to other image formats, and many others, all behind a pleasant, user-friendly interface running inside a Web browser as a Java application. This approach offers several advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.
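
    As a sketch of the kind of file handling such a viewer performs (this is not the authors' code, and the file name is hypothetical), the pydicom library can read a multi-frame DICOM cine file and expose its frames and acquisition frame rate:

        import pydicom

        # Minimal sketch (not the authors' viewer): read a multi-frame DICOM
        # cine file and report its frames and acquisition rate.
        ds = pydicom.dcmread("angio_run.dcm")
        frames = ds.pixel_array               # (n_frames, rows, cols) when multi-frame
        fps = float(ds.get("CineRate", 15))   # frame rate recorded at acquisition

        print(f"{frames.shape[0]} frames of {ds.Rows}x{ds.Columns} pixels at {fps} fps")
        # A viewer would hand successive frames to a display widget every 1/fps seconds.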

  6. Training system for digital mammographic diagnoses of breast cancer

    NASA Astrophysics Data System (ADS)

    Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.

    2013-03-01

    As technology evolves, analog mammography systems are being replaced by digital systems. The digital system uses video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. The change in the way mammographic images are visualized may require a different approach to training health care professionals to diagnose breast cancer with digital mammography. Thus, this paper presents a computational approach to training health care professionals that provides a smooth transition between analog and digital technology and also trains them to use the advantages of digital image processing tools to diagnose breast cancer. This computational approach consists of software in which it is possible to open, process, and diagnose a full mammogram case from a database containing the digital images of each of the mammographic views. The software communicates with a gold-standard digital mammogram case database. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. There are also digital image processing tools that can be used to provide better visualization of each single image. The software was built on a minimalist, user-friendly interface concept that might help in the smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing result feedback. The system has already been completed, but has not yet been applied to professional training.

  7. Spatial Data Transfer Standard (SDTS), part 3 : ISO 8211 encoding

    DOT National Transportation Integrated Search

    1997-11-20

    The ISO 8211 encoding provides a representation of a Spatial Data Transfer Standard (SDTS) file set in a standardized method enabling the file set to be exported to or imported from different media by general purpose ISO 8211 software.

  8. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
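
    MMTAT's own API is not reproduced here; as a generic illustration of the kind of link calculation such a tool automates, the sketch below estimates received power from the standard free-space path-loss formula. All parameter values are assumptions for illustration, not MMTAT defaults.

        import math

        def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
            """Standard free-space path loss:
            20*log10(d_km) + 20*log10(f_GHz) + 92.45 dB."""
            return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

        # Hypothetical X-band downlink; EIRP and receive gain are assumed values.
        eirp_dbw = 60.0     # transmitter EIRP
        rx_gain_dbi = 68.0  # ground-station antenna gain
        dist_km = 2.0e8     # spacecraft range
        f_ghz = 8.42        # X-band frequency

        received_dbw = eirp_dbw + rx_gain_dbi - free_space_path_loss_db(dist_km, f_ghz)
        print(f"received power: {received_dbw:.1f} dBW")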

  9. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D. G.

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  10. EXODUS II: A finite element data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition) and postprocessing (results visualization), as well as for code-to-code data transfer. An EXODUS II data file is a random-access, machine-independent, binary file that is written and read via the C, C++, or Fortran library routines which comprise the Application Programming Interface (API).
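
    EXODUS II files are implemented on top of netCDF, so even without the official API bindings a file can be inspected with generic netCDF tools. A minimal sketch follows; the file name is hypothetical, and variable names vary between EXODUS storage conventions, hence the guard.

        from netCDF4 import Dataset

        # Inspect an EXODUS II file through the generic netCDF layer.
        with Dataset("mesh_results.exo") as nc:
            print("dimensions:", {k: len(v) for k, v in nc.dimensions.items()})
            # Typical EXODUS dimensions include num_nodes, num_elem, num_el_blk.
            coords = nc.variables.get("coordx")   # x-coordinates in newer files
            if coords is not None:
                print("first node x-coordinates:", coords[:5])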

  11. Mars Reconnaissance Orbiter Uplink Analysis Tool

    NASA Technical Reports Server (NTRS)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; Hwang, Pauline

    2008-01-01

    This software analyzes Mars Reconnaissance Orbiter (MRO) orbital geometry with respect to Mars Exploration Rover (MER) contact windows, and is the first tool of its kind designed specifically to support MRO-MER interface coordination. Prior to this automated tool, the analysis was done manually with Excel and the UNIX command line, and the process would take approximately 30 minutes for each analysis. The current automated analysis takes less than 30 seconds. The tool resides on the flight machine and uses a PHP interface that performs the entire analysis of the input files and takes into account one-way light time from another input file. Input files are copied over to the proper directories and are dynamically read into the tool's interface. The user can then choose the corresponding input files based on the time frame desired for analysis. After submission of the Web form, the tool merges the two files into a single, time-ordered listing of events for both spacecraft. The times are converted to the same reference time (Earth Transmit Time) by reading in a light time file and performing the calculations necessary to shift the time formats. The program can also vary the size of the keep-out window on the main page of the analysis tool by accepting a custom time for padding each MRO event time. The parameters on the form are read in and passed to the second page for analysis. Everything is fully coded in PHP and can be accessed by anyone with access to the machine via a Web page. This uplink tool will continue to be used for the duration of the MER mission's needs for X-band uplinks. Future missions can also use the tool to check overflight times as well as potential site observation times. Adaptation of the input files to the proper format, and of the window keep-out times, would allow for other analyses. Any operations task that uses the idea of keep-out windows will have a use for this program.
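
    The core of the analysis is a windowed comparison of two time-ordered event lists. Below is a minimal sketch of that keep-out check, not the PHP tool itself; event times are assumed to be seconds in a shared reference time scale.

        # Flag events from one spacecraft that fall within a padding window
        # around events of the other (brute-force version for clarity).
        def conflicts(mro_events, mer_events, pad_s=300.0):
            """Return (mro_time, mer_time) pairs closer together than pad_s."""
            hits = []
            for t_mro in mro_events:
                for t_mer in mer_events:
                    if abs(t_mro - t_mer) < pad_s:
                        hits.append((t_mro, t_mer))
            return hits

        print(conflicts([1000.0, 5000.0], [1100.0, 9000.0]))  # [(1000.0, 1100.0)]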

  12. User's and reference guide to the INEL RML/analytical radiochemistry sample tracking database version 1.00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Femec, D.A.

    This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.

  13. Adding Data Management Services to Parallel File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Scott

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades, the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce's assumptions by taking advantage of the semantics of structured data while preserving MapReduce's failure and resource management; DataMods, which extends common abstractions of parallel file systems so they become programmable, such that they can be extended to natively support a variety of data models and can be hooked into emerging distributed runtimes such as Stanford's Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the data warehouse.

  14. SDAI: a key piece of software to manage the new wideband backend at Robledo

    NASA Astrophysics Data System (ADS)

    Rizzo, J. R.; Gutiérrez Bustos, M.; Kuiper, T. B. H.; Cernicharo, J.; Sotuela, I.; Pedreira, A.

    2012-09-01

    A joint collaborative project was recently developed to provide the Madrid Deep Space Communications Complex with a state-of-the-art wideband backend. This new backend provides from 100 MHz to 6 GHz of instantaneous bandwidth, and spectral resolutions from 6 to 200 kHz. The backend includes a new intermediate-frequency processor, as well as an FPGA-based FFT spectrometer, which manages thousands of spectroscopic channels in real time. All of this equipment needs to be controlled and operated by common software, which has to synchronize activities among the affected devices and also with the observing program. The final output should be a calibrated spectrum, readable by standard radio astronomical tools for further processing. The software developed to this end is named "Spectroscopic Data Acquisition Interface" (SDAI). SDAI is written in Python 2.5, using PyQt4 for the user interface. Through an Ethernet socket connection, SDAI receives astronomical information (source, frequencies, Doppler correction, etc.) and the antenna status from the observing program. It then synchronizes the observations at the required frequency by tuning the synthesizers through their USB ports; finally, SDAI controls the FFT spectrometers through UDP commands sent over sockets. Data are transmitted from the FFT spectrometers over TCP sockets and written as standard FITS files. In this paper we describe the modules built, depict a typical observing session, and show some astronomical results using SDAI.
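
    The control pattern described above (commands out over UDP, bulk data back over TCP) can be sketched generically as follows. The command string, address, and ports are hypothetical; this is not the actual SDAI protocol.

        import socket

        SPECTROMETER = ("192.0.2.10", 50000)   # documentation address, assumed port

        # Fire a configuration command over UDP (fire-and-forget).
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
            udp.sendto(b"START INTEGRATION 10s", SPECTROMETER)

        # Collect the resulting spectrum over a TCP stream.
        with socket.create_connection((SPECTROMETER[0], 50001), timeout=30) as tcp:
            chunks = []
            while True:
                data = tcp.recv(65536)
                if not data:
                    break
                chunks.append(data)
        raw_spectrum = b"".join(chunks)
        print(f"received {len(raw_spectrum)} bytes")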

  15. Porting and redesign of Geotool software system to Qt

    NASA Astrophysics Data System (ADS)

    Miljanovic Tamarit, V.; Carneiro, L.; Henson, I. H.; Tomuta, E.

    2016-12-01

    Geotool is a software system that allows a user to interactively display and process seismoacoustic data from International Monitoring System (IMS) stations. Geotool can be used to perform a number of analysis and review tasks, including data I/O, waveform filtering, quality control, component rotation, amplitude and arrival measurement and review, array beamforming, correlation, Fourier analysis, FK analysis, event review and location, particle motion visualization, polarization analysis, instrument response convolution/deconvolution, real-time display, signal-to-noise measurement, spectrograms, and travel time model display. The Geotool program was originally written in C using the X11/Xt/Motif libraries for graphics. It was later ported to C++. Now the program is being ported to the Qt graphics system to be more compatible with the other software in the International Data Centre (IDC). Along with this port, a redesign of the architecture is underway to achieve a separation between user interface, control, and data model elements, in line with design patterns such as Model-View-Controller. Qt is a cross-platform application framework that will allow Geotool to easily run on Linux, Mac, and Windows. The Qt environment includes modern libraries and user interfaces for standard utilities such as file and database access, printing, and inter-process communication. The Qt Widgets for Technical Applications (QWT) library provides tools for displaying standard data analysis graphics.

  16. 40 CFR 60.541 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operation means the operation identified as Michelin-A in the Emission Standards and Engineering Division... identified as Michelin-B in the Emission Standards and Engineering Division confidential file as referenced... Michelin-C-automatic in the Emission Standards and Engineering Division confidential file as referenced in...

  17. 40 CFR 60.541 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operation means the operation identified as Michelin-A in the Emission Standards and Engineering Division... identified as Michelin-B in the Emission Standards and Engineering Division confidential file as referenced... Michelin-C-automatic in the Emission Standards and Engineering Division confidential file as referenced in...

  18. 40 CFR 60.541 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operation means the operation identified as Michelin-A in the Emission Standards and Engineering Division... identified as Michelin-B in the Emission Standards and Engineering Division confidential file as referenced... Michelin-C-automatic in the Emission Standards and Engineering Division confidential file as referenced in...

  19. 40 CFR 60.541 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operation means the operation identified as Michelin-A in the Emission Standards and Engineering Division... identified as Michelin-B in the Emission Standards and Engineering Division confidential file as referenced... Michelin-C-automatic in the Emission Standards and Engineering Division confidential file as referenced in...

  20. Design Foundations for Content-Rich Acoustic Interfaces: Investigating Audemes as Referential Non-Speech Audio Cues

    ERIC Educational Resources Information Center

    Ferati, Mexhid Adem

    2012-01-01

    To access interactive systems, blind and visually impaired users can leverage their auditory senses by using non-speech sounds. The current structure of non-speech sounds, however, is geared toward conveying user interface operations (e.g., opening a file) rather than large theme-based information (e.g., a history passage) and, thus, is ill-suited…

  1. User Manual for the Data-Series Interface of the Gr Application Software

    USGS Publications Warehouse

    Donovan, John M.

    2009-01-01

    This manual describes the data-series interface for the Gr Application software. Basic tasks such as plotting, editing, manipulating, and printing data series are presented. The properties of the various types of data objects and graphical objects used within the application, and the relationships between them also are presented. Descriptions of compatible data-series file formats are provided.

  2. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...

  3. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...

  4. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...

  5. 77 FR 74033 - Standard Mail Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ..., and a public Excel file. Attachment 1 is an application for non-public treatment of material filed... public Excel file contains redacted financial documentation. Id. at 2. III. Notice of Filing and Related...

  6. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented in iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) input is provided on one or more ASCII text input files; (2) output is returned to one or more ASCII text output files; (3) the model is run using a system command (executable or script/batch file); and (4) the model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
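
    The four protocol requirements listed above reduce to a substitute-run-extract loop around a black-box executable. The sketch below shows that loop under assumed names; the marker syntax, file names, and executable are hypothetical, and the actual PEST template and instruction file formats are richer than this.

        import re
        import subprocess

        def run_model(permeability: float) -> float:
            """One forward run of a black-box model, PEST-protocol style."""
            # 1. Substitute the parameter value into the model's ASCII input file.
            with open("model.tpl") as f:            # template contains "@PERM@"
                template = f.read()
            with open("model.inp", "w") as f:
                f.write(template.replace("@PERM@", f"{permeability:.6e}"))

            # 2. Run the model to completion with a system command.
            subprocess.run(["./forward_model", "model.inp"], check=True)

            # 3. Extract the observed variable from the ASCII output file.
            with open("model.out") as f:
                match = re.search(r"FINAL PRESSURE\s*=\s*([-+\dEe.]+)", f.read())
            return float(match.group(1))

        print(run_model(1.0e-14))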

  7. IDG - INTERACTIVE DIF GENERATOR

    NASA Technical Reports Server (NTRS)

    Preheim, L. E.

    1994-01-01

    The Interactive DIF Generator (IDG) utility is a tool used to generate and manipulate Directory Interchange Format files (DIF). Its purpose as a specialized text editor is to create and update DIF files which can be sent to NASA's Master Directory, also referred to as the International Global Change Directory at Goddard. Many government and university data systems use the Master Directory to advertise the availability of research data. The IDG interface consists of a set of four windows: (1) the IDG main window; (2) a text editing window; (3) a text formatting and validation window; and (4) a file viewing window. The IDG main window starts up the other windows and contains a list of valid keywords. The keywords are loaded from a user-designated file and selected keywords can be copied into any active editing window. Once activated, the editing window designates the file to be edited. Upon switching from the editing window to the formatting and validation window, the user has options for making simple changes to one or more files such as inserting tabs, aligning fields, and indenting groups. The viewing window is a scrollable read-only window that allows fast viewing of any text file. IDG is an interactive tool and requires a mouse or a trackball to operate. IDG uses the X Window System to build and manage its interactive forms, and also uses the Motif widget set and runs under Sun UNIX. IDG is written in C-language for Sun computers running SunOS. This package requires the X Window System, Version 11 Revision 4, with OSF/Motif 1.1. IDG requires 1.8Mb of hard disk space. The standard distribution medium for IDG is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The program was developed in 1991 and is a copyrighted work with all copyright vested in NASA. SunOS is a trademark of Sun Microsystems, Inc. X Window System is a trademark of Massachusetts Institute of Technology. OSF/Motif is a trademark of the Open Software Foundation, Inc. UNIX is a trademark of Bell Laboratories.

  8. Entropy based file type identification and partitioning

    DTIC Science & Technology

    2017-06-01

    energy spectrum,” Proceedings of the Twenty-Ninth International Florida Artificial Intelligence Research Society Conference, pp. 288–293, 2016...ABBREVIATIONS AES Advanced Encryption Standard ANN Artificial Neural Network ASCII American Standard Code for Information Interchange CWT...the identification of file types and file partitioning. This approach has applications in cybersecurity as it allows for a quick determination of
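
    The central quantity in entropy-based file type identification is the Shannon entropy of a file's byte distribution. A minimal sketch follows (the file name is hypothetical): constant data scores 0 bits per byte, while uniformly random data, such as encrypted or well-compressed content, approaches 8.

        import math
        from collections import Counter

        def byte_entropy(path: str) -> float:
            """Shannon entropy of a file's bytes, in bits per byte."""
            data = open(path, "rb").read()
            n = len(data)
            if n == 0:
                return 0.0
            counts = Counter(data)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # Typical values: ASCII text ~4-5 bits/byte; AES-encrypted data ~8.
        print(f"{byte_entropy('sample.bin'):.3f} bits/byte")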

  9. 15 CFR 30.5 - Electronic Export Information filing application and certification processes and standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the USPPI in the form of a power of attorney or written authorization, thus making them authorized... agencies participating in the AES postdeparture filing review process. Failure to meet the standards of the...) calendar days of receipt of the postdeparture filing application by the Census Bureau, or if a decision...

  10. The development of an imaging informatics-based multi-institutional platform to support sports performance and injury prevention in track and field

    NASA Astrophysics Data System (ADS)

    Liu, Joseph; Wang, Ximing; Verma, Sneha; McNitt-Gray, Jill; Liu, Brent

    2018-03-01

    The main goal of sports science and performance enhancement is to collect video and image data, process them, and quantify the results, giving insight to help athletes improve technique. For the long jump in track and field, the processed output of video with force vector overlays and force calculations allows coaches to view specific stages of the hop, step, and jump, and identify how each stage can be improved to increase jump distance. Outputs also provide insight into how athletes can better maneuver to prevent injury. Currently, each data collection site collects and stores data with its own methods. There is no standard for data collection, formats, or storage. Video files and quantified results are stored in different formats, structures, and locations such as Dropbox and hard drives. Using imaging informatics-based principles, we can develop a platform for multiple institutions that promotes the standardization of sports performance data. In addition, the system will provide user authentication and privacy as in clinical trials, with specific user access rights. Long jump data collected from different field sites will be standardized into specified formats before database storage. Quantified results from image-processing algorithms are stored similarly to CAD algorithm results. The system will streamline the current sports performance data workflow and provide a user interface for athletes and coaches to view results of individual collections and also longitudinally across different collections. This streamlined platform and interface is a tool for coaches and athletes to easily access and review data to improve sports performance and prevent injury.

  11. A MATLAB-based graphical user interface program for computing functionals of the geopotential up to ultra-high degrees and orders

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj

    2013-07-01

    We present GrafLab (GRAvity Field LABoratory), a novel graphical user interface program for spherical harmonic synthesis (SHS) created in MATLAB®. The program allows the user to compute 38 different functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); and (iii) extended-range arithmetic (up to an arbitrary maximum degree). For maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e., the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we apply the lumped coefficients approach because of its significant time-efficiency. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, the commission errors of the functionals can be computed. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
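
    For reference, the forward column recursion for the fully normalized associated Legendre functions mentioned above can be written, in LaTeX notation, as follows. These are the standard recurrences used in geodesy, given here only as background; they are not reproduced from the paper itself.

        \bar{P}_{0,0}(t) = 1, \qquad
        \bar{P}_{1,1}(t) = \sqrt{3}\,u, \qquad
        \bar{P}_{m,m}(t) = u \sqrt{\frac{2m+1}{2m}}\, \bar{P}_{m-1,m-1}(t) \quad (m \ge 2),

        \bar{P}_{n,m}(t) = a_{n,m}\, t\, \bar{P}_{n-1,m}(t) - b_{n,m}\, \bar{P}_{n-2,m}(t) \quad (n > m),

        a_{n,m} = \sqrt{\frac{(2n-1)(2n+1)}{(n-m)(n+m)}}, \qquad
        b_{n,m} = \sqrt{\frac{(2n+1)(n+m-1)(n-m-1)}{(n-m)(n+m)(2n-3)}},

    with t = \cos\theta and u = \sin\theta for polar angle \theta. For n - m = 1 the coefficient b_{n,m} vanishes, so the recursion starts cleanly from the sectorial seed; the underflow of the u^m factor at high degree near the poles is what motivates the extended-range arithmetic option.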

  12. Executing medical logic modules expressed in ArdenML using Drools

    PubMed Central

    Jung, Chai Young; Sward, Katherine A

    2011-01-01

    The Arden Syntax is an HL7 standard language for representing medical knowledge as logic statements. Despite nearly 2 decades of availability, Arden Syntax has not been widely used. This has been attributed to the lack of a generally available compiler to implement the logic, to Arden's complex syntax, to the challenges of mapping local data to data references in the Medical Logic Modules (MLMs), or, more globally, to the general absence of decision support in healthcare computing. An XML representation (ArdenML) may partially address the technical challenges. MLMs created in ArdenML can be converted into executable files using standard transforms written in the Extensible Stylesheet Language Transformation (XSLT) language. As an example, we have demonstrated an approach to executing MLMs written in ArdenML using the Drools business rule management system. Extensions to ArdenML make it possible to generate a user interface through which an MLM developer can test for logical errors. PMID:22180871
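
    The transform step described above is ordinary XSLT processing. A minimal sketch in Python using lxml follows; the file names are hypothetical, and the stylesheet itself would come from an ArdenML distribution rather than being written from scratch.

        from lxml import etree

        # Apply an XSLT stylesheet to an XML-encoded MLM to generate rule text.
        mlm = etree.parse("dosage_check.mlm.xml")          # MLM in ArdenML (XML)
        transform = etree.XSLT(etree.parse("ardenml_to_drools.xsl"))
        rules = transform(mlm)                             # serialized Drools output

        with open("dosage_check.drl", "w") as f:
            f.write(str(rules))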

  13. IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1994-01-01

    The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
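
    The operator-history idea described above, where every derived file records the chain of operators that produced it, can be sketched as follows. This is an illustration of the design pattern, not IDSP itself.

        import numpy as np

        class Series:
            """A time series that carries the history of operators applied to it."""
            def __init__(self, data, history=()):
                self.data = np.asarray(data, dtype=float)
                self.history = tuple(history)

            def apply(self, name, func):
                """Apply an operator and record it in the new series' history."""
                return Series(func(self.data), self.history + (name,))

        raw = Series(np.sin(np.linspace(0, 20, 512)) + 0.1 * np.random.randn(512))
        detrended = raw.apply("DETREND", lambda x: x - x.mean())
        spectrum = detrended.apply("FFT_MAG", lambda x: np.abs(np.fft.rfft(x)))
        print(spectrum.history)   # ('DETREND', 'FFT_MAG')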

  14. Accessing Data Federations with CVMFS

    DOE PAGES

    Weitzel, Derek; Bockelman, Brian; Dykstra, Dave; ...

    2017-11-23

    Data federations have become an increasingly common tool for large collaborations such as CMS and ATLAS to efficiently distribute large data files. Unfortunately, these typically are implemented with weak namespace semantics and a non-POSIX API. On the other hand, CVMFS has provided a POSIX-compliant read-only interface for use cases with a small working set size (such as software distribution). The metadata required for the CVMFS POSIX interface is distributed through a caching hierarchy, allowing it to scale to the level of about a hundred thousand hosts. In this paper, we will describe our contributions to CVMFS that merge the data scalability of XRootD-based data federations (such as AAA) with the metadata scalability and POSIX interface of CVMFS. We modified CVMFS so it can serve unmodified files without copying them to the repository server. CVMFS 2.2.0 is also able to redirect requests for data files to servers outside of the CVMFS content distribution network. Finally, we added the ability to manage authorization and authentication using security credentials such as X509 proxy certificates. We combined these modifications with the OSG's StashCache regional XRootD caching infrastructure to create a cached data distribution network. Here, we will show performance metrics accessing the data federation through CVMFS compared to direct data federation access. Additionally, we will discuss the improved user experience of providing access to a data federation through a POSIX filesystem.

  15. Accessing Data Federations with CVMFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weitzel, Derek; Bockelman, Brian; Dykstra, Dave

    Data federations have become an increasingly common tool for large collaborations such as CMS and ATLAS to efficiently distribute large data files. Unfortunately, these typically are implemented with weak namespace semantics and a non-POSIX API. On the other hand, CVMFS has provided a POSIX-compliant read-only interface for use cases with a small working set size (such as software distribution). The metadata required for the CVMFS POSIX interface is distributed through a caching hierarchy, allowing it to scale to the level of about a hundred thousand hosts. In this paper, we will describe our contributions to CVMFS that merge the data scalability of XRootD-based data federations (such as AAA) with the metadata scalability and POSIX interface of CVMFS. We modified CVMFS so it can serve unmodified files without copying them to the repository server. CVMFS 2.2.0 is also able to redirect requests for data files to servers outside of the CVMFS content distribution network. Finally, we added the ability to manage authorization and authentication using security credentials such as X509 proxy certificates. We combined these modifications with the OSG's StashCache regional XRootD caching infrastructure to create a cached data distribution network. Here, we will show performance metrics accessing the data federation through CVMFS compared to direct data federation access. Additionally, we will discuss the improved user experience of providing access to a data federation through a POSIX filesystem.

  16. Accessing Data Federations with CVMFS

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian; Dykstra, Dave; Blomer, Jakob; Meusel, René

    2017-10-01

    Data federations have become an increasingly common tool for large collaborations such as CMS and ATLAS to efficiently distribute large data files. Unfortunately, these typically are implemented with weak namespace semantics and a non-POSIX API. On the other hand, CVMFS has provided a POSIX-compliant read-only interface for use cases with a small working set size (such as software distribution). The metadata required for the CVMFS POSIX interface is distributed through a caching hierarchy, allowing it to scale to the level of about a hundred thousand hosts. In this paper, we will describe our contributions to CVMFS that merge the data scalability of XRootD-based data federations (such as AAA) with the metadata scalability and POSIX interface of CVMFS. We modified CVMFS so it can serve unmodified files without copying them to the repository server. CVMFS 2.2.0 is also able to redirect requests for data files to servers outside of the CVMFS content distribution network. Finally, we added the ability to manage authorization and authentication using security credentials such as X509 proxy certificates. We combined these modifications with the OSG's StashCache regional XRootD caching infrastructure to create a cached data distribution network. We will show performance metrics accessing the data federation through CVMFS compared to direct data federation access. Additionally, we will discuss the improved user experience of providing access to a data federation through a POSIX filesystem.

  17. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    PubMed

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
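
    As a generic sketch of tabular-metadata validation in the spirit described above (these are not Keemei's actual rules, and the file name is hypothetical), the following checks a tab-delimited sample mapping file for a required identifier column, duplicate IDs, and empty cells:

        import csv

        def validate_mapping(path, id_column="#SampleID"):
            problems = []
            with open(path, newline="") as f:
                rows = list(csv.DictReader(f, delimiter="\t"))
            if not rows or id_column not in rows[0]:
                return [f"missing required column {id_column!r}"]
            seen = set()
            for i, row in enumerate(rows, start=2):      # line 1 is the header
                sid = (row.get(id_column) or "").strip()
                if not sid:
                    problems.append(f"line {i}: empty sample ID")
                elif sid in seen:
                    problems.append(f"line {i}: duplicate sample ID {sid!r}")
                seen.add(sid)
                for col, val in row.items():
                    if col == id_column:
                        continue
                    if val is None or not str(val).strip():
                        problems.append(f"line {i}: empty value in column {col!r}")
            return problems

        print(validate_mapping("mapping.tsv"))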

  18. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background: Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective: The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods: Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. Results: All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the fields of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions: To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. PMID:29699962

  19. 42 CFR 423.160 - Standards for electronic prescribing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1, (Version 8.1... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1 (Version 8.1... Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadayappan, Ponnuswamy

    Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.

  1. A computer program for obtaining airplane configuration plots from digital Datcom input data

    NASA Technical Reports Server (NTRS)

    Roy, M. L.; Sliwa, S. M.

    1983-01-01

    A computer program is described which reads the input file for the Stability and Control Digital Datcom program and generates plots from the aircraft configuration data. These plots can be used to verify the geometric input data to the Digital Datcom program. The program described interfaces with utilities available for plotting aircraft configurations by creating a file from the Digital Datcom input data.

  2. Detailed Design and Implementation of a Multiprogramming Operating System for Sixteen-Bit Microprocessors.

    DTIC Science & Technology

    1983-12-01

    Table-of-contents fragments: Multiuser Support (11-5), User Interface (11-7), Inter-user Communications (11-7), Memory... user will greatly help facilitate the learning process. Inter-User Communication: the inter-user communications of the operating system can be done using... inter-user communications would be met by using one or both of them. Memory and File Management: memory and file management is concerned with four basic

  3. Secure Display of Space-Exploration Images

    NASA Technical Reports Server (NTRS)

    Cheng, Cecilia; Thornhill, Gillian; McAuley, Michael

    2006-01-01

    Java EDR Display Interface (JEDI) is software for either local display or secure Internet distribution, to authorized clients, of image data acquired from cameras aboard spacecraft engaged in exploration of remote planets. (EDR signifies experimental data record, which, in effect, signifies image data.) Processed at NASA's Multimission Image Processing Laboratory (MIPL), the data can be from either near-real-time processing streams or stored files. JEDI uses the Java Advanced Imaging application program interface, plus input/output packages that are parts of the Video Image Communication and Retrieval software of the MIPL, to display images. JEDI can be run as either a standalone application program or within a Web browser as a servlet with an applet front end. In either operating mode, JEDI communicates using the HTTP(s) protocol(s). In the Web-browser case, the user must provide a password to gain access. For each user and/or image data type, there is a configuration file, called a "personality file," containing parameters that control the layout of the displays and the information to be included in them. Once JEDI has accepted the user's password, it processes the requested EDR (provided that the user is authorized to receive the specific EDR) to create a display according to the user's personality file.

  4. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, a wide variety of data file formats is used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open-source application to read and analyze high-resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  5. The eGenVar data management system—cataloguing and sharing sensitive data and metadata for the life sciences

    PubMed Central

    Razick, Sabry; Močnik, Rok; Thomas, Laurent F.; Ryeng, Einar; Drabløs, Finn; Sætrom, Pål

    2014-01-01

    Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/ PMID:24682735
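    As an illustration of the three metadata types described here (file information, provenance and data lineage, and content descriptions), the following is a hedged Python sketch; the class and field names are invented for illustration and do not reflect the actual eGenVar schema:

    ```python
    # Hypothetical record types for a metadata catalogue; files stay in
    # their original locations and only descriptions are stored centrally.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FileInfo:
        path: str            # original location, access control unchanged
        checksum: str
        size_bytes: int

    @dataclass
    class Provenance:
        derived_from: List[str] = field(default_factory=list)  # data lineage
        produced_by: str = ""                                  # tool or pipeline

    @dataclass
    class CatalogueEntry:
        file_info: FileInfo
        provenance: Provenance
        description: str     # free-text or tagged content description

    entry = CatalogueEntry(
        FileInfo("/secure/biobank/cohort1.vcf", "sha256:...", 10_485_760),
        Provenance(derived_from=["/secure/biobank/raw_reads.bam"],
                   produced_by="variant-calling pipeline v2"),
        description="Germline variants, cohort 1; access restricted",
    )
    print(entry.file_info.path, "|", entry.description)
    ```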

  6. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, a wide variety of data file formats is used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open-source application to read and analyze high-resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  7. Advanced Data Format (ADF) Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Smith, Matthew; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial. Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its 1/0 software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database such defined by the SIDS. There are currently 34 user callable functions that comprise the ADF Core library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.

  8. Integration of DICOM and openEHR standards

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yao, Zhihong; Liu, Lei

    2011-03-01

    The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, on the basis of the XML-based DICOM format we developed a method of creating a DICOM Imaging Archetype in openEHR to enable the integration of DICOM and openEHR. Each DICOM file contains abundant imaging information. However, because reading a DICOM file involves looking up the DICOM Data Dictionary, the readability of a DICOM file is limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. But one critical challenge posed to the development of openEHR is the information sharing problem, especially in imaging information sharing. For example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the form of archetype defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange without losing imaging information between the two standards.
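    The mapping idea can be sketched briefly in Python with the pydicom library; the archetype-like tree below is a hypothetical illustration of the approach, not the authors' implementation:

    ```python
    # Read DICOM elements with pydicom and arrange a few of them in a
    # nested, archetype-like tree. Human-readable keywords replace raw
    # (group, element) tag lookups, addressing the readability problem
    # described above. The tree layout is invented for illustration.
    import pydicom

    def dicom_to_tree(path):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        return {
            "imaging_study": {
                "patient": {"name": str(ds.get("PatientName", ""))},
                "acquisition": {
                    "modality": ds.get("Modality", ""),
                    "study_date": ds.get("StudyDate", ""),
                },
            }
        }

    tree = dicom_to_tree("example.dcm")  # hypothetical input file
    print(tree["imaging_study"]["acquisition"]["modality"])
    ```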

  9. Chandra Source Catalog: User Interfaces

    NASA Astrophysics Data System (ADS)

    Bonaventura, Nina; Evans, I. N.; Harbo, P. N.; Rots, A. H.; Tibbetts, M. S.; Van Stone, D. W.; Zografou, P.; Anderson, C. S.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Glotfelty, K. J.; Grier, J. D.; Hain, R.; Hall, D. M.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Winkelman, S. L.

    2010-03-01

    The CSCview data mining interface is available for browsing the Chandra Source Catalog (CSC) and downloading tables of quality-assured source properties and data products. Once the desired source properties and search criteria are entered into the CSCview query form, the resulting source matches are returned in a table along with the values of the requested source properties for each source. (The catalog can be searched on any source property, not just position.) At this point, the table of search results may be saved to a text file, and the available data products for each source may be downloaded. CSCview save files are output in RDB-like and VOTable format. The available CSC data products include event files, spectra, lightcurves, and images, all of which are processed with the CIAO software. CSC data may also be accessed non-interactively with Unix command-line tools such as cURL and Wget, using ADQL 2.0 query syntax. In fact, CSCview features a separate ADQL query form for those who wish to specify this type of query within the GUI. Several interfaces are available for learning if a source is included in the catalog (in addition to CSCview): 1) the CSC interface to Sky in Google Earth shows the footprint of each Chandra observation on the sky, along with the CSC footprint for comparison (CSC source properties are also accessible when a source within a Chandra field-of-view is clicked); 2) the CSC Limiting Sensitivity online tool indicates if a source at an input celestial location was too faint for detection; 3) an IVOA Simple Cone Search interface locates all CSC sources within a specified radius of an R.A. and Dec.; and 4) the CSC-SDSS cross-match service returns the list of sources common to the CSC and SDSS, either all such sources or a subset based on search criteria.
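    As a hedged illustration of the non-interactive access route, the following Python sketch submits an ADQL-style query over HTTP; the endpoint URL, table, and column names are placeholders, not the documented CSC interface:

    ```python
    # Submit an ADQL query and retrieve a VOTable, mimicking what cURL or
    # Wget would do on the command line. All names here are hypothetical.
    import urllib.parse
    import urllib.request

    BASE = "https://example.edu/csc/query"   # placeholder endpoint
    adql = ("SELECT name, ra, dec FROM master_source "
            "WHERE dec > 30")                # illustrative ADQL 2.0 query
    url = BASE + "?" + urllib.parse.urlencode(
        {"query": adql, "format": "votable"})
    with urllib.request.urlopen(url) as resp:
        votable = resp.read()                # VOTable bytes, as CSCview outputs
    print(len(votable), "bytes received")
    ```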

  10. Supporting geoscience with graphical-user-interface Internet tools for the Macintosh

    NASA Astrophysics Data System (ADS)

    Robin, Bernard

    1995-07-01

    This paper describes a suite of Macintosh graphical-user-interface (GUI) software programs that can be used in conjunction with the Internet to support geoscience education. These software programs allow science educators to access and retrieve a large body of resources from an increasing number of network sites, taking advantage of the intuitive, simple-to-use Macintosh operating system. With these tools, educators easily can locate, download, and exchange not only text files but also sound resources, video movie clips, and software application files from their desktop computers. Another major advantage of these software tools is that they are available at no cost and may be distributed freely. The following GUI software tools are described including examples of how they can be used in an educational setting: ∗ Eudora—an e-mail program ∗ NewsWatcher—a newsreader ∗ TurboGopher—a Gopher program ∗ Fetch—a software application for easy File Transfer Protocol (FTP) ∗ NCSA Mosaic—a worldwide hypertext browsing program. An explosive growth of online archives currently is underway as new electronic sites are being added continuously to the Internet. Many of these resources may be of interest to science educators who learn they can share not only ASCII text files, but also graphic image files, sound resources, QuickTime movie clips, and hypermedia projects with colleagues from locations around the world. These powerful, yet simple to learn GUI software tools are providing a revolution in how knowledge can be accessed, retrieved, and shared.

  11. Phast4Windows: a 3D graphical user interface for the reactive-transport simulator PHAST.

    PubMed

    Charlton, Scott R; Parkhurst, David L

    2013-01-01

    Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties (the porous media properties, the initial head and chemistry conditions, boundary conditions, and the locations of wells, rivers, drains, and accounting zones) and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single-phase, constant-density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.

  12. PVFS 2000: An operational parallel file system for Beowulf

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI: BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove: Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.
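    The value of such abstraction layers is that back ends can be swapped behind one interface; the Python sketch below shows the shape of a Trove-like storage layer (class and method names are invented for illustration, not PVFS2's actual C API):

    ```python
    # One interface, interchangeable back ends: native files here;
    # raw partitions or databases would be alternative subclasses.
    import os
    from abc import ABC, abstractmethod

    class StorageLayer(ABC):
        @abstractmethod
        def write_dataspace(self, handle: str, data: bytes) -> None: ...

        @abstractmethod
        def set_keyval(self, handle: str, key: str, value: str) -> None: ...

    class NativeFileStorage(StorageLayer):
        """Native files for data spaces, an in-memory dict for name/value
        pairs (PVFS2 used Berkeley DB for the latter)."""
        def __init__(self, root: str):
            self.root = root
            self.keyvals = {}
            os.makedirs(root, exist_ok=True)

        def write_dataspace(self, handle, data):
            with open(os.path.join(self.root, handle), "wb") as f:
                f.write(data)

        def set_keyval(self, handle, key, value):
            self.keyvals[(handle, key)] = value

    store = NativeFileStorage("/tmp/trove_demo")
    store.write_dataspace("000001", b"file contents")
    store.set_keyval("000001", "name", "example.dat")
    ```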

  13. Implementing MANETS in Android based environment using Wi-Fi direct

    NASA Astrophysics Data System (ADS)

    Waqas, Muhammad; Babar, Mohammad Inayatullah Khan; Zafar, Mohammad Haseeb

    2015-05-01

    Packet loss occurs in real-time voice transmission over wireless broadcast ad hoc networks, which creates disruptions in sound. The basic objective of this research is to design a wireless ad hoc network based on two Android devices using the Wireless Fidelity (Wi-Fi) Direct Application Programming Interface (API) and to apply a network codec, the Reed-Solomon code. The network codec is used to encode the data of a music .wav file and recover any lost packets; packets are dropped by a loss module at the transmitter device to analyze the performance, with the objective of retrieving the original file at the receiver device using the network codec. This resulted in faster transmission of the files despite dropped packets. In the end, both devices held the original formatted music files, with a complete performance analysis based on the transmission delay.
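    The recovery mechanism can be illustrated with the third-party Python package reedsolo, used here as a stand-in for the authors' Android implementation:

    ```python
    # Encode a payload with Reed-Solomon parity, corrupt some bytes to
    # mimic packet loss, and recover the original data.
    from reedsolo import RSCodec

    rsc = RSCodec(16)               # 16 parity bytes: corrects up to 8 errors
    audio_chunk = bytes(range(64))  # stand-in for a slice of a .wav payload

    encoded = bytearray(rsc.encode(audio_chunk))
    encoded[3] = 0xFF               # simulate bytes damaged in transmission
    encoded[40] = 0x00

    # In recent reedsolo versions decode() returns (message, message+ecc,
    # errata positions); the first element is the repaired payload.
    recovered = bytes(rsc.decode(bytes(encoded))[0])
    assert recovered == audio_chunk
    print("recovered", len(recovered), "bytes intact")
    ```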

  14. Design of a steganographic virtual operating system

    NASA Astrophysics Data System (ADS)

    Ashendorf, Elan; Craver, Scott

    2015-03-01

    A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and then targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.

  15. 4 CFR 21.5 - Protest issues not for consideration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... official to file a protest or not to file a protest in connection with a public-private competition. [61 FR... business size standards and North American Industry Classification System (NAICS) standards. Challenges of established size standards or the size status of particular firms, and challenges of the selected NAICS code...

  16. User’s guide for MapMark4GUI—A graphical user interface for the MapMark4 R package

    USGS Publications Warehouse

    Shapiro, Jason

    2018-05-29

    MapMark4GUI is an R graphical user interface (GUI) developed by the U.S. Geological Survey to support user implementation of the MapMark4 R statistical software package. MapMark4 was developed by the U.S. Geological Survey to implement probability calculations for simulating undiscovered mineral resources in quantitative mineral resource assessments. The GUI provides an easy-to-use tool to input data, run simulations, and format output results for the MapMark4 package. The GUI is written and accessed in the R statistical programming language. This user’s guide includes instructions on installing and running MapMark4GUI and descriptions of the statistical output processes, output files, and test data files.

  17. On the symbolic manipulation and code generation for elasto-plastic material matrices

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Saleeb, A. F.; Wang, P. S.; Tan, H. Q.

    1991-01-01

    A computerized procedure for symbolic manipulations and FORTRAN code generation of an elasto-plastic material matrix for finite element applications is presented. Special emphasis is placed on expression simplification during intermediate derivations, optimal code generation, and the interface with the main program. A systematic procedure is outlined to avoid redundant algebraic manipulations. Symbolic expressions of the derived material stiffness matrix are automatically converted to RATFOR code, which is then translated into FORTRAN statements through a preprocessor. To minimize the interface problem with the main program, a template file is prepared so that the translated FORTRAN statements can be merged into the file to form a subroutine (or a submodule). Three constitutive models, namely von Mises plasticity, the Drucker-Prager model, and a concrete plasticity model, are used as illustrative examples.
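    A modern stand-in for this workflow is easy to sketch with sympy, which can differentiate a yield function symbolically, simplify the intermediate result, and emit Fortran source (the paper used RATFOR and a preprocessor; the expression below is an illustrative von Mises form):

    ```python
    # Symbolic derivation followed by Fortran code generation.
    import sympy as sp

    s1, s2, s3, k = sp.symbols("s1 s2 s3 k")
    # von Mises yield function in principal stresses (illustrative form)
    f = sp.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2) - k
    dfds1 = sp.simplify(sp.diff(f, s1))   # simplify the intermediate result
    print(sp.fcode(dfds1, assign_to="dfds1", source_format="free"))
    ```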

  18. Simulation Control Graphical User Interface Logging Report

    NASA Technical Reports Server (NTRS)

    Hewling, Karl B., Jr.

    2012-01-01

    One of the many tasks of my project was to revise the code of the Simulation Control Graphical User Interface (SIM GUI) to enable logging functionality to a file. I was also tasked with developing a script that directed the startup and initialization flow of the various LCS software components. This makes sure that a software component will not spin up until all the appropriate dependencies have been configured properly. I was also able to assist hardware modelers in verifying the configuration of models after they had been upgraded to a new software version. I developed some code that analyzes the MDL files to determine whether any errors were generated by the upgrade process. Another of the projects assigned to me was supporting the End-to-End Hardware/Software Daily Tag-up meeting.

  19. An implementation of the programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1981-01-01

    A particular implementation of the programming structural synthesis system (PROSSS) is described. This software system combines a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. These programs are combined using standard command language features existing in modern computer operating systems. PROSSS is explained in general with respect to this implementation, along with the steps for the preparation of the programs and input data. Each component of the system is described in detail with annotated listings for clarification. The components include options, procedures, programs and subroutines, and data files as they pertain to this implementation. An example exercising each option in this implementation is presented to allow the user to anticipate the type of results that might be expected.

  20. Tapir: A web interface for transit/eclipse observability

    NASA Astrophysics Data System (ADS)

    Jensen, Eric

    2013-06-01

    Tapir is a set of tools, written in Perl, that provides a web interface for showing the observability of periodic astronomical events, such as exoplanet transits or eclipsing binaries. The package provides tools for creating finding charts for each target and airmass plots for each event. The code can access target lists that are stored on-line in a Google spreadsheet or in a local text file.

  1. PC-based Multiple Information System Interface (PC/MISI) detailed design and implementation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1985-01-01

    The design plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the implementation of the system. Each component is described in the detail necessary to allow programmers to implement the system. A description of the system data flow and system file structures is given.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi

    Many simulation programs produce data in the form of a set of field values or a set of particle positions. (One such example is that of particle-in-cell codes, which produce data on the electromagnetic fields that they simulate.) However, each particular program uses its own format and layout for the output data. This makes it difficult to compare the results of different simulation programs, or to have a common visualization tool for these results. However, a standardized layout for fields and particles has recently been developed: the openPMD format (www.openpmd.org). This format is open-source and specifies a standard way in which field data and particle data should be written. The openPMD format is already implemented in the particle-in-cell code Warp (developed at LBL) and in PIConGPU (developed at HZDR, Germany). In this context, the proposed software (openPMD-viewer) is a Python package which allows users to access and visualize any data that has been formatted according to the openPMD standard. This package contains two main components: a Python API, which allows one to read and extract the data from an openPMD file so as to be able to work with it within the Python environment (e.g., plot the data and reprocess it with particular Python functions), and a graphical interface, which works with the IPython notebook and allows users to quickly visualize the data and browse through a set of openPMD files. The proposed software will typically be used when analyzing the results of numerical simulations. It will be useful to quickly extract scientific meaning from a set of numerical data.
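    Typical usage looks roughly like the following, assuming a recent openpmd_viewer release (module and method names have shifted between versions, so treat this as indicative rather than exact):

    ```python
    # Open a directory of openPMD files as a time series and read a field.
    from openpmd_viewer import OpenPMDTimeSeries

    ts = OpenPMDTimeSeries("./diags/hdf5")   # hypothetical output directory
    print(ts.iterations)                      # available output iterations

    # The returned `info` object carries axes and metadata, so the data
    # stays self-describing, as the openPMD standard intends.
    Ez, info = ts.get_field(field="E", coord="z", iteration=ts.iterations[0])
    print(Ez.shape, info.axes)
    ```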

  3. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; and the Java application server interface to the database, which organizes data returned to the user, and its distribution engine, which allows users to download image files individually and/or in batch mode.

  4. SeqLib: a C++ API for rapid BAM manipulation, sequence alignment and sequence assembly

    PubMed Central

    Wala, Jeremiah; Beroukhim, Rameen

    2017-01-01

    We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. Availability and Implementation: SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. Contact: jwala@broadinstitue.org; rameen@broadinstitute.org PMID:28011768

  5. jPOSTrepo: an international standard data repository for proteomes

    PubMed Central

    Okuda, Shujiro; Watanabe, Yu; Moriya, Yuki; Kawano, Shin; Yamamoto, Tadashi; Matsumoto, Masaki; Takami, Tomoyo; Kobayashi, Daiki; Araki, Norie; Yoshizawa, Akiyasu C.; Tabata, Tsuyoshi; Sugiyama, Naoyuki; Goto, Susumu; Ishihama, Yasushi

    2017-01-01

    Major advancements have recently been made in mass spectrometry-based proteomics, yielding an increasing number of datasets from various proteomics projects worldwide. In order to facilitate the sharing and reuse of promising datasets, it is important to construct appropriate, high-quality public data repositories. jPOSTrepo (https://repository.jpostdb.org/) has successfully implemented several unique features, including high-speed file uploading, flexible file management and easy-to-use interfaces. This repository has been launched as a public repository containing various proteomic datasets and is available for researchers worldwide. In addition, our repository has joined the ProteomeXchange consortium, which includes the most popular public repositories, such as PRIDE in Europe for MS/MS datasets and PASSEL for SRM datasets in the USA. MassIVE was later introduced in the USA and accepted into ProteomeXchange, as was our repository in July 2016, providing important datasets from Asia/Oceania. This repository thus contributes to a global alliance to share and store datasets from a wide variety of proteomics experiments, and it is expected to become a major repository, particularly for data collected in the Asia/Oceania region. PMID:27899654

  6. FTOOLS: A FITS Data Processing and Analysis Software Package

    NASA Astrophysics Data System (ADS)

    Blackburn, J. Kent; Greene, Emily A.; Pence, William

    1993-05-01

    FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task, such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column, or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations, such as the generation and display of spectra or light curves. The collection provides both generic processing and analysis utilities and utilities common to high-energy astrophysics data sets. The FTOOLS software package is designed to be both compatible with IRAF and completely stand-alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self-documenting through the IRAF help facility and a stand-alone help task. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.

  7. jqcML: an open-source java API for mass spectrometry quality control data in the qcML format.

    PubMed

    Bittremieux, Wout; Kelchtermans, Pieter; Valkenborg, Dirk; Martens, Lennart; Laukens, Kris

    2014-07-03

    The awareness that systematic quality control is an essential factor in enabling the growth of proteomics into a mature analytical discipline has increased over the past few years. To this end, a controlled vocabulary and document structure, called qcML, have recently been proposed by Walzer et al. to store and disseminate quality-control metrics for mass-spectrometry-based proteomics experiments. To facilitate the adoption of this standardized quality control routine, we introduce jqcML, a Java application programming interface (API) for the qcML data format. First, jqcML provides a complete object model to represent qcML data. Second, jqcML provides the ability to read, write, and work in a uniform manner with qcML data from different sources, including the XML-based qcML file format and the relational database qcDB. Interaction with the XML-based file format is obtained through the Java Architecture for XML Binding (JAXB), while generic database functionality is obtained through the Java Persistence API (JPA). jqcML is released as open-source software under the permissive Apache 2.0 license and can be downloaded from https://bitbucket.org/proteinspector/jqcml.

  8. SeqLib: a C++ API for rapid BAM manipulation, sequence alignment and sequence assembly.

    PubMed

    Wala, Jeremiah; Beroukhim, Rameen

    2017-03-01

    We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. jwala@broadinstitue.org; rameen@broadinstitute.org.

  9. Autoplot and the HAPI Server

    NASA Astrophysics Data System (ADS)

    Faden, J.; Vandegriff, J. D.; Weigel, R. S.

    2016-12-01

    Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and from a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data for display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients or publish a specification document in the hope that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, removing these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use and how data servers might implement the HAPI specification to provide access to their data. This will also include instructions on how Autoplot is installed on desktop computers and used to view data from the RBSP, Juno, and other missions.
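    Because HAPI is a plain HTTP API, a minimal client needs no specialized software. The sketch below uses placeholder server and dataset names, with request parameters following the HAPI 2.x convention (later versions of the specification rename them):

    ```python
    # Fetch one day of time series data from a HAPI server as CSV.
    import urllib.parse
    import urllib.request

    SERVER = "https://example.org/hapi"    # placeholder HAPI server
    params = urllib.parse.urlencode({
        "id": "spacecraft_mag_l2",         # placeholder dataset identifier
        "time.min": "2016-01-01T00:00:00Z",
        "time.max": "2016-01-02T00:00:00Z",
        "format": "csv",
    })
    with urllib.request.urlopen(f"{SERVER}/data?{params}") as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)                    # time-stamped samples, one per row
    ```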

  10. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer code. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.

  11. A new JAVA interface implementation of THESIAS: testing haplotype effects in association studies.

    PubMed

    Tregouet, D A; Garelle, V

    2007-04-15

    THESIAS (Testing Haplotype EffectS In Association Studies) is a popular software package for carrying out haplotype association analyses in unrelated individuals. In addition to the command line interface, a graphical JAVA interface is now proposed, allowing one to run THESIAS in a user-friendly manner. Furthermore, new functionalities have been added to THESIAS, including the possibility to analyze polychotomous phenotypes and X-linked polymorphisms. The software package, including documentation and example data files, is freely available at http://genecanvas.ecgene.net. The source codes are also available upon request.

  12. A Customizable Importer for the Clinical Data Warehouses PaDaWaN and I2B2.

    PubMed

    Fette, Georg; Kaspar, Mathias; Dietrich, Georg; Ertl, Maximilian; Krebs, Jonathan; Stoerk, Stefan; Puppe, Frank

    2017-01-01

    In recent years, clinical data warehouses (CDW) storing routine patient data have become more and more popular for supporting scientific work in the medical domain. Although CDW systems provide interfaces for importing new data, these interfaces have to be used by processing tools that are often not included in the systems themselves. In order to establish an extraction-transformation-load (ETL) workflow, existing components have to be adopted or new components developed to perform the load part of the ETL. We present a customizable importer for the two CDW systems PaDaWaN and I2B2, which is able to import the most common import formats (plain text, CSV and XML files). In order to run, the importer only needs a configuration file with the user credentials for the target CDW and a list of XML import configuration files, which determine how already exported data is intended to be imported. The importer is provided as a Java program which has no further software requirements.

  13. Structural/aerodynamic Blade Analyzer (SAB) User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Morel, M. R.

    1994-01-01

    The structural/aerodynamic blade (SAB) analyzer provides an automated tool for the static-deflection analysis of turbomachinery blades with aerodynamic and rotational loads. A structural code calculates a deflected blade shape using aerodynamic loads input. An aerodynamic solver computes aerodynamic loads using deflected blade shape input. The two programs are iterated automatically until deflections converge. Currently, SAB version 1.0 is interfaced with MSC/NASTRAN to perform the structural analysis and PROP3D to perform the aerodynamic analysis. This document serves as a guide for the operation of the SAB system with specific emphasis on its use at NASA Lewis Research Center (LeRC). This guide consists of six chapters: an introduction which gives a summary of SAB; SAB's methodology, component files, links, and interfaces; input/output file structure; setup and execution of the SAB files on the Cray computers; hints and tips to advise the user; and an example problem demonstrating the SAB process. In addition, four appendices are presented to define the different computer programs used within the SAB analyzer and describe the required input decks.

  14. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
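    The ghost-cell exchange pattern that such libraries wrap can be illustrated with mpi4py in Python; this shows the concept only, not Schnek's own C++ interface:

    ```python
    # Periodic 1D ghost-cell exchange between neighbouring MPI processes.
    # Run with e.g.: mpiexec -n 4 python ghost.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    grid = np.full(10 + 2, float(rank))   # 10 interior cells + 2 ghost cells

    # Send the rightmost interior cell to the right neighbour while
    # receiving the left ghost cell from the left neighbour, and vice versa.
    comm.Sendrecv(grid[-2:-1], dest=right, recvbuf=grid[0:1], source=left)
    comm.Sendrecv(grid[1:2], dest=left, recvbuf=grid[-1:], source=right)
    print(rank, grid[0], grid[-1])
    ```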

  15. A suite of MATLAB-based computational tools for automated analysis of COPAS Biosort data

    PubMed Central

    Morton, Elizabeth; Lamitina, Todd

    2010-01-01

    Complex Object Parametric Analyzer and Sorter (COPAS) devices are large-object, fluorescence-capable flow cytometers used for high-throughput analysis of live model organisms, including Drosophila melanogaster, Caenorhabditis elegans, and zebrafish. The COPAS is especially useful in C. elegans high-throughput genome-wide RNA interference (RNAi) screens that utilize fluorescent reporters. However, analysis of data from such screens is relatively labor-intensive and time-consuming. Currently, there are no computational tools available to facilitate high-throughput analysis of COPAS data. We used MATLAB to develop algorithms (COPAquant, COPAmulti, and COPAcompare) to analyze different types of COPAS data. COPAquant reads single-sample files, filters and extracts values and value ratios for each file, and then returns a summary of the data. COPAmulti reads 96-well autosampling files generated with the ReFLX adapter, performs sample filtering, graphs features across both wells and plates, performs some common statistical measures for hit identification, and outputs results in graphical formats. COPAcompare performs a correlation analysis between replicate 96-well plates. For many parameters, thresholds may be defined through a simple graphical user interface (GUI), allowing our algorithms to meet a variety of screening applications. In a screen for regulators of stress-inducible GFP expression, COPAquant dramatically accelerated data analysis and allowed us to rapidly move from raw data to hit identification. Because the COPAS file structure is standardized and our MATLAB code is freely available, our algorithms should be extremely useful for analysis of COPAS data from multiple platforms and organisms. The MATLAB code is freely available at our web site (www.med.upenn.edu/lamitinalab/downloads.shtml). PMID:20569218
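    The per-well filtering and summary statistics described here are straightforward to outline with pandas; the column names (TOF, EXT, Green, Well) follow common COPAS Biosort exports but are assumptions, as is the file layout:

    ```python
    # Filter objects, form a fluorescence/size ratio, and summarize per well.
    import pandas as pd

    df = pd.read_csv("plate01.txt", sep="\t")   # hypothetical Biosort export

    # Filter debris by object size, then normalize fluorescence by size.
    df = df[(df["TOF"] > 100) & (df["EXT"] > 50)]
    df["ratio"] = df["Green"] / df["TOF"]

    # Per-well mean and standard deviation, as used for hit identification.
    summary = df.groupby("Well")["ratio"].agg(["mean", "std", "count"])
    cutoff = summary["mean"].mean() + 2 * summary["mean"].std()
    print(summary[summary["mean"] > cutoff])    # simple 2-sigma hit call
    ```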

  16. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  17. Lab Streaming Layer Enabled Myo Data Collection Software User Manual

    DTIC Science & Technology

    2017-06-07

    time-series data over a local network. LSL handles the networking, time-synchronization, (near-) real-time access as well as, optionally, the... series data collection (e.g., brain activity, heart activity, muscle activity) using the LSL application programming interface (API). Time-synchronized... saved to a single extensible data format (XDF) file. Once the time-series data are collected in a Lab Recorder XDF file, users will be able to query
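    Reading such a stream through the LSL API is short in Python with the pylsl bindings; the stream type "EMG" below is an assumption for Myo-style muscle-activity data:

    ```python
    # Discover a stream on the local network and pull time-stamped samples.
    from pylsl import StreamInlet, resolve_stream

    streams = resolve_stream("type", "EMG")      # assumed stream type
    inlet = StreamInlet(streams[0])

    for _ in range(5):
        sample, timestamp = inlet.pull_sample()  # LSL supplies the timestamp
        print(f"{timestamp:.4f}", sample)
    ```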

  18. PSTOOLS - FOUR PROGRAMS THAT INTERPRET/FORMAT POSTSCRIPT FILES

    NASA Technical Reports Server (NTRS)

    Choi, D.

    1994-01-01

    PSTOOLS is a package of four programs that operate on files written in the page description language, PostScript. The programs include a PostScript previewer for the IRIS workstation, a PostScript driver for the Matrix QCRZ film recorder, a PostScript driver for the Tektronix 4693D printer, and a PostScript code beautifier that formats PostScript files to be more legible. The three programs PSIRIS, PSMATRIX, and PSTEK are similar in that they all interpret the PostScript language and output the graphical results to a device, and they support color PostScript images. The common code which is shared by these three programs is included as a library of routines. PSPRETTY formats a PostScript file by appropriately indenting procedures and code delimited by "saves" and "restores." PSTOOLS does not use Adobe fonts. PSTOOLS is written in C-language for implementation on SGI IRIS 4D series workstations running IRIX 3.2 or later. A README file and UNIX man pages provide information regarding the installation and use of the PSTOOLS programs. A six-page manual which provides slightly more detailed information may be purchased separately. The standard distribution medium for this package is one .25 inch streaming magnetic tape cartridge in UNIX tar format. PSIRIS (the largest program) requires 1.2Mb of main memory. PSMATRIX requires the "gpib" board (IEEE 488) available from Silicon Graphics. Inc. The programs with graphical interfaces require that the IRIS have at least 24 bit planes. This package was developed in 1990 and updated in 1991. SGI, IRIS 4D, and IRIX are trademarks of Silicon Graphics, Inc. Matrix QCRZ is a registered trademark of the AGFA Group. Tektronix 4693D is a trademark of Tektronix, Inc. Adobe is a trademark of Adobe Systems Incorporated. PostScript is a registered trademark of Adobe Systems Incorporated. UNIX is a registered trademark of AT&T Bell Laboratories.

  19. UPmag: MATLAB software for viewing and processing u channel or other pass-through paleomagnetic data

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Channell, James E. T.

    2009-10-01

    With the development of pass-through cryogenic magnetometers and the u channel sampling method, large volumes of paleomagnetic data can be accumulated within a short time period. It is often critical to visualize and process these data in "real time" as measurements proceed, so that the measurement plan can be adjusted accordingly. We introduce new MATLAB™ software (UPmag) that is designed for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u channel samples or core sections. UPmag comprises three MATLAB™ graphical user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer, to correct detected flux jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays vector component plots and stepwise intensity plots for any position along the u channel sample. UDIR can also display data on equal-area stereographic projections and draw virtual geomagnetic poles on various map projections. UINT provides a convenient platform to evaluate relative paleointensity (RPI) estimates using the *.int files that can be exported from UVIEW. Two methods are used for RPI estimation: the calculated slopes of the best-fit line between the NRM and the respective normalizer (using paired demagnetization data for both parameters) and the averages of the NRM/normalizer ratios. Linear correlation coefficients (of slopes) and standard deviations (of ratios) can be calculated simultaneously to monitor the quality of the RPI estimates. All resulting data and plots from UPmag can be exported into various file formats. UPmag software, data format files, and test data can be downloaded from http://earthref.org/cgi-bin/er.cgi?s=erda.cgi?n=985.
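    The two RPI estimates are simple to reproduce; the following worked numpy example uses made-up NRM and normalizer values over five demagnetization steps:

    ```python
    # Slope-based and ratio-based relative paleointensity estimates.
    import numpy as np

    nrm = np.array([8.1, 6.9, 5.6, 4.4, 3.1])   # NRM remaining at each step
    arm = np.array([9.0, 7.5, 6.2, 4.8, 3.5])   # normalizer (e.g., ARM)

    slope, intercept = np.polyfit(arm, nrm, 1)  # best-fit line, free origin
    r = np.corrcoef(arm, nrm)[0, 1]             # linearity quality monitor

    ratios = nrm / arm
    print(f"slope RPI = {slope:.3f} (r = {r:.3f})")
    print(f"ratio RPI = {ratios.mean():.3f} +/- {ratios.std(ddof=1):.3f}")
    ```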

  20. Geologic map of the Valjean Hills 7.5' quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Calzia, J.P.; Troxel, Bennie W.; digital database by Raumann, Christian G.

    2003-01-01

    FGDC-compliant metadata for the ARC/INFO coverages. The Correlation of Map Units and Description of Map Units is in the editorial format of USGS Geologic Investigations Series (I-series) maps but has not been edited to comply with I-map standards. Within the geologic map data package, map units are identified by standard geologic map criteria such as formation-name, age, and lithology. Even though this is an Open-File Report and includes the standard USGS Open-File disclaimer, the report closely adheres to the stratigraphic nomenclature of the U.S. Geological Survey. Descriptions of units can be obtained by viewing or plotting the .pdf file (3 above) or plotting the postscript file (2 above).

  1. 76 FR 58061 - Self-Regulatory Organizations; Options Clearing Corporation; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... Organizations; Options Clearing Corporation; Notice of Filing of Proposed Rule Change To Adopt Fitness Standards... purpose of the proposed rule change is to establish fitness standards for directors, clearing members, and... Core Principle O requires DCOs to establish fitness standards for directors, clearing members and...

  2. 75 FR 23755 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-04

    ... securities filings: Docket Numbers: ES10-35-000. Applicants: American Transmission Company LLC, ATC... Reliability Corporation for Approval of Interpretation to Reliability Standard CIP- 001--Cyber Security... Corporation for Approval of Interpretation to Reliability Standard [[Page 23756

  3. TileDCS web system

    NASA Astrophysics Data System (ADS)

    Maidantchik, C.; Ferreira, F.; Grael, F.; Atlas Tile Calorimeter Community

    2010-04-01

    The web system described here provides features for monitoring data acquired by the ATLAS Detector Control System (DCS). The DCS is responsible for overseeing the coherent and safe operation of the ATLAS experiment hardware. In the context of the Hadronic Tile Calorimeter Detector (TileCal), it controls the power supplies of the readout electronics, acquiring voltage, current, temperature and coolant pressure measurements. Physics data taking requires the stable operation of the power sources. The TileDCS Web System automatically retrieves data and extracts statistics for given periods of time. The mean and standard deviation outcomes are stored as XML files and are compared to preset thresholds. Further, a graphical representation of the TileCal cylinders indicates the state of the supply system of each detector drawer. Colors are designated for each kind of state; in this way problems are easier to find, and the collaboration members can focus on them. The user selects a module and the system presents detailed information. It is possible to verify the statistics and generate charts of the parameters over time. The TileDCS Web System also presents information about the power supplies' latest status: a wedge is colored green whenever its supply is on, and red otherwise. Furthermore, it is possible to perform customized analyses. The system provides search interfaces where the user can set the module, parameters, and time period of interest. It also produces the retrieved data as charts, XML files, CSV and ROOT files, according to the user's choice.

  4. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells.

    PubMed

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.

  5. The NASA John C. Stennis Environmental Geographic Information System

    NASA Technical Reports Server (NTRS)

    Cohan, Tyrus; Grant, Kerry

    2002-01-01

    In addition to the Environmental Geographic Information System (EGIS) presentation, we will present two live demonstrations of a portion of the work being performed in support of environmental operations onsite and NASA-wide. These live demonstrations will showcase the NASA EGIS database through working versions of two software packages available from Environmental Systems Research Institute, Inc. (ESRI, Inc.): ArcIMS 3.0 and either ArcView 3.2a or ArcGIS 8.0.2. Using a standard web browser, the ArcIMS demo will allow users to access a project file containing several data layers found in the EGIS database. ArcIMS is configured so that a single computer can be used as the data server and as the user interface, which allows for maximum Internet security because the computer being used will not actually be connected to the World Wide Web. Further, being independent of the Internet, the demo will run at an increased speed. This demo will include several data layers that are specific to Stennis Space Center. The EGIS database demo is a representative portion of the entire EGIS project sent to NASA Headquarters last year. This demo contains data files that are readily available at various government agency Web sites for download. Although these files contain roads, rails, and other infrastructure details, they are generalized and at a small enough scale that they provide only a general idea of each NASA center's surroundings rather than specific details of the area.

  6. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells

    PubMed Central

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-TIFF files produced by a photo-conversion time-lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from multi-TIFF files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking and automatic plotting. Finally, MATtrack provides convenient visualization and exploration tools, including a migration map, which gives an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user-friendly, and compatible with Windows, Mac, and Linux, as well as with a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569

  7. DICOM router: an open source toolbox for communication and correction of DICOM objects.

    PubMed

    Hackländer, Thomas; Kleber, Klaus; Martin, Jens; Mertens, Heinrich

    2005-03-01

    Today, the exchange of medical images and clinical information is well defined by the digital imaging and communications in medicine (DICOM) and Health Level Seven (ie, HL7) standards. The interoperability among information systems is specified by the integration profiles of IHE (Integrating the Healthcare Enterprise). However, older imaging modalities frequently do not correctly support these interfaces and integration profiles, and some use cases are not yet specified by IHE. Therefore, corrections of DICOM objects are necessary to establish conformity. The aim of this project was to develop a toolbox that can automatically perform these recurrent corrections of DICOM objects. The toolbox is composed of three main components: 1) a receiver to receive DICOM objects, 2) a processing pipeline to correct each object, and 3) one or more senders to forward each corrected object to predefined addressees. The toolbox is implemented in Java as an open source project. The processing pipeline is realized by means of plug-ins. One of the plug-ins can be programmed by the user via an external eXtensible Stylesheet Language (ie, XSL) file. Using this plug-in, DICOM objects can also be converted into eXtensible Markup Language (ie, XML) documents or other data formats. DICOM storage services, DICOM CD-ROMs, and the local file system are defined as input and output channels. The toolbox is used clinically for different application areas: the automatic correction of DICOM objects from non-IHE-conforming modalities, the import of DICOM CD-ROMs into the picture archiving and communication system, and the pseudonymization of DICOM images. The toolbox has been accepted by users in a clinical setting. Because of the open programming interfaces, the functionality can easily be adapted to future applications.
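
    The receive/correct/forward structure described above lends itself to a short illustration. The sketch below uses Python's pydicom package rather than the toolbox's Java code; the particular correction, the default tag value, and the file paths are assumptions for illustration only.

        import pydicom

        def fix_missing_body_part(ds):
            # Example of a recurrent correction: supply a tag that an old,
            # non-conforming modality omits (the default value is hypothetical).
            if "BodyPartExamined" not in ds:
                ds.BodyPartExamined = "CHEST"
            return ds

        PIPELINE = [fix_missing_body_part]  # ordered plug-in functions

        ds = pydicom.dcmread("incoming/object.dcm")  # 1) receiver
        for step in PIPELINE:                        # 2) processing pipeline
            ds = step(ds)
        ds.save_as("outgoing/object.dcm")            # 3) sender/forwarder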

  8. Automatic publishing ISO 19115 metadata with PanMetaDocs using SensorML information

    NASA Astrophysics Data System (ADS)

    Stender, Vivien; Ulbricht, Damian; Schroeder, Matthias; Klump, Jens

    2014-05-01

    Terrestrial Environmental Observatories (TERENO) is an interdisciplinary and long-term research project spanning an Earth observation network across Germany. It includes four test sites within Germany, from the North German lowlands to the Bavarian Alps, and is operated by six research centers of the Helmholtz Association. The contribution by the participating research centers is organized as regional observatories. A challenge for TERENO and its observatories is to integrate all aspects of data management, data workflows, data modeling and visualization into the design of a monitoring infrastructure. TERENO Northeast is one of the sub-observatories of TERENO and is operated by the German Research Centre for Geosciences (GFZ) in Potsdam. This observatory investigates geoecological processes in the northeastern lowland of Germany by collecting large amounts of environmentally relevant data. The success of long-term projects like TERENO depends on well-organized data management, on data exchange between the partners involved, and on the availability of the captured data. Data discovery and dissemination are facilitated not only through the data portals of the regional TERENO observatories but also through a common spatial data infrastructure, TEODOOR (TEreno Online Data repOsitORry). TEODOOR bundles the data provided by the web services of the individual observatories and provides tools for data discovery, visualization and data access. The TERENO Northeast data infrastructure integrates data from more than 200 instruments and makes data available through standard web services. Geographic sensor information and services are described using the ISO 19115 metadata schema. TEODOOR accesses the OGC Sensor Web Enablement (SWE) interfaces offered by the regional observatories. In addition to the SWE interface, TERENO Northeast also publishes data through DataCite. The necessary metadata are created in an automated process by extracting information from the SWE SensorML to create ISO 19115 compliant metadata. The resulting metadata file is stored in the GFZ Potsdam data infrastructure. The publishing workflow for file-based research datasets at GFZ Potsdam is based on the eSciDoc infrastructure, using PanMetaDocs (PMD) as the graphical user interface. PMD is a collaborative, metadata-based data and information exchange platform [1]. Besides SWE, metadata are also syndicated by PMD through an OAI-PMH interface. In addition, metadata from other observatories, projects or sensors in TERENO can be accessed through the TERENO Northeast data portal. [1] http://meetingorganizer.copernicus.org/EGU2012/EGU2012-7058-2.pdf
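
    The extraction step, pulling fields out of a SensorML document to build an ISO 19115 record, can be sketched as follows. Real SensorML and ISO 19115 are namespaced schemas far richer than shown; the element names here are simplified assumptions, not the schema used at GFZ Potsdam.

        import xml.etree.ElementTree as ET

        # A drastically simplified stand-in for a SensorML document.
        sensorml = ET.fromstring(
            "<SensorML>"
            "<identifier>TERENO-NE-soil-042</identifier>"
            "<description>Soil moisture probe</description>"
            "</SensorML>")

        # Emit a minimal ISO 19115-style record from the extracted fields.
        iso = ET.Element("MD_Metadata")
        ET.SubElement(iso, "fileIdentifier").text = sensorml.findtext("identifier")
        ET.SubElement(iso, "abstract").text = sensorml.findtext("description")
        ET.ElementTree(iso).write("iso19115_record.xml")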

  9. Multi-User Space Link Extension (SLE) System

    NASA Technical Reports Server (NTRS)

    Perkins, Toby

    2013-01-01

    The Multi-User Space Link Extension (MUS) system, a software and data system, provides Space Link Extension (SLE) users with three space data transfer services in timely, complete, and offline modes as applicable, according to standards defined by the Consultative Committee for Space Data Systems (CCSDS). MUS radically reduces the schedule, cost, and risk of implementing a new SLE user system, minimizes operating costs with a lights-out approach to SLE, and is designed to require no sustaining engineering expense during its lifetime unless changes in the CCSDS SLE standards, combined with new provider implementations, force changes. No software modification to MUS needs to be made to support a new mission. Any systems engineer with Linux experience can begin testing SLE user service instances with MUS, starting from a personal computer (PC), within five days. For flight operators, MUS provides a familiar-looking Web page for entering SLE configuration data received from the SLE provider. Operators can also use the Web page to back up a space mission's entire set of up to approximately 500 SLE service instances in less than five seconds, or to restore or transfer from another system the same amount of data from a MUS backup file in about the same amount of time. Missions operate each MUS SLE service instance independently by sending it MUS directives, which are legible, plain ASCII strings. MUS directives are usually (but not necessarily) sent through a TCP/IP (Transmission Control Protocol/Internet Protocol) socket from a MOC (Mission Operations Center) or POCC (Payload Operations Control Center) system, under scripted control, during "lights-out" spacecraft operation. MUS permits the flight operations team to configure each of its data interfaces independently; not only commands and telemetry, but also MUS status messages to the MOC. Interfaces can use single- or multiple-client TCP/IP server sockets, TCP/IP client sockets, temporary disk files, the system log, or standard input, standard output, or standard error as applicable. By defining MUS templates in ASCII, the flight operations team can include any MUS system variable in telemetry or command headers or footers, and/or in status messages. Data fields can be arranged within messages in different sequences, according to the mission's needs. The only constraints imposed are on the format of MUS directive strings, and some bare minimum logical requirements that must be met in order for MUS to read the mission control center's spacecraft command inputs. The MUS system imposes no limits or constraints on the numbers and combinations of missions and SLE service instances that it will support simultaneously. At any time, flight operators may add, change, delete, bind, connect, or disconnect service instances.
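
    Since MUS directives are plain ASCII strings sent over a TCP/IP socket, the client side can be sketched in a few lines. The host, port, and directive syntax below are hypothetical; the abstract does not specify the directive grammar.

        import socket

        DIRECTIVE = "BIND SERVICE=RAF-01\n"  # hypothetical directive string

        with socket.create_connection(("mus-host.example", 5000), timeout=5) as sock:
            sock.sendall(DIRECTIVE.encode("ascii"))
            reply = sock.recv(4096).decode("ascii", errors="replace")
            print(reply)  # status message returned by MUS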

  10. Image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, Amber; Harsch, Tim; Pitt, Julie

    2007-08-31

    The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update, and delete data from the underlying Oracle database, download data from NCBI's GenBank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, as well as various tools available on the project website (image.llnl.gov) that provide a search interface to the database.

  11. PRince: a web server for structural and physicochemical analysis of protein-RNA interface.

    PubMed

    Barik, Amita; Mishra, Abhishek; Bahadur, Ranjit Prasad

    2012-07-01

    We have developed a web server, PRince, which analyzes the structural features and physicochemical properties of the protein-RNA interface. Users need to submit a PDB file containing the atomic coordinates of both the protein and the RNA molecules in complex form (in '.pdb' format). They should also specify the chain identifiers of the interacting protein and RNA molecules. The size of the protein-RNA interface is estimated by measuring the solvent accessible surface area buried in contact. For a given protein-RNA complex, PRince calculates structural, physicochemical and hydration properties of the interacting surfaces. All these parameters generated by the server are presented in a tabular format. The interacting surfaces can also be visualized with software plug-ins such as Jmol. In addition, the output files containing the list of the atomic coordinates of the interacting protein, RNA and interface water molecules can be downloaded. The parameters generated by PRince are novel, and users can correlate them with experimentally determined biophysical and biochemical parameters to better understand the specificity of the protein-RNA recognition process. This server will be continuously upgraded to include more parameters. PRince is publicly accessible, free to use, and available at http://www.facweb.iitkgp.ernet.in/~rbahadur/prince/home.html.
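
    The buried-surface-area measure described above can be approximated with Biopython's Shrake-Rupley implementation (Bio.PDB.SASA, available since Biopython 1.79). The chain identifiers and input file below are assumptions, and PRince's own algorithm may differ in detail.

        from Bio.PDB import PDBParser
        from Bio.PDB.SASA import ShrakeRupley

        structure = PDBParser(QUIET=True).get_structure("cplx", "complex.pdb")
        model = structure[0]
        sr = ShrakeRupley()

        sr.compute(model, level="M")           # SASA of the whole complex
        sasa_complex = model.sasa

        protein, rna = model["A"], model["B"]  # hypothetical chain IDs
        sr.compute(protein, level="C")         # each partner in isolation
        sr.compute(rna, level="C")

        # Surface area buried on both partners upon complex formation.
        buried = protein.sasa + rna.sasa - sasa_complex
        print(f"Buried interface area: {buried:.1f} A^2")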

  12. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the `Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.

  13. A Python Script to Compute Isochrones for MODFLOW.

    PubMed

    Feo, Alessandra; Zanini, Andrea; Petrella, Emma; Celico, Fulvio

    2018-03-01

    MODFLOW is today the most popular tool for modeling water flow in aquifers. To simplify the interface to MODFLOW, various GUIs have been developed for the creation of model definition files and for the visualization and interpretation of results. Recently, Bakker et al. (2016) developed the FloPy interface to MODFLOW, which makes it possible to import and use the produced simulation data in Python: model input files can be constructed, models run, and simulation results read and plotted through Python scripts. In this note, we present a Python program, built on FloPy, that generates time-related capture zones (isochrones) for confined 2D steady-state groundwater flow in unbounded domains, with one or more wells. As an application, we show a validation of the approach and the results of four basic test cases: a homogeneous aquifer with one well, a heterogeneous aquifer with one well, and an aquifer with four wells located both longitudinally and perpendicularly to the flow direction. © 2017, National Ground Water Association.
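
    A minimal FloPy sketch in the spirit of the note: build and run a small confined steady-state model with one pumping well, then read the simulated heads. It assumes a MODFLOW-2005 executable named "mf2005" on the PATH; the grid and well values are illustrative, and the particle-tracking step that produces the isochrones is not shown.

        import numpy as np
        import flopy

        m = flopy.modflow.Modflow("demo", exe_name="mf2005")
        flopy.modflow.ModflowDis(m, nlay=1, nrow=21, ncol=21,
                                 delr=50.0, delc=50.0, top=10.0, botm=0.0)

        # Constant-head west and east edges keep the steady problem well posed.
        ibound = np.ones((1, 21, 21), dtype=int)
        ibound[:, :, 0] = ibound[:, :, -1] = -1
        flopy.modflow.ModflowBas(m, ibound=ibound, strt=10.0)

        flopy.modflow.ModflowLpf(m, hk=10.0)
        flopy.modflow.ModflowWel(m, stress_period_data={0: [[0, 10, 10, -500.0]]})
        flopy.modflow.ModflowPcg(m)
        flopy.modflow.ModflowOc(m)

        m.write_input()
        success, _ = m.run_model(silent=True)
        if success:
            heads = flopy.utils.HeadFile("demo.hds").get_data()
            print(heads[0, 10, 10])  # head in the well cell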

  14. Use of On- Board File System: A Real Simplification for the Operators?

    NASA Astrophysics Data System (ADS)

    Olive, X.; Garcia, G.; Alison, B.; Charmeau, M. C.

    2008-08-01

    An on-board file system makes it possible to control and operate a spacecraft in a new way, offering more possibilities. It should provide the operator with a more abstract data view of the spacecraft, letting operators focus on the functional part of their work and not on the exchange mechanism between ground and board. Files are used in recent space projects, but in a restricted way that limits their capabilities. In this paper we describe what we consider to constitute a file system and its usage through two examples among those studied: OBCPs and patches. We discuss how files can be handled with the PUS standard and, in the last section, give some perspectives, such as the use of files to standardize all exchanges between ground and board and between onboard systems.

  15. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password for the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used as the lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation.

  16. Standard interface: Twin-coaxial converter

    NASA Technical Reports Server (NTRS)

    Lushbaugh, W. A.

    1976-01-01

    The network operations control center standard interface has been adopted as a standard computer interface for all future minicomputer based subsystem development for the Deep Space Network. Discussed is an intercomputer communications link using a pair of coaxial cables. This unit is capable of transmitting and receiving digital information at distances up to 600 m with complete ground isolation between the communicating devices. A converter is described that allows a computer equipped with the standard interface to use the twin coaxial link.

  17. Is There a Chance for a Standardised User Interface?

    ERIC Educational Resources Information Center

    Fletcher, Liz

    1993-01-01

    Issues concerning the implementation of standard user interfaces for CD-ROMs are discussed, including differing perceptions of the ideal interface, graphical user interfaces, user needs, and the standard protocols. It is suggested users should be able to select from a variety of user interfaces on each CD-ROM. (EA)

  18. Recent developments in user-job management with Ganga

    NASA Astrophysics Data System (ADS)

    Currie, R.; Elmsheuser, J.; Fay, R.; Owen, P. H.; Richards, A.; Slater, M.; Sutcliffe, W.; Williams, M.

    2015-12-01

    The Ganga project was originally developed for use by the LHC experiments and has been used extensively throughout Run 1 in both LHCb and ATLAS. This document describes some of the most recent developments within the Ganga project. There have been improvements in the handling of large-scale computational tasks in the form of the new GangaTasks infrastructure. Improvements in file handling through the new IGangaFile interface make handling files largely transparent to the end user. In addition, the performance and usability of Ganga have both been addressed through the development of a new queues system that allows parallel processing of job-related tasks.

  19. Thermo-msf-parser: an open source Java library to parse and visualize Thermo Proteome Discoverer msf files.

    PubMed

    Colaert, Niklaas; Barsnes, Harald; Vaudel, Marc; Helsens, Kenny; Timmerman, Evy; Sickmann, Albert; Gevaert, Kris; Martens, Lennart

    2011-08-05

    The Thermo Proteome Discoverer program integrates both peptide identification and quantification into a single workflow for peptide-centric proteomics. Furthermore, its close integration with Thermo mass spectrometers has made it increasingly popular in the field. Here, we present a Java library to parse the msf files that constitute the output of Proteome Discoverer. The parser is also implemented in a graphical user interface allowing convenient access to the information found in the msf files, and in Rover, a program to analyze and validate quantitative proteomics information. All code, binaries, and documentation are freely available at http://thermo-msf-parser.googlecode.com.
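
    What makes such a parser feasible is that Proteome Discoverer msf files are SQLite databases. A minimal, library-free sketch that lists the tables in an msf file (table names and contents vary with the Proteome Discoverer version, so none are assumed here):

        import sqlite3

        con = sqlite3.connect("experiment.msf")
        for (name,) in con.execute(
                "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
            print(name)
        con.close()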

  20. LabKey Server NAb: A tool for analyzing, visualizing and sharing results from neutralizing antibody assays

    PubMed Central

    2011-01-01

    Background Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server. PMID:21619655
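
    The two calculations at the core of the tool, percent neutralization from luminescence and a curve fit used to estimate titers, can be sketched as follows. The well values are made up, a four-parameter logistic is one common choice of neutralization curve, and this is an illustration rather than LabKey's implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        virus_only, cell_only = 120000.0, 2000.0  # control wells (assumed)
        dilutions = np.array([20, 60, 180, 540, 1620, 4860], dtype=float)
        luminescence = np.array([9000, 21000, 52000, 90000, 110000, 118000])

        # Percent neutralization relative to the controls.
        neut = 100 * (virus_only - luminescence) / (virus_only - cell_only)

        def logistic4(x, bottom, top, mid, hill):
            # Four-parameter logistic in the dilution variable.
            return bottom + (top - bottom) / (1 + (x / mid) ** hill)

        params, _ = curve_fit(logistic4, dilutions, neut,
                              p0=[0.0, 100.0, 500.0, 1.0], maxfev=10000)
        print("Estimated 50% neutralization titer:", params[2])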

  1. Architecture for the Interdisciplinary Earth Data Alliance

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2016-12-01

    The Interdisciplinary Earth Data Alliance (IEDA) is leading an EarthCube (EC) Integrative Activity to develop a governance structure and technology framework that enables partner data systems to share technology, infrastructure, and practice for documenting, curating, and accessing heterogeneous geoscience data. The IEDA data facility provides capabilities in an extensible framework that enables domain-specific requirements for each partner system in the Alliance to be integrated into standardized cross-domain workflows. The shared technology infrastructure includes a data submission hub, a domain-agnostic file-based repository, an integrated Alliance catalog and a Data Browser for data discovery across all partner holdings, as well as services for registering identifiers for datasets (DOI) and samples (IGSN). The submission hub will be a platform that facilitates acquisition of cross-domain resource documentation and channels users into domain- and resource-specific workflows tailored for each partner community. We are exploring an event-based message bus architecture with a standardized plug-in interface for adding capabilities. This architecture builds on the EC CINERGI metadata pipeline as well as the message-based architecture of the SEAD project. Plug-in components perform file introspection to match entities to a data type registry (extending EC Digital Crust and Research Data Alliance work), extract standardized keywords (using CINERGI components), and extract location, cruise, personnel, and other metadata linkage information (building on GeoLink and existing IEDA partner components). The submission hub will feed submissions to appropriate partner repositories and service endpoints, targeted by domain and resource type, for distribution. The Alliance governance will adopt patterns (vocabularies, operations, resource types) for self-describing data services using the standard HTTP protocol for simplified data access (building on EC GeoWS and other 'RESTful' approaches). Exposure of resource descriptions (datasets and service distributions) for harvesting by commercial search engines as well as geoscience-data-focused crawlers (like the EC B-Cube crawler) will increase discoverability of IEDA resources with minimal effort by curators.

  2. OOSTethys - Open Source Software for the Global Earth Observing Systems of Systems

    NASA Astrophysics Data System (ADS)

    Bridger, E.; Bermudez, L. E.; Maskey, M.; Rueda, C.; Babin, B. L.; Blair, R.

    2009-12-01

    An open source software project is much more than just picking the right license, hosting modular code and providing effective documentation. Success in advancing in an open, collaborative way requires matching the expected code functionality to developers' personal expertise and organizational needs, as well as an enthusiastic and responsive core lead group. We will present the lessons learned from OOSTethys, which is a community of software developers and marine scientists who develop open source tools, in multiple languages, to integrate ocean observing systems into an Integrated Ocean Observing System (IOOS). OOSTethys' goal is to dramatically reduce the time it takes to install, adopt and update standards-compliant web services. OOSTethys has developed servers, clients and a registry. Open source Perl, Python, Java and ASP toolkits and reference implementations are helping the marine community publish near real-time observation data in interoperable standard formats. In some cases, publishing an Open Geospatial Consortium (OGC) Sensor Observation Service (SOS) from NetCDF files, a database, or even CSV text files could take only minutes, depending on the skills of the developer. OOSTethys is also developing an OGC standard registry, the Catalogue Service for the Web (CSW). This open source CSW registry was implemented to easily register and discover SOSs using ISO 19139 service metadata. A web interface layer over the CSW registry simplifies the registration process by harvesting metadata describing the observations and sensors from the “GetCapabilities” response of the SOS. OPENIOOS is the web client, developed in Perl, to visualize the sensors in the SOS services. While the number of OOSTethys software developers is small, currently about 10 around the world, the number of OOSTethys toolkit implementers is larger and growing, and the ease of use has played a large role in spreading the use of interoperable, standards-compliant web services widely in the marine community.
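
    The first step against any OGC SOS endpoint, including those published with the OOSTethys toolkits, is a GetCapabilities request; its XML response lists the offerings and sensors that a registry can harvest. The URL below is a placeholder, while the "service" and "request" parameters are the standard OGC key-value pairs.

        import requests

        resp = requests.get("http://sos.example.org/sos",  # placeholder URL
                            params={"service": "SOS",
                                    "request": "GetCapabilities"},
                            timeout=30)
        resp.raise_for_status()
        print(resp.text[:500])  # start of the XML capabilities document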

  3. eF-seek: prediction of the functional sites of proteins by searching for similar electrostatic potential and molecular surface shape.

    PubMed

    Kinoshita, Kengo; Murakami, Yoichi; Nakamura, Haruki

    2007-07-01

    We have developed a method to predict ligand-binding sites in a new protein structure by searching for similar binding sites in the Protein Data Bank (PDB). The similarities are measured according to the shapes of the molecular surfaces and their electrostatic potentials. A new web server, eF-seek, provides an interface to our search method. It simply requires a coordinate file in the PDB format, and generates a prediction result as a virtual complex structure, with the putative ligands in a PDB format file as the output. In addition, the predicted interacting interface is displayed to facilitate the examination of the virtual complex structure on our own applet viewer with the web browser (URL: http://eF-site.hgc.jp/eF-seek).

  4. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  5. DockoMatic 2.0: high throughput inverse virtual screening and homology modeling.

    PubMed

    Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T; McDougal, Owen M; Andersen, Timothy L

    2013-08-26

    DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly graphical user interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to (1) conduct high throughput inverse virtual screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELLER programs and guides the user through the necessary steps to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third-party programs. DockoMatic is a free, comprehensive molecular docking software program for all levels of scientists in both research and education.
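
    The bookkeeping that an inverse-virtual-screening run automates can be sketched as generating one AutoDock Vina configuration file per receptor for a single ligand. The file names and search-box values below are illustrative assumptions; the keys themselves (receptor, ligand, center_*, size_*, out) are standard Vina configuration options.

        from pathlib import Path

        ligand = "ligand.pdbqt"
        receptors = ["receptor_A.pdbqt", "receptor_B.pdbqt"]  # screened targets

        for rec in receptors:
            stem = Path(rec).stem
            Path(stem + "_conf.txt").write_text(
                f"receptor = {rec}\n"
                f"ligand = {ligand}\n"
                "center_x = 0.0\ncenter_y = 0.0\ncenter_z = 0.0\n"
                "size_x = 20\nsize_y = 20\nsize_z = 20\n"
                f"out = {stem}_out.pdbqt\n")
            print("wrote", stem + "_conf.txt")  # then: vina --config <file>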

  6. 16 CFR Appendix D to Part 698 - Standardized form for requesting annual file disclosures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    16 CFR Part 698, Appendix D (Commercial Practices; Federal Trade Commission; Fair Credit Reporting Act, Model Forms and Disclosures): Standardized form for requesting annual file disclosures.

  7. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.

  8. Report on IVS-WG4

    NASA Astrophysics Data System (ADS)

    Gipson, John

    2011-07-01

    I describe the proposed data structure for storing, archiving and processing VLBI data. In this scheme, most VLBI data are stored in NetCDF files. NetCDF has the advantage that there are interfaces to most common computer languages, including Fortran, Fortran 90, C, C++, Perl, etc., and to the most common operating systems, including Linux, Windows and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data, and also allows for extending the types of data used, e.g., source maps. I discuss the use of the new format in calc/solve and other VLBI analysis packages. I also discuss plans for transitioning to the new structure.
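
    Writing and reading one such data file is straightforward with the netCDF4 package. The dimension, variable, and file names below are illustrative, not the naming convention proposed by the working group.

        import numpy as np
        from netCDF4 import Dataset

        with Dataset("session_delay.nc", "w") as nc:  # write one data file
            nc.createDimension("obs", 4)
            delay = nc.createVariable("group_delay", "f8", ("obs",))
            delay.units = "seconds"
            delay[:] = np.array([1.2e-8, 1.3e-8, 1.1e-8, 1.4e-8])

        with Dataset("session_delay.nc") as nc:       # read it back
            print(nc.variables["group_delay"][:])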

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    SmartImport.py is a Python source-code file that implements a replacement for the standard Python module importer. The code is derived from knee.py, a file in the standard Python distribution, and adds functionality to improve the performance of Python module imports in massively parallel contexts.
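
    The general mechanism behind such an importer replacement: a custom finder placed at the front of sys.meta_path sees every import request before the standard machinery does. The logging no-op below sketches the hook only and is not the SmartImport.py code.

        import sys

        class ImportLogger:
            def find_spec(self, name, path=None, target=None):
                print(f"import requested: {name}")
                return None  # defer to the normal importers

        sys.meta_path.insert(0, ImportLogger())
        import json  # prints "import requested: json" (unless already imported)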

  10. AITRAC: Augmented Interactive Transient Radiation Analysis by Computer. User's information manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-10-01

    AITRAC is a program designed for on-line, interactive, DC, and transient analysis of electronic circuits. The program solves linear and nonlinear simultaneous equations which characterize the mathematical models used to predict circuit response. The program features 100-external-node, 200-branch capability; a conversational, free-format input language; built-in junction, FET, MOS, and switch models; a sparse matrix algorithm with extended-precision H matrix and T vector calculations, for fast and accurate execution; linear transconductances: beta, GM, MU, ZM; accurate and fast radiation effects analysis; a special interface for user-defined equations; selective control of multiple outputs; graphical outputs in wide and narrow formats; and on-line parameter modification capability. The user describes the problem by entering the circuit topology and part parameters. The program then automatically generates and solves the circuit equations, providing the user with printed or plotted output. The circuit topology and/or part values may then be changed by the user, and a new analysis requested. Circuit descriptions may be saved on disk files for storage and later use. The program contains built-in standard models for resistors, voltage and current sources, capacitors, inductors including mutual couplings, switches, junction diodes and transistors, FETs, and MOS devices. Nonstandard models may be constructed from standard models or by using the special equations interface. Time functions may be described by straight-line segments or by sine, damped sine, and exponential functions. 42 figures, 1 table. (RWR)

  11. Knob manager (KM) operators guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-10-08

    KM, the Knob Manager, is a tool which enables the user to use the SUNDIALS knob box to adjust the settings of the control system. The following are some features of KM: dynamic knob assignment with a user-friendly interface; user-defined gain for each individual knob; graphical displays of the operating range and status of each assigned process variable; backup and restore of one or multiple process variables; and saving the current settings to a file so that they can be recalled in the future.

  12. Data General Corporation Advanced Operating System/Virtual Storage (AOS/ VS). Revision 7.60

    DTIC Science & Technology

    1989-02-22

    ...control list for each directory and data file. An access control list includes the users who can and cannot access files as well as the access... and any required data, it can operate asynchronously and in parallel... memory. The IOC can perform the data transfer without further intervention from the CPU. The I/O channels interface with the processor or system...

  13. The Interface Theory of Perception.

    PubMed

    Hoffman, Donald D; Singh, Manish; Prakash, Chetan

    2015-12-01

    Perception is a product of evolution. Our perceptual systems, like our limbs and livers, have been shaped by natural selection. The effects of selection on perception can be studied using evolutionary games and genetic algorithms. To this end, we define and classify perceptual strategies and allow them to compete in evolutionary games in a variety of worlds with a variety of fitness functions. We find that veridical perceptions--strategies tuned to the true structure of the world--are routinely dominated by nonveridical strategies tuned to fitness. Veridical perceptions escape extinction only if fitness varies monotonically with truth. Thus, a perceptual strategy favored by selection is best thought of not as a window on truth but as akin to a windows interface of a PC. Just as the color and shape of an icon for a text file do not entail that the text file itself has a color or shape, so also our perceptions of space-time and objects do not entail (by the Invention of Space-Time Theorem) that objective reality has the structure of space-time and objects. An interface serves to guide useful actions, not to resemble truth. Indeed, an interface hides the truth; for someone editing a paper or photo, seeing transistors and firmware is an irrelevant hindrance. For the perceptions of H. sapiens, space-time is the desktop and physical objects are the icons. Our perceptions of space-time and objects have been shaped by natural selection to hide the truth and guide adaptive behaviors. Perception is an adaptive interface.

  14. The Chandra Source Catalog: Storage and Interfaces

    NASA Astrophysics Data System (ADS)

    van Stone, David; Harbo, Peter N.; Tibbetts, Michael S.; Zografou, Panagoula; Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is part of the Chandra Data Archive (CDA) at the Chandra X-ray Center. The catalog contains source properties and associated data objects such as images, spectra, and lightcurves. The source properties are stored in relational databases and the data objects are stored in files with their metadata stored in databases. The CDA supports different versions of the catalog: multiple fixed release versions and a live database version. There are several interfaces to the catalog: CSCview, a graphical interface for building and submitting queries and for retrieving data objects; a command-line interface for property and source searches using ADQL; and VO-compliant services discoverable through the VO registry. This poster describes the structure of the catalog and provides an overview of the interfaces.

  15. Rapid Diagnostics of Onboard Sequences

    NASA Technical Reports Server (NTRS)

    Starbird, Thomas W.; Morris, John R.; Shams, Khawaja S.; Maimone, Mark W.

    2012-01-01

    Keeping track of sequences onboard a spacecraft is challenging. When reviewing Event Verification Records (EVRs) of sequence executions on the Mars Exploration Rover (MER), operators often found themselves wondering which version of a named sequence the EVR corresponded to. The lack of this information drastically impacts the operators' diagnostic capabilities as well as their situational awareness with respect to the commands the spacecraft has executed, since the EVRs do not provide argument values or explanatory comments. Having this information immediately available can be instrumental in diagnosing critical events and can significantly enhance the overall safety of the spacecraft. This software provides auditing capability that can eliminate that uncertainty while diagnosing critical conditions. Furthermore, the RESTful interface provides a simple way for sequencing tools to automatically retrieve binary compiled sequence SCMFs (Space Command Message Files) on demand. It also enables developers to change the underlying database, while maintaining the same interface to the existing applications. The logging capabilities are also beneficial to operators when they are trying to recall how they solved a similar problem many days ago: this software enables automatic recovery of SCMF and RML (Robot Markup Language) sequence files directly from the command EVRs, eliminating the need for people to find and validate the corresponding sequences. To address the lack of auditing capability for sequences onboard a spacecraft during earlier missions, extensive logging support was added on the Mars Science Laboratory (MSL) sequencing server. This server is responsible for generating all MSL binary SCMFs from RML input sequences. The sequencing server logs every SCMF it generates into a MySQL database, as well as the high-level RML file and dictionary name inputs used to create the SCMF. The SCMF is then indexed by a hash value that is automatically included in all command EVRs by the onboard flight software. In addition, both the binary SCMF result and the RML input file can be retrieved simply by specifying the hash to a RESTful web interface. This interface enables command line tools as well as large sophisticated programs to download the SCMF and RMLs on-demand from the database, enabling a vast array of tools to be built on top of it. One such command line tool can retrieve and display RML files, or annotate a list of EVRs by interleaving them with the original sequence commands. This software has been integrated with the MSL sequencing pipeline where it will serve sequences useful in diagnostics, debugging, and situational awareness throughout the mission.
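
    The retrieval idea can be sketched in a few lines: a hash computed over the binary SCMF is the key under which the RESTful interface serves the original files. The hash algorithm and endpoint URL below are hypothetical.

        import hashlib
        import requests

        scmf_bytes = open("sequence.scmf", "rb").read()
        digest = hashlib.sha256(scmf_bytes).hexdigest()  # assumed algorithm

        resp = requests.get(f"http://seq-server.example/scmf/{digest}", timeout=30)
        if resp.ok:
            open("recovered.scmf", "wb").write(resp.content)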

  16. BFEE: A User-Friendly Graphical Interface Facilitating Absolute Binding Free-Energy Calculations.

    PubMed

    Fu, Haohao; Gumbart, James C; Chen, Haochuan; Shao, Xueguang; Cai, Wensheng; Chipot, Christophe

    2018-03-26

    Quantifying protein-ligand binding has attracted the attention of both theorists and experimentalists for decades. Many methods for estimating binding free energies in silico have been reported in recent years. Proper use of the proposed strategies requires, however, adequate knowledge of the protein-ligand complex, the mathematical background for deriving the underlying theory, and time for setting up the simulations, bookkeeping, and postprocessing. Here, to minimize human intervention, we propose a toolkit aimed at facilitating the accurate estimation of standard binding free energies using a geometrical route, coined the binding free-energy estimator (BFEE), and introduce it as a plug-in of the popular visualization program VMD. Benefitting from recent developments in new collective variables, BFEE can be used to generate the simulation input files, based solely on the structure of the complex. Once the simulations are completed, BFEE can also be utilized to perform the post-treatment of the free-energy calculations, allowing the absolute binding free energy to be estimated directly from the one-dimensional potentials of mean force in simulation outputs. The minimal amount of human intervention required during the whole process combined with the ergonomic graphical interface makes BFEE a very effective and practical tool for the end-user.

  17. Managing hydrological measurements for small and intermediate projects: RObsDat

    NASA Astrophysics Data System (ADS)

    Reusser, Dominik E.

    2014-05-01

    Hydrological measurements need good management so that the data are not lost. Multiple, often overlapping files from various loggers with heterogeneous formats need to be merged. Data need to be validated and cleaned and subsequently converted to the format required by the hydrological target application. Preferably, all these steps should be easily traceable. RObsDat is an R package designed to support such data management. It comes with a command-line user interface to support hydrologists in entering and adjusting their data in a database following the Observations Data Model (ODM) standard by CUAHSI. RObsDat helps in the setup of the database within one of the free database engines MySQL, PostgreSQL or SQLite. It imports the controlled water vocabulary from the CUAHSI web service and provides a smart interface between the hydrologist and the database: already existing data entries are detected and duplicates avoided. The data import function converts different data table designs to make import simple. Cleaning and modifications of data are handled with a simple version control system. Variable and location names are treated in a user-friendly way, accepting and processing multiple versions. A new development is the use of spacetime objects for subsequent processing.

  18. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1994-01-01

    This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low level detailed implementation standards to those interfaces points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  19. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  20. 48 CFR 9901.316 - Files and records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 CFR 9901.316, Files and records (Federal Acquisition Regulations System; Cost Accounting Standards Board, Office of...): ... the minutes of Board proceedings and public hearings. (b) Cost accounting standards promulgated...

  1. Traffic Generator (TrafficGen) Version 1.4.2: Users Guide

    DTIC Science & Technology

    2016-06-01

    events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be... the network with Transmission Control Protocol and User Datagram Protocol/Internet Protocol traffic. Each node generating network traffic in an... [Table-of-contents fragment: TrafficGen Graphical User Interface (GUI); Anatomy of the User Interface; Scenario Configuration and MGEN Files; Working with...]

  2. Development of the geometry database for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Akishina, E. P.; Alexandrov, E. I.; Alexandrov, I. N.; Filozova, I. A.; Friese, V.; Ivanov, V. V.

    2018-01-01

    The paper describes the current state of the Geometry Database (Geometry DB) for the CBM experiment. The main purpose of this database is to provide convenient tools for: (1) managing the geometry modules; (2) assembling various versions of the CBM setup as a combination of geometry modules and additional files. The CBM users of the Geometry DB may use both GUI (Graphical User Interface) and API (Application Programming Interface) tools for working with it.

  3. Workflow opportunities using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Foshee, Scott

    2002-11-01

    JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features that enable image workflows in four specific areas: compression, file formats, client-server, and conformance/compliance.

  4. MESAFace, a graphical interface to analyze the MESA output

    NASA Astrophysics Data System (ADS)

    Giannotti, M.; Wise, M.; Mohammed, A.

    2013-04-01

    MESA (Modules for Experiments in Stellar Astrophysics) has become very popular among astrophysicists as a powerful and reliable code to simulate stellar evolution. Analyzing the output data thoroughly may, however, present some challenges and be rather time-consuming. Here we describe MESAFace, a graphical and dynamical interface which provides an intuitive, efficient and quick way to analyze the MESA output.

    Catalogue identifier: AEOQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 19165
    No. of bytes in distributed program, including test data, etc.: 6300592
    Distribution format: tar.gz
    Programming language: Mathematica
    Computer: Any computer capable of running Mathematica
    Operating system: Any capable of running Mathematica; tested on Linux, Mac, Windows XP, Windows 7
    RAM: 2 Gigabytes or more recommended
    Supplementary material: Additional test data files are available
    Classification: 1.7, 14
    Nature of problem: Find a way to quickly and thoroughly analyze the output of a MESA run, including all the profiles, and have an efficient method to produce graphical representations of the data.
    Solution method: We created two scripts (to be run consecutively). The first one downloads all the data from a MESA run and organizes the profiles in order of age. All the files are saved as tables or arrays of tables which can then be accessed very quickly by Mathematica. The second script uses the Manipulate function to create a graphical interface which allows the user to choose what to plot from a set of menus and buttons. The information shown is updated in real time. The user can access very quickly all the data from the run under examination and visualize it with plots and tables.
    Unusual features: Moving the slides in certain regions may cause an error message. This happens when Mathematica is asked to read nonexistent data. The error message, however, disappears when the slides are moved back. This issue does not preclude the good functioning of the interface.
    Additional comments: The program uses the dynamical capabilities of Mathematica. When the program is opened, Mathematica prompts the user to “Enable Dynamics”. It is necessary to accept before proceeding.
    Running time: Depends on the size of the data downloaded, on where the data are stored (hard-drive or web), and on the speed of the computer or network connection. In general, downloading the data may take from a minute to several minutes. Loading directly from the web is slower. For example, downloading a 200 MB data folder (a total of 102 files) with a dual-core Intel laptop, P8700, 2 GB of RAM, at 2.53 GHz took about a minute from the hard-drive and about 23 min from the web (with a basic home wireless connection).

  5. 75 FR 38026 - Medicare Program; Identification of Backward Compatible Version of Adopted Standard for E...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... Programs (NCPDP) Prescriber/Pharmacist Interface SCRIPT standard, Implementation Guide, Version 10... Prescriber/Pharmacist Interface SCRIPT standard, Version 8, Release 1 and its equivalent NCPDP Prescriber/Pharmacist Interface SCRIPT Implementation Guide, Version 8, Release 1 (hereinafter referred to as the...

  6. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the fields of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.
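
    As an aside on the regional standardized uptake value ratio (SUVR) mentioned above: the quantity is simply mean tracer uptake in a target region divided by mean uptake in a reference region. A minimal sketch, with invented values rather than actual pipeline output:

    ```python
    # Illustrative SUVR computation; the ROI means below are made up, and the
    # actual pipeline derives its regions from FreeSurfer segmentations.
    target_uptake = [1.42, 1.38, 1.51]     # hypothetical target-ROI voxel means
    reference_uptake = [1.10, 1.05, 1.08]  # hypothetical cerebellar reference

    suvr = (sum(target_uptake) / len(target_uptake)) / \
           (sum(reference_uptake) / len(reference_uptake))
    print(f"SUVR = {suvr:.2f}")
    ```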

  7. 76 FR 56748 - Combined Notice of Filings #2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice...)(2)(iii: Amend Standard LGIA Desert Sunlight Project to be effective 9/7/2011. Filed Date: 09/06/2011...: Wabash Valley Power Association, Inc. submits tariff filing per 35.13(a)(2)(iii: Amendment to add MISO...

  8. 20 CFR 645.270 - What procedures are there to ensure that currently employed workers may file grievances regarding...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... currently employed workers may file grievances regarding displacement and that Welfare-to-Work participants in employment activities may file grievances regarding displacement, health and safety standards and... regarding displacement and that Welfare-to-Work participants in employment activities may file grievances...

  9. 20 CFR 645.270 - What procedures are there to ensure that currently employed workers may file grievances regarding...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... currently employed workers may file grievances regarding displacement and that Welfare-to-Work participants in employment activities may file grievances regarding displacement, health and safety standards and... regarding displacement and that Welfare-to-Work participants in employment activities may file grievances...

  10. 20 CFR 645.270 - What procedures are there to ensure that currently employed workers may file grievances regarding...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... currently employed workers may file grievances regarding displacement and that Welfare-to-Work participants in employment activities may file grievances regarding displacement, health and safety standards and... regarding displacement and that Welfare-to-Work participants in employment activities may file grievances...

  11. 20 CFR 645.270 - What procedures are there to ensure that currently employed workers may file grievances regarding...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... currently employed workers may file grievances regarding displacement and that Welfare-to-Work participants in employment activities may file grievances regarding displacement, health and safety standards and... regarding displacement and that Welfare-to-Work participants in employment activities may file grievances...

  12. 20 CFR 645.270 - What procedures are there to ensure that currently employed workers may file grievances regarding...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... currently employed workers may file grievances regarding displacement and that Welfare-to-Work participants in employment activities may file grievances regarding displacement, health and safety standards and... regarding displacement and that Welfare-to-Work participants in employment activities may file grievances...

  13. 29 CFR 409.4 - Personal responsibility for filing of reports.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 2 2011-07-01 2011-07-01 false Personal responsibility for filing of reports. 409.4... LABOR LABOR-MANAGEMENT STANDARDS REPORTS BY SURETY COMPANIES § 409.4 Personal responsibility for filing of reports. Each individual required to file a report under section 211 of the Labor-Management...

  14. 29 CFR 409.4 - Personal responsibility for filing of reports.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 2 2010-07-01 2010-07-01 false Personal responsibility for filing of reports. 409.4... LABOR LABOR-MANAGEMENT STANDARDS REPORTS BY SURETY COMPANIES § 409.4 Personal responsibility for filing of reports. Each individual required to file a report under section 211 of the Labor-Management...

  15. Flexible Architecture for FPGAs in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Clark, Duane I.; Lim, Chester N.

    2012-01-01

    Commonly, field-programmable gate arrays (FPGAs) being developed in cPCI embedded systems include the bus interface in the FPGA. This complicates development because the bus interface is complex and consumes considerable development time and FPGA resources. In addition, flight qualification requires that a substantial amount of time be devoted to just this interface. Another complication of putting the cPCI interface into the FPGA being developed is that configuration information loaded into the device by the cPCI microprocessor is lost when a new bit file is loaded, requiring cumbersome operations to return the system to an operational state. Finally, SRAM-based FPGAs are typically programmed via specialized cables and software, with programming files being loaded either directly into the FPGA, or into PROM devices. This can be cumbersome when doing FPGA development in an embedded environment, and does not have an easy path to flight. Currently, FPGAs used in space applications are usually programmed via multiple space-qualified PROM devices that are physically large and require extra circuitry (typically including a separate one-time programmable FPGA) to enable them to be used for this application. This technology adds a cPCI interface device with a simple, flexible, high-performance backend interface supporting multiple backend FPGAs. It includes a mechanism for programming the FPGAs directly via the microprocessor in the embedded system, eliminating specialized hardware, software, and PROM devices and their associated circuitry. It has a direct path to flight, and no extra hardware and minimal software are required to support reprogramming in flight. The device added is currently a small FPGA, but an advantage of this technology is that the design of the device does not change, regardless of the application in which it is being used. This means that it needs to be qualified for flight only once, and is suitable for one-time programmable devices or an application-specific integrated circuit (ASIC). An application programming interface (API) further reduces the development time needed to use the interface device in a system.
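
    The in-system programming path described above amounts to the embedded processor streaming the bit file through the interface device to the target FPGA. A very rough sketch of that idea, with an entirely hypothetical device node and no real driver protocol:

    ```python
    # Hypothetical illustration only: stream a configuration bit file to an
    # FPGA through a device file exposed by the cPCI interface device.
    # "/dev/fpga_cfg" and the chunked write protocol are assumptions.
    CHUNK = 4096
    with open("design.bit", "rb") as bitfile, open("/dev/fpga_cfg", "wb") as dev:
        while chunk := bitfile.read(CHUNK):
            dev.write(chunk)
    ```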

  16. A digital repository with an extensible data model for biobanking and genomic analysis management.

    PubMed

    Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi

    2014-01-01

    Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations with increasing data sharing among institutions. A single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data are described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients, and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can significantly improve interdisciplinary research collaboration and makes it possible to track patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid.
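
    One way to picture the process/event hierarchy described above is as nested JSON documents. The sketch below is illustrative only; the field names are assumptions, not the repository's actual schema:

    ```python
    # Illustrative sketch of the paper's process/event hierarchy in JSON.
    # All field and file names here are invented for illustration.
    import json

    process = {
        "type": "process",
        "name": "neuroblastoma-study-01",
        "events": [
            {
                "type": "event",
                "name": "sample-collection",
                "metadata": {"tissue": "bone marrow", "operator": "lab-A"},
                "files": ["sample_0042.raw"],
            },
            {
                "type": "event",
                "name": "microarray-analysis",
                "metadata": {"platform": "Affymetrix", "normalization": "RMA"},
                "files": ["sample_0042_expr.cel"],
            },
        ],
    }
    print(json.dumps(process, indent=2))
    ```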

  17. A digital repository with an extensible data model for biobanking and genomic analysis management

    PubMed Central

    2014-01-01

    Motivation: Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations with increasing data sharing among institutions. A single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. Results: We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data are described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients, and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Conclusions: Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can significantly improve interdisciplinary research collaboration and makes it possible to track patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid. PMID:25077808

  18. ACR/NEMA Digital Image Interface Standard (An Illustrated Protocol Overview)

    NASA Astrophysics Data System (ADS)

    Lawrence, G. Robert

    1985-09-01

    The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have sponsored a joint standards committee mandated to develop a universal interface standard for the transfer of radiology images among a variety of PACS imaging devices. The resulting standard interface conforms to the ISO/OSI standard reference model for network protocol layering. The standard interface specifies the lower layers of the reference model (Physical, Data Link, Transport and Session) and implies a requirement of the Network Layer should a requirement for a network exist. The message content has been considered and a flexible message and image format specified. The following imaging equipment modalities are supported by the standard interface: CT (computed tomography), DS (digital subtraction), NM (nuclear medicine), US (ultrasound), MR (magnetic resonance), and DR (digital radiology). The following data types are standardized over the transmission interface media: image data, digitized voice, header data, raw data, text reports, graphics, and others. This paper consists of text supporting the illustrated protocol data flow. Each layer will be individually treated. Particular emphasis will be given to the Data Link layer (frames) and the Transport layer (packets). The discussion utilizes a finite state sequential machine model for the protocol layers.

  19. SWE-based Observation Data Delivery from the Instrument to the User - Sensor Web Technology in the NeXOS Project

    NASA Astrophysics Data System (ADS)

    Jirka, Simon; del Rio, Joaquin; Toma, Daniel; Martinez, Enoc; Delory, Eric; Pearlman, Jay; Rieke, Matthes; Stasch, Christoph

    2017-04-01

    With the rapidly evolving technology for building Web-based (spatial) information infrastructures and Sensor Webs, there are new opportunities to improve the process by which ocean data are collected and managed. A central element in this development is the suite of Sensor Web Enablement (SWE) standards specified by the Open Geospatial Consortium (OGC). This framework of standards comprises on the one hand data models and formats for measurement data (ISO/OGC Observations and Measurements, O&M) and metadata describing measurement processes and sensors (OGC Sensor Model Language, SensorML). On the other hand, the SWE standards comprise (Web service) interface specifications for pull-based access to observation data (OGC Sensor Observation Service, SOS) and for controlling or configuring sensors (OGC Sensor Planning Service, SPS). Within the European INSPIRE framework, too, the SWE standards play an important role, as the SOS is the recommended download service interface for O&M-encoded observation data sets. In the context of the EU-funded Oceans of Tomorrow initiative, the NeXOS (Next generation, Cost-effective, Compact, Multifunctional Web Enabled Ocean Sensor Systems Empowering Marine, Maritime and Fisheries Management) project is developing a new generation of in-situ sensors that make use of the SWE standards to facilitate the data publication process and the integration into Web-based information infrastructures. This includes the development of a dedicated firmware for instruments and sensor platforms (SEISI, Smart Electronic Interface for Sensors and Instruments) maintained by the Universitat Politècnica de Catalunya (UPC). Among other features, SEISI makes use of OGC SWE standards such as OGC-PUCK to enable a plug-and-play mechanism for sensors based on SensorML-encoded metadata. Thus, if a new instrument is attached to a SEISI-based platform, the platform automatically configures the connection to the instrument, automatically generates data files compliant with the ISO/OGC Observations and Measurements standard, and initiates the data transmission into the NeXOS Sensor Web infrastructure. Besides these platform-related developments, NeXOS has realised the full path of data transmission from the sensor to the end user application. The conceptual architecture design is implemented by a series of open source SWE software packages provided by 52°North. This comprises especially the different SWE server components (i.e., the OGC Sensor Observation Service), tools for data visualisation (e.g., the 52°North Helgoland SOS viewer), and an editor for providing SensorML-based metadata (52°North smle). As a result, NeXOS has demonstrated how the SWE standards help to improve marine observation data collection. Within this presentation, we will present the experiences and findings of the NeXOS project and will provide recommendations for future work directions.
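
    For readers unfamiliar with SWE, the pull-based access pattern mentioned above boils down to HTTP requests with standard key-value parameters. A hedged sketch of a KVP GetObservation call against an SOS 2.0 endpoint (the endpoint and offering below are placeholders, not real NeXOS services):

    ```python
    # Hedged sketch: build a KVP GetObservation request for an OGC SOS 2.0
    # endpoint. The endpoint URL, offering, and observed property are
    # placeholders; the response would be an O&M-encoded XML document.
    from urllib.parse import urlencode

    endpoint = "https://example.org/sos/service"  # placeholder endpoint
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "ocean-sensor-1",                 # hypothetical offering
        "observedProperty": "sea_water_temperature",  # hypothetical property
    }
    print(endpoint + "?" + urlencode(params))
    ```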

  20. A Standard-Driven Data Dictionary for Data Harmonization of Heterogeneous Datasets in Urban Geological Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, G.; Wu, C.; Li, X.; Song, P.

    2013-12-01

    The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in the urban geological databases. There are various models and vocabularies drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and different coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: a model data dictionary manages system database files and eases maintenance of the whole database system; an attribute dictionary organizes the fields used in database tables; and a term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods. In addition, a comprehensive data dictionary manages system operation and security. (3) An extension of the system data management functions based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent term use for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content supplied via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
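
    The mapping-table idea in item (1) can be pictured as a dictionary from free-text industrial terms to a single standard code. The terms and codes below are invented for illustration and are not taken from GB 9649-88:

    ```python
    # Minimal sketch of the term-and-code dictionary idea: duplicate industry
    # vocabulary terms map onto one standard code so that records from
    # different sources integrate cleanly. All codes/terms are invented.
    STANDARD_CODES = {
        "silty clay": "GEO-SOIL-0123",
        "muddy clay": "GEO-SOIL-0123",   # duplicate industry term, same concept
        "fine sand":  "GEO-SOIL-0456",
    }

    def standardize(record):
        """Replace a free-text lithology term with its standard code."""
        term = record["lithology"].strip().lower()
        record["lithology_code"] = STANDARD_CODES.get(term, "UNMAPPED")
        return record

    print(standardize({"borehole": "FZ-07", "lithology": "Muddy Clay"}))
    ```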

  1. Betty Petersen Memorial Library - NCWCP Publications - NWS

    Science.gov Websites

    [Fragment of an NCWCP publications list; recoverable entries: Polger P., "Comparative Analysis of a New Integration Method With Certain Standard Methods" (.PDF file); Gerrity J., "Elemental-Filter Design Considerations" (.PDF file, 1978); Shuman F. G. (entry truncated)]

  2. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats, and the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically. Consumer image file formats, by contrast, include only technical metadata and need to be used in conjunction with another actor (for example, an electronic medical record) to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
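
    The DICOM advantage described above (patient and study metadata traveling inside the image file itself) can be seen with a few lines of Python using the third-party pydicom package; the file name is a placeholder:

    ```python
    # Sketch: read patient-, study-, and technical-level metadata from a DICOM
    # file. Requires pydicom; "lesion.dcm" is a placeholder file name.
    import pydicom

    ds = pydicom.dcmread("lesion.dcm")
    # Patient- and study-level metadata, absent from consumer formats like JPEG:
    print(ds.PatientID, ds.StudyDate, ds.Modality)
    # Technical metadata comparable to what consumer formats carry:
    print(ds.Rows, ds.Columns)
    ```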

  3. STScI Archive Manual, Version 7.0

    NASA Astrophysics Data System (ADS)

    Padovani, Paolo

    1999-06-01

    The STScI Archive Manual provides the information a user needs to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It provides descriptions of the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. Using StarView, one can also find calibration reference files and perform detailed association searches. With the WWW interface, archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets which include, amongst others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver which simplifies searches based on target name.

  4. 75 FR 81601 - North American Electric Reliability Corporation; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... Electric Reliability Corporation; Notice of Compliance Filing December 20, 2010. Take notice that on December 1, 2010, North American Electric Reliability Corporation, in response to Paragraph 274 of the... Transfer Capability Reliability Standards. \\1\\ Mandatory Reliability Standards for the Calculation of...

  5. AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K memory and one 360K disk drive. To execute the GPSS program, the PC must have the GPSS/PC System Version 2.0 from Minuteman Software resident. The AMPS/PC program was developed in 1988.
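
    The generator pattern described above, a problem specification file driving emission of simulation source code, can be sketched briefly. This Python fragment is a loose illustration, not the original Turbo Pascal implementation, and both the spec format and the GPSS fragment are simplified:

    ```python
    # Hedged sketch of the AMPS/PC idea: turn a small problem specification
    # into GPSS-style simulation source. Spec format and output are simplified.
    SPEC = {"stations": [("DRILL", 5.0), ("ASSEMBLE", 8.0)]}  # name, mean time

    def generate_gpss(spec):
        lines = ["        SIMULATE"]
        for name, mean in spec["stations"]:
            lines += [
                f"        SEIZE    {name}",
                f"        ADVANCE  {mean}",
                f"        RELEASE  {name}",
            ]
        lines.append("        END")
        return "\n".join(lines)

    print(generate_gpss(SPEC))
    ```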

  6. Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis

    PubMed Central

    Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun Daniel; Carlson, Thomas J.

    2012-01-01

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the recorded sound metrics with biological measures such as audiograms of the sensitivity of aquatic animals to sound, integrating the various components into a single analytical frame. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality. PMID:22969353
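
    The calibration step mentioned above, converting raw recorder counts into physical units before computing level metrics, can be illustrated as follows; the ADC range and hydrophone sensitivity are made-up example values, not parameters of any real recorder:

    ```python
    # Illustrative calibration and sound-pressure-level computation.
    # All calibration constants below are invented examples.
    import numpy as np

    counts = np.array([1200, -900, 450, -1800, 700], dtype=float)  # raw samples
    volts_per_count = 2.5 / 2**15        # example 16-bit ADC with 2.5 V range
    sensitivity_v_per_pa = 0.004         # hypothetical hydrophone sensitivity

    pressure_pa = counts * volts_per_count / sensitivity_v_per_pa
    rms_pa = np.sqrt(np.mean(pressure_pa**2))
    spl_db = 20 * np.log10(rms_pa / 1e-6)  # dB re 1 uPa, underwater reference
    print(f"SPL = {spl_db:.1f} dB re 1 uPa")
    ```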

  7. AIAA spacecraft GN&C interface standards initiative: Overview

    NASA Technical Reports Server (NTRS)

    Challoner, A. Dorian

    1995-01-01

    The American Institute of Aeronautics and Astronautics (AIAA) has undertaken an important standards initiative in the area of spacecraft guidance, navigation, and control (GN&C) subsystem interfaces. The objective of this effort is to establish standards that will promote interchangeability of major GN&C components, thus enabling substantially lower spacecraft development costs. Although initiated by developers of conventional spacecraft GN&C, it is anticipated that interface standards will also be of value in reducing the development costs of micro-engineered spacecraft. The standardization targets are specifically limited to interfaces only, including information (i.e. data and signal), power, mechanical, thermal, and environmental interfaces between various GN&C components and between GN&C subsystems and other subsystems. The current emphasis is on information interfaces between various hardware elements (e.g., between star trackers and flight computers). The poster presentation will briefly describe the program, including the mechanics and schedule, and will publicize the technical products as they exist at the time of the conference. In particular, the rationale for the adoption of the AS1773 fiber-optic serial data bus and the status of data interface standards at the application layer will be presented.

  8. Corps Helicopter Attack Planning System (CHAPS). Positional Handbook. Appendix A. Messages. Appendix B. Statespace Construction Sample Session

    DTIC Science & Technology

    1989-10-01

    REVIEW MENU PROGRAM(S): CHAPS PURPOSE AND OVERVIEW: The Do Review menu allows the user to select which missions to perform detailed analysis on and...input files must be resident on the computer you are running SUPR on. Any interface or file transfer programs must be successfully executed prior to... COMPUTER PROGRAM WAS DEVELOPED BY SYSTEMS CONTROL TECHNOLOGY FOR THE DEPUTY CHIEF OF STAFF/OPERATIONS, HQ USAFE. THE USE OF THE COMPUTER PROGRAM IS

  9. Inertial Manifolds for Navier-Stokes Equations and Related Dynamical Systems

    DTIC Science & Technology

    1991-05-31

    Graphics IRIS (SGI). The RLE files for the animation are loaded to an Abekas and recorded to tape by Betacam. This computational work was done using the...scripts and comments, are loaded to the Abekas-A60 digital image storage device, and then recorded to the Betacam BVW-75 analog tape recorder. Static...interfacing, huge data files are output to the Data Vault in parallel with little cost. In addition to the SGIs, Abekas, Betacam and Solitaire, the

  10. Conversion of the Forces Mobilization Model (FORCEMOB) from FORTRAN to C

    DTIC Science & Technology

    2015-08-01

    300 K !’"vale Data 18.192 K 136 K Slack 2.560 K 84 K Mapped File 412 K 412 K Sharel!ble 5.444 K 4.440 K Managed Heap - r age Table l.klusable...the C version of FORCEMOB is ready for operational use. This page is intentionally blank. v Contents 1. Introduction...without a graphical user interface (GUI): once run, FORCEMOB reads user-created input files, performs mathematical operations upon them, and outputs text

  11. Chain of Custody Item Monitor Message Viewer v.1.0 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Steven Robert; Fielder, Laura; Hymel, Ross W.

    The CoCIM Message Viewer software allows users to connect to and download messages from a Chain of Custody Item Monitor (CoCIM) connected to a serial port on the user’s computer. The downloaded messages are authenticated and displayed in a Graphical User Interface that allows the user a limited degree of sorting and filtering of the downloaded messages as well as the ability to save downloaded files or to open previously downloaded message history files.

  12. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight application do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  13. Automating linear accelerator quality assurance.

    PubMed

    Eckhause, Tobias; Al-Hallaq, Hania; Ritter, Timothy; DeMarco, John; Farrey, Karl; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Perez, Mario; Park, SungYong; Booth, Jeremy T; Thorwarth, Ryan; Moran, Jean M

    2015-10-01

    The purpose of this study was twofold: first, to develop an automated, streamlined quality assurance (QA) program for use by multiple centers, and second, to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac, including jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold-off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The standard deviation in MLC position, as determined by EPID measurements, across the consortium was 0.33 mm for IMRT fields. With respect to the log files, the deviations between expected and actual positions for parameters were small (<0.12 mm) for all Linacs. Considering both log files and EPID measurements, all parameters were well within published tolerance values. Variations in collimator angle, MLC position, and gantry sag were also evaluated for all Linacs. The performance of the TrueBeam Linac model was shown to be consistent based on automated analysis of trajectory log files and EPID images acquired during delivery of a standardized test suite. The results can be compared directly to tolerance thresholds. In addition, sharing of results from standard tests across institutions can facilitate the identification of QA process and Linac changes. These reference values are presented along with the standard deviations for common tests so that the test suite can be used by other centers to evaluate their Linac performance against those in this consortium.
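
    The core log-file check described above reduces to comparing expected and actual positions against a tolerance. A toy sketch follows; the record layout is an assumption (real TrueBeam trajectory logs are binary and vendor-specific), and the threshold is not a published value:

    ```python
    # Illustrative deviation check: flag any MLC leaf whose actual position
    # deviates from its expected position by more than a tolerance.
    TOLERANCE_MM = 0.5  # example threshold, not a published tolerance

    records = [
        {"leaf": 12, "expected_mm": 34.10, "actual_mm": 34.13},
        {"leaf": 27, "expected_mm": -5.00, "actual_mm": -5.71},
    ]

    for r in records:
        dev = abs(r["actual_mm"] - r["expected_mm"])
        status = "FLAG" if dev > TOLERANCE_MM else "ok"
        print(f"leaf {r['leaf']:3d}: deviation {dev:.2f} mm [{status}]")
    ```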

  14. Standards for the user interface - Developing a user consensus. [for Space Station Information System

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Perkins, Dorothy C.; Szczur, Martha R.

    1987-01-01

    The user support environment (USE), a set of software tools for a flexible standard interactive user interface to the Space Station systems, platforms, and payloads, is described in detail. Included in the USE concept are a user interface language, a run-time environment and user interface management system, support tools, and standards for human interaction methods. The goals and challenges of the USE are discussed, as well as a methodology based on prototype demonstrations for involving users in the process of validating the USE concepts. By prototyping the key concepts and salient features of the proposed user interface standards, the user's ability to respond is greatly enhanced.

  15. Standard Spacecraft Interfaces and IP Network Architectures: Prototyping Activities at the GSFC

    NASA Technical Reports Server (NTRS)

    Schnurr, Richard; Marquart, Jane; Lin, Michael

    2003-01-01

    Advancements in flight semiconductor technology have opened the door for IP-based networking in spacecraft architectures. The GSFC believes the same significant cost savings gained using MIL-STD-1553/1773 as a standard low-rate interface for spacecraft busses can be realized for high-speed network interfaces. To that end, GSFC is developing hardware and software to support a seamless, space mission IP network based on Ethernet and MIL-STD-1553. The Ethernet network shall connect all flight computers and communications systems using interface standards defined by the CCSDS Standard Onboard Interface (SOIF) Panel. This paper shall discuss the prototyping effort underway at GSFC and expected results.

  16. Size reduction techniques for vital compliant VHDL simulation models

    DOEpatents

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to an instance of a logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
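
    The reduction idea can be sketched compactly: collect every rise/fall delay for each cell instance and keep only a single worst-case pair per instance. The fragment below handles only a toy IOPATH-like format, not full SDF syntax, and the delay values are invented:

    ```python
    # Hedged sketch of the reduction: one worst-case rise/fall pair
    # ("super generics") per instance, from many per-path delay triples.
    import re
    from collections import defaultdict

    sdf_text = """
    (INSTANCE u1) (IOPATH a y (0.12:0.13:0.15) (0.10:0.11:0.14))
    (INSTANCE u1) (IOPATH b y (0.20:0.22:0.25) (0.18:0.19:0.21))
    """

    worst = defaultdict(lambda: [0.0, 0.0])  # instance -> [rise, fall]
    pat = re.compile(r"\(INSTANCE (\w+)\).*?\(([\d.:]+)\) \(([\d.:]+)\)")
    for inst, rise, fall in pat.findall(sdf_text):
        r = float(rise.split(":")[-1])  # take the max corner
        f = float(fall.split(":")[-1])
        worst[inst][0] = max(worst[inst][0], r)
        worst[inst][1] = max(worst[inst][1], f)

    print(dict(worst))  # e.g. {'u1': [0.25, 0.21]}
    ```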

  17. Increasing the availability and usability of terrestrial ecology data through geospatial Web services and visualization tools (Invited)

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2010-12-01

    Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data, and limit data users’ abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create the data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web Service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools allows for publicizing the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web-based graphical user interface called Spatial Data Access Tool (SDAT) that utilizes OGC Web services standards and allows data distribution and consumption for users not familiar with OGC standards. SDAT also allows users to visualize the data set prior to download. Google Earth visualizations of the data set are also provided through SDAT. The use of OGC Web service standards at the ORNL DAAC has enabled an increase in data consumption. In one case, a data set saw a roughly 10-fold increase in downloads through OGC Web services in comparison to the conventional FTP and WWW methods of access. The increase in downloads suggests that users are not only finding the data sets they need but are also able to consume them readily in the format they need.
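
    For readers unfamiliar with OGC services, the requests SDAT issues on a user's behalf are ordinary URLs built from standard key-value parameters. A sketch of a WMS 1.1.1 GetMap URL (the endpoint and layer name are placeholders, not actual ORNL DAAC services):

    ```python
    # Sketch of a standard WMS 1.1.1 GetMap request URL. The endpoint and
    # layer name are placeholders invented for illustration.
    from urllib.parse import urlencode

    endpoint = "https://example.org/wms"  # placeholder WMS endpoint
    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": "net_primary_productivity",  # hypothetical layer
        "srs": "EPSG:4326",
        "bbox": "-180,-90,180,90",
        "width": 720,
        "height": 360,
        "format": "image/png",
    }
    print(endpoint + "?" + urlencode(params))
    ```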

  18. 75 FR 6770 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-10

    ... interface with AUTOM via an Exchange approved proprietary electronic quoting device in eligible options to... to generate and submit option quotations electronically through AUTOM in eligible options to which...

  19. 75 FR 21375 - Self-Regulatory Organizations; NASDAQ OMX PHLX, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-23

    ... submit option quotations electronically through an electronic interface with AUTOM via an Exchange... quotations electronically through AUTOM in eligible options to which such RSQT has been assigned. An RSQT may...

  20. Design and implementation of fishery rescue data mart system

    NASA Astrophysics Data System (ADS)

    Pan, Jun; Huang, Haiguang; Liu, Yousong

    A novel data-mart-based system for the fishery rescue field was designed and implemented. The system runs an ETL process to ingest original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistical interfaces automatically through a wizard-style process. The system provides various forms of reporting files, including Crystal Reports, Flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing inquiry processes, making it possible for nontechnical staff to access customized reports. Characterized by quick configuration, safety, and flexibility, the system has been successfully applied in a city fishery rescue department.
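
    The quick-configuration idea described above can be pictured as a small declarative description of query dimensions that the wizard turns into a report query. The field and table names below are invented for illustration:

    ```python
    # Illustrative sketch: a declarative report configuration transformed
    # into an OLAP-style aggregate query. All names are hypothetical.
    CONFIG = {
        "fact_table": "rescue_events",
        "measure": "COUNT(*)",
        "dimensions": ["year", "port", "vessel_type"],
    }

    def to_sql(cfg):
        dims = ", ".join(cfg["dimensions"])
        return (f"SELECT {dims}, {cfg['measure']} AS total "
                f"FROM {cfg['fact_table']} GROUP BY {dims}")

    print(to_sql(CONFIG))
    ```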

  1. Integrated geometry and grid generation system for complex configurations

    NASA Technical Reports Server (NTRS)

    Akdag, Vedat; Wulf, Armin

    1992-01-01

    A grid generation system was developed that enables grid generation for complex configurations. The system, called ICEM/CFD, is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer-aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, easy interfacing with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling of boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block-to-block interface requirements, and generation of grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements, which is required in an environment where CFD is integrated into a product design cycle.

  2. AVE-SESAME program for the REEDA System

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1981-01-01

    The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Software was converted to AVE/SESAME software systems and interfaced with existing graphics hardware/software available on the REEDA System. Software documentation was provided for existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets in random access format were processed to allow the developed software to access the entire AVE/SESAME data base. The existing software was modified to allow for processing of different AVE/SESAME data set types, including satellite, surface, and radar data.

  3. Translator for Optimizing Fluid-Handling Components

    NASA Technical Reports Server (NTRS)

    Landon, Mark; Perry, Ernest

    2007-01-01

    A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.
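
    As background, a formatted single-block 3-D PLOT3D grid file conventionally stores the dimensions ni, nj, nk followed by all x, then all y, then all z coordinates. A hedged reader sketch under that assumption (real files are often Fortran-unformatted or multi-block, which this does not handle):

    ```python
    # Hedged sketch: read a formatted single-block PLOT3D grid file, assumed
    # to contain "ni nj nk" followed by x, y, z coordinate blocks.
    import numpy as np

    def read_plot3d_ascii(path):
        tokens = np.array(open(path).read().split(), dtype=float)
        ni, nj, nk = tokens[:3].astype(int)
        n = ni * nj * nk
        xyz = tokens[3:3 + 3 * n].reshape(3, nk, nj, ni)  # x, y, z blocks
        return (ni, nj, nk), xyz
    ```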

  4. Comparison of Geometric Design of a Brand of Stainless Steel K-Files: An In Vitro Study.

    PubMed

    Saeedullah, Maryam; Husain, Syed Wilayat

    2018-04-01

    The purpose of this experimental study was to determine the diametric variations of a brand of handheld stainless-steel K-files, acquired from different countries, in accordance with the available standards. Twenty Mani stainless-steel K-files of identical size (ISO #25) were acquired from Pakistan and designated as Group A, while 20 Mani K-files were purchased from London, UK and designated as Group B. Files were assessed using a Nikon B 24V profile projector. Data were statistically compared with ISO 3630-1 and ADA 101 by a one-sample t-test. A significant difference was found between Groups A and B. The average discrepancy of Group A fell within the tolerance limit, while that of Group B exceeded the limit. The findings of this study call attention to adherence to the dimensional standards for stainless-steel endodontic files.
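
    The statistical comparison mentioned above is a one-sample t-test of measured diameters against a nominal value. A sketch with invented measurements (not data from the study), using scipy:

    ```python
    # Illustrative one-sample t-test of file diameters against the nominal
    # ISO size. The measurements below are made-up example values.
    from scipy import stats

    diameters_mm = [0.251, 0.248, 0.253, 0.250, 0.247, 0.255]  # ISO #25 at D0
    nominal_mm = 0.25

    t_stat, p_value = stats.ttest_1samp(diameters_mm, popmean=nominal_mm)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    ```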

  5. 7 CFR 75.12 - When application deemed filed.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false When application deemed filed. 75.12 Section 75.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF...

  6. 7 CFR 75.12 - When application deemed filed.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false When application deemed filed. 75.12 Section 75.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF...

  7. 7 CFR 75.12 - When application deemed filed.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false When application deemed filed. 75.12 Section 75.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF...

  8. 7 CFR 75.12 - When application deemed filed.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false When application deemed filed. 75.12 Section 75.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF...

  9. 7 CFR 75.12 - When application deemed filed.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false When application deemed filed. 75.12 Section 75.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF...

  10. 36 CFR 8.8 - Filing of labor agreements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Filing of labor agreements. 8... LABOR STANDARDS APPLICABLE TO EMPLOYEES OF NATIONAL PARK SERVICE CONCESSIONERS § 8.8 Filing of labor...), concessioners shall file with the Director of the National Park Service a copy of each labor agreement in effect...

  11. Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach

    PubMed Central

    Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.

    2016-01-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a “containerized” approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data “Levels,” each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048
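
    The "Level" idea above can be pictured as a study folder whose XML manifest states its processing level. A toy sketch; the file layout and tag names are assumptions for illustration, not the actual ESS schema:

    ```python
    # Hypothetical illustration: inspect a study manifest and report its
    # processing level. Tag names are invented, not the real ESS schema.
    import xml.etree.ElementTree as ET

    manifest = ET.fromstring(
        "<study><title>demo</title><level>2</level></study>"
    )
    if manifest.findtext("level") == "2":
        print("Data Level 2: standardized PREP preprocessing already applied")
    ```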

  12. A Web Interface for Eco System Modeling

    NASA Astrophysics Data System (ADS)

    McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.

    2012-12-01

    We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than having to deal with complex computer system setup and configuration questions in order to run the models. Through the developed web interface we hide much of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results. At the same time, expert users are given enough freedom to modify specific parameters before the model gets executed. This will become more important as more models are added to the PEcAn workflow and as more data become available when NEON comes online. On the backend, we support the execution of potentially computationally expensive models on different High Performance Computers (HPC) and/or clusters. The system can be configured with a single XML file that gives it the flexibility needed for configuring and running the different models on different systems, using a combination of information stored in a database as well as pointers to files on the hard disk. While the web interface usually creates this configuration file, expert users can still directly edit it to fine-tune the configuration. Once a workflow is finished, the web interface allows for the easy creation of plots over result data while also allowing the user to download the results for further processing. The current workflow in the web interface is a simple linear workflow, but it will be expanded to allow for more complex workflows. We are working with Kepler and Cyberintegrator to allow for these more complex workflows as well as to collect provenance of the workflow being executed. This provenance regarding model executions is stored in a database along with the derived results. All of this information is then accessible using the BETY database web frontend.
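
    The single-XML-file configuration mentioned above can be illustrated with a minimal parse; the tag names here are assumptions for illustration, not PEcAn's actual pecan.xml schema:

    ```python
    # Hypothetical sketch: read a run-configuration XML of the kind the web
    # interface might write. Tag names are invented for illustration.
    import xml.etree.ElementTree as ET

    xml_text = """
    <pecan>
      <model>ED2</model>
      <site><lat>45.8</lat><lon>-90.1</lon></site>
      <host>cluster.example.org</host>
    </pecan>
    """

    root = ET.fromstring(xml_text)
    print("model:", root.findtext("model"))
    print("site:", root.findtext("site/lat"), root.findtext("site/lon"))
    print("run host:", root.findtext("host"))
    ```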

  13. NESSUS/NASTRAN Interface

    NASA Technical Reports Server (NTRS)

    Millwater, Harry; Riha, David

    1996-01-01

    The NESSUS and NASTRAN computer codes were successfully integrated. The enhanced NESSUS code will use NASTRAN for the structural analysis and NESSUS for the probabilistic analysis. Any quantities in the NASTRAN bulk data input can be random variables. Any NASTRAN result that is written to the OUTPUT2 file can be returned to NESSUS as the finite element result. The interfacing between NESSUS and NASTRAN is handled automatically by NESSUS. NESSUS and NASTRAN can be run on different machines using the remote host option.

  14. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    PubMed

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Flight Planning in the Cloud

    NASA Technical Reports Server (NTRS)

    Flores, Sarah L.; Chapman, Bruce D.; Tung, Waye W.; Zheng, Yang

    2011-01-01

    This new interface will enable Principal Investigators (PIs), as well as UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) members, to do their own flight planning and time estimation without having to request flight lines through the science coordinator. It uses an all-in-one Google Maps interface, a JPL-hosted database, and PI flight requirements to design an airborne flight plan. The application will enable users to see their own flight plan being constructed interactively through a map interface, and then the flight planning software will generate all the files necessary for the flight. Afterward, the UAVSAR team can complete the flight request, including calendaring and supplying requisite flight request files in the expected format for processing by NASA's airborne science program. Some of the main features of the interface include drawing flight lines on the map, nudging them, adding them to the current flight plan, and reordering them. The user can also search and select takeoff, landing, and intermediate airports. As the flight plan is constructed, all of its components are constantly being saved to the database, and the estimated flight times are updated. Another feature is the ability to import flight lines from previously saved flight plans. One of the main motivations was to make this Web application as simple and intuitive as possible, while also being dynamic and robust. This Web application can easily be extended to support other airborne instruments.

  16. Accounting Data to Web Interface Using PERL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargeaves, C

    2001-08-13

    This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage Systems (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Scripts that query the database folder to build a list of available databases generate two additional web pages. A master script that is run monthly as part of a cron job, after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server and records run information in a log file. The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site. ''HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time.'' The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.
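
    The filter step described above (a regular expression deciding whether a line is storage data, then accumulating totals) can be sketched briefly. The sketch below uses Python rather than PERL, and the line format is invented for illustration; the real HPSS accounting report layout is controlled by the HPSS developers.

    ```python
    # Sketch of the regex-filter-and-summarize pattern described above.
    # The report line format here is hypothetical.
    import re
    from collections import defaultdict

    report = """\
    HPSS Accounting Report 2001-08-13
    user alice group physics files 120 bytes 5368709120
    user bob   group physics files  30 bytes 1073741824
    end of report
    """

    line_re = re.compile(
        r"user\s+(\w+)\s+group\s+(\w+)\s+files\s+(\d+)\s+bytes\s+(\d+)")

    usage = defaultdict(lambda: {"files": 0, "bytes": 0})
    for line in report.splitlines():
        m = line_re.match(line.strip())
        if not m:          # skip headers and anything that is not storage data
            continue
        user, group, nfiles, nbytes = m.groups()
        usage[group]["files"] += int(nfiles)
        usage[group]["bytes"] += int(nbytes)

    for group, totals in usage.items():
        print(group, totals)
    ```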

  17. Operational Interoperable Web Coverage Service for Earth Observing Satellite Data: Issues and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Yang, W.; Min, M.; Bai, Y.; Lynnes, C.; Holloway, D.; Enloe, Y.; di, L.

    2008-12-01

    In the past few years, there has been growing interest among major earth observing satellite (EOS) data providers in serving data through the interoperable Web Coverage Service (WCS) interface protocol, developed by the Open Geospatial Consortium (OGC). The interface protocol defined in WCS specifications allows client software to make customized requests of multi-dimensional EOS data, including spatial and temporal subsetting, resampling and interpolation, and coordinate reference system (CRS) transformation. A WCS server describes an offered coverage, i.e., a data product, through a response to a client's DescribeCoverage request. The description includes the offered coverage's spatial/temporal extents and resolutions, supported CRSs, supported interpolation methods, and supported encoding formats. Based on such information, a client can request the entire coverage or a subset of it in any spatial/temporal resolution and in any one of the supported CRSs, formats, and interpolation methods. When implementing a WCS server, a data provider has different approaches to presenting its data holdings to clients. One of the most straightforward, and commonly used, approaches is to offer individual physical data files as separate coverages. Such an implementation, however, will result in too many offered coverages for large data holdings, and it also cannot fully present the relationship among different, but spatially and/or temporally associated, data files. It is desirable to disconnect offered coverages from physical data files so that the former are more coherent, especially in the spatial and temporal domains. Therefore, some servers offer one single coverage for a set of spatially coregistered time series data files, such as a daily global precipitation coverage linked to many global single-day precipitation files; others offer one single coverage for multiple temporally coregistered files together forming a large spatial extent. In either case, a server needs to assemble an output coverage in real time by combining a potentially large number of physical files, which can be operationally difficult. The task becomes more challenging if an offered coverage involves spatially and temporally unregistered physical files. In this presentation, we will discuss issues and lessons learned in providing NASA's AIRS Level 2 atmospheric products, which are in satellite swath CRS and in 6-minute segment granule files, as virtual global coverages. We'll discuss the WCS server's on-the-fly georectification, mosaicking, quality screening, performance, and scalability.
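
    For readers unfamiliar with the request style discussed above, a WCS key-value-pair GetCoverage request can be assembled as below. The endpoint URL and coverage name are hypothetical; the parameter names follow the OGC WCS 1.0.0 specification.

    ```python
    # Sketch of a WCS 1.0.0 KVP GetCoverage request. The endpoint and
    # coverage name are hypothetical; parameter names are from the OGC spec.
    from urllib.parse import urlencode

    endpoint = "https://wcs.example.gov/wcs"   # hypothetical server
    params = {
        "service": "WCS",
        "version": "1.0.0",
        "request": "GetCoverage",
        "coverage": "AIRS_L2_Temperature",     # hypothetical coverage name
        "crs": "EPSG:4326",
        "bbox": "-180,-90,180,90",
        "time": "2008-07-01",
        "width": 360,
        "height": 180,
        "format": "GeoTIFF",
    }
    print(endpoint + "?" + urlencode(params))
    ```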

  18. NELS 2.0 - A general system for enterprise wide information management

    NASA Technical Reports Server (NTRS)

    Smith, Stephanie L.

    1993-01-01

    NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for development of NELS applications. The data manager is an interface between the session manager and the structured data system. The center of the structured data system is the Wide Area Information Server. This system architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.

  19. Standardized Modular Power Interfaces for Future Space Explorations Missions

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard

    2015-01-01

    Earlier studies show that future human exploration missions are composed of multi-vehicle assemblies with interconnected electric power systems. Some vehicles are often intended to serve as flexible multi-purpose or multi-mission platforms. This drives the need for power architectures that can be reconfigured to support this level of flexibility. Power system developmental costs can be reduced, program wide, by utilizing a common set of modular building blocks. Further, there are mission operational and logistics cost benefits of using a common set of modular spares. These benefits are the goals of the Advanced Exploration Systems (AES) Modular Power System (AMPS) project. A common set of modular blocks requires a substantial level of standardization in terms of the Electrical, Data System, and Mechanical interfaces. The AMPS project is developing a set of proposed interface standards that will provide useful guidance for modular hardware developers but not needlessly constrain technology options or limit future growth in capability. In 2015 the AMPS project focused on standardizing the interfaces between the elements of spacecraft power distribution and energy storage. The development of the modular power standard starts with establishing mission assumptions and ground rules to define the design application space. The standards are defined in terms of AMPS objectives including Commonality, Reliability-Availability, Flexibility-Configurability and Supportability-Reusability. The proposed standards are aimed at assembly and sub-assembly level building blocks. AMPS plans to adopt existing standards for spacecraft command and data, software, network interfaces, and electrical power interfaces where applicable. Other standards, including structural encapsulation, heat transfer, and fluid transfer, are governed by launch and spacecraft environments and bound by practical limitations of weight and volume. Developing these mechanical interface standards is more difficult but is an essential part of defining the physical building blocks of modular power. This presentation describes the AMPS project's progress towards standardized modular power interfaces.

  20. Standard Electronic Format Specification for Tank Characterization Data Loader Version 3.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ADAMS, M.R.

    2001-01-31

    The purpose of this document is to describe the standard electronic format for data files that will be sent for entry into the Tank Characterization Database (TCD). There are 2 different file types needed for each data load: (1) Analytical Results and (2) Sample Descriptions.

  1. 7 CFR 75.27 - How to file an appeal.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false How to file an appeal. 75.27 Section 75.27 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND...

  2. 7 CFR 75.39 - Use of file samples.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Use of file samples. 75.39 Section 75.39 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND TH...

  3. 7 CFR 75.39 - Use of file samples.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Use of file samples. 75.39 Section 75.39 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND TH...

  4. 7 CFR 75.27 - How to file an appeal.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false How to file an appeal. 75.27 Section 75.27 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND...

  5. 7 CFR 75.27 - How to file an appeal.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false How to file an appeal. 75.27 Section 75.27 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND...

  6. 7 CFR 75.39 - Use of file samples.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Use of file samples. 75.39 Section 75.39 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND TH...

  7. 7 CFR 75.27 - How to file an appeal.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false How to file an appeal. 75.27 Section 75.27 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND...

  8. DockoMatic 2.0: High Throughput Inverse Virtual Screening and Homology Modeling

    PubMed Central

    Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T.; McDougal, Owen M.; Andersen, Timothy L.

    2013-01-01

    DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly Graphical User Interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to: (1) conduct high throughput Inverse Virtual Screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying a receptor(s), ligand(s), grid parameter file(s), and docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories, and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELLER programs, and guides the user through the necessary steps to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third party programs. DockoMatic is a free comprehensive molecular docking software program for all levels of scientists in both research and education. PMID:23808933

  9. Battlefield decision aid for acoustical ground sensors with interface to meteorological data sources

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Noble, John M.; VanAartsen, Bruce H.; Szeto, Gregory L.

    2001-08-01

    The performance of acoustical ground sensors depends heavily on the local atmospheric and terrain conditions. This paper describes a prototype physics-based decision aid, called the Acoustic Battlefield Aid (ABFA), for predicting these environmental effects. ABFA integrates advanced models for acoustic propagation, atmospheric structure, and array signal processing into a convenient graphical user interface. The propagation calculations are performed in the frequency domain on user-definable target spectra. The solution method involves a parabolic approximation to the wave equation combined with a terrain diffraction model. Sensor performance is characterized with Cramer-Rao lower bounds (CRLBs). The CRLB calculations include randomization of signal energy and wavefront orientation resulting from atmospheric turbulence. Available performance characterizations include signal-to-noise ratio, probability of detection, direction-finding accuracy for isolated receiving arrays, and location-finding accuracy for networked receiving arrays. A suite of integrated tools allows users to create new target descriptions from standard digitized audio files and to design new sensor array layouts. These tools optionally interface with the ARL Database/Automatic Target Recognition (ATR) Laboratory, providing access to an extensive library of target signatures. ABFA also includes a Java-based capability for network access of near real-time data from surface weather stations or forecasts from the Army's Integrated Meteorological System. As an example, the detection footprint of an acoustical sensor, as it evolves over a 13-hour period, is calculated.
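
    For reference, the standard scalar form of the Cramer-Rao lower bound mentioned above is given below; this is the textbook definition, not the paper's multivariate, turbulence-randomized formulation.

    ```latex
    % Standard scalar Cramer-Rao lower bound; the paper's multivariate,
    % turbulence-randomized version is not reproduced here.
    \operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
    \qquad
    I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,
    \ln p(\mathbf{x};\theta)\right)^{2}\right]
    ```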

  10. FTOOLS: A general package of software to manipulate FITS files

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc

    1999-12-01

    FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column, or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and display of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a stand-alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.

  11. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increases in susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is storing input data in a structured manner. More importantly, XML allows not just storing of data but also describing what each of the data items is. That XML file contains information useful for rendering the data by other applications. It also then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes, and (2) as a library, it automatically fills the objects with the input data values.
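
    The generation step can be illustrated with a toy sketch: a hypothetical parameter spec in XML and a few lines that emit a C++ struct from it. The schema (param name/type/default) is invented for this example and is not the tool's actual input format.

    ```python
    # Toy illustration of the XML-to-C++ idea described above. The schema
    # (<param name=... type=... default=...>) is invented for this sketch.
    import xml.etree.ElementTree as ET

    spec = ET.fromstring("""
    <inputs struct="SimulationInput">
      <param name="nx" type="int" default="128"/>
      <param name="dt" type="double" default="0.001"/>
    </inputs>
    """)

    lines = [f"struct {spec.get('struct')} {{"]
    for p in spec.iter("param"):
        lines.append(f"    {p.get('type')} {p.get('name')} = {p.get('default')};")
    lines.append("};")
    print("\n".join(lines))   # emits a C++ struct generated from the XML
    ```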

  12. ISMRM Raw data format: A proposed standard for MRI raw datasets.

    PubMed

    Inati, Souheil J; Naegele, Joseph D; Zwart, Nicholas R; Roopchansingh, Vinai; Lizak, Martin J; Hansen, David C; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E; Sørensen, Thomas S; Hansen, Michael S

    2017-01-01

    This work proposes the ISMRM Raw Data format as a common MR raw data format, which promotes algorithm and data sharing. A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using magnetic resonance imaging scanners from four vendors, converted to ISMRM Raw Data format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. The proposed raw data format solves a practical problem for the magnetic resonance imaging community. It may serve as a foundation for reproducible research and collaborations. The ISMRM Raw Data format is a completely open and community-driven format, and the scientific community is invited (including commercial vendors) to participate either as users or developers. Magn Reson Med 77:411-421, 2017. © 2016 Wiley Periodicals, Inc.
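
    ISMRM Raw Data files are HDF5 containers, so a generic HDF5 reader can at least inspect them. The group paths below ("dataset/xml" for the flexible header, "dataset/data" for the tagged acquisitions) reflect commonly used layout and are an assumption here; the official C++/MATLAB/Python APIs are the supported way to read these files.

    ```python
    # Hedged sketch: peeking inside an ISMRM Raw Data (HDF5) file with h5py.
    # The "dataset/xml" and "dataset/data" paths are an assumption; use the
    # official APIs for real work.
    import h5py

    with h5py.File("acquisition.h5", "r") as f:       # hypothetical file name
        f.visit(print)                                # list the HDF5 tree
        if "dataset/xml" in f:
            header = f["dataset/xml"][0]              # flexible XML header
            print(header[:200])
        if "dataset/data" in f:
            print("k-space frames:", f["dataset/data"].shape)
    ```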

  13. Visualizing NetCDF Files by Using the EverVIEW Data Viewer

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Over the past few years, modelers in South Florida have started using Network Common Data Form (NetCDF) as the standard data container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite attributes which make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages which can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical displays of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium-compliant Web-mapping service client and charting interface allow the user to view Web-available spatial data as additional map overlays and provide simple charting visualizations of NetCDF grid values.
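
    The self-describing quality mentioned above is what makes viewers like this possible: dimensions, variables, and attributes can all be discovered from the file itself. A brief sketch using the netCDF4 Python package follows; the file and variable names are hypothetical.

    ```python
    # Sketch of NetCDF's self-describing access via the netCDF4 package.
    # File and variable names are hypothetical.
    from netCDF4 import Dataset

    with Dataset("stage_output.nc") as nc:
        print(nc.dimensions)                    # named dimensions
        for name, var in nc.variables.items():
            print(name, var.dimensions, var.shape)
        stage = nc.variables["water_stage"]     # hypothetical variable
        print(stage[0, :5, :5])                 # first time step, corner cells
    ```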

  14. FDDI information management system for centralizing interactive, computerized multimedia clinical experiences in pediatric rheumatology/Immunology.

    PubMed

    Rouhani, R; Cronenberger, H; Stein, L; Hannum, W; Reed, A M; Wilhelm, C; Hsiao, H

    1995-01-01

    This paper describes the design, authoring, and development of interactive, computerized, multimedia clinical simulations in pediatric rheumatology/immunology and related musculoskeletal diseases, the development and implementation of a high speed information management system for their centralized storage and distribution, and analytical methods for evaluating the total system's educational impact on medical students and pediatric residents. An FDDI fiber optic network with client/server/host architecture is the core. The server houses digitized audio, still-image video clips and text files. A host station houses the DB2/2 database containing case-associated labels and information. Cases can be accessed from any workstation via a customized interface in AVA/2 written specifically for this application. OS/2 Presentation Manager controls, written in C, are incorporated into the interface. This interface allows SQL searches and retrievals of cases and case materials. In addition to providing user-directed clinical experiences, this centralized information management system provides designated faculty with the ability to add audio notes and visual pointers to image files. Users may browse through case materials, mark selected ones and download them for utilization in lectures or for editing and converting into 35mm slides.

  15. DXRaySMCS: a user-friendly interface developed for prediction of diagnostic radiology X-ray spectra produced by Monte Carlo (MCNP-4C) simulation.

    PubMed

    Bahreyni Toossi, M T; Moradi, H; Zare, H

    2008-01-01

    In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code for diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport, and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78 spectra. Generally, there is good agreement between the two sets of spectra. No statistically significant differences have been observed between the IPEM78 reported spectra and the simulated spectra generated in this study.
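
    The three-step pattern above (edit input, launch the code, post-process output) maps naturally onto a small wrapper. The sketch below is purely illustrative: the executable name, input placeholder, and output parsing are all hypothetical, and MCNP-4C's real input/output formats are documented in its manual.

    ```python
    # Sketch of the three-step wrapper pattern described above. Executable
    # name, template placeholder, and output parsing are hypothetical.
    import re
    import subprocess
    from pathlib import Path

    def run_case(kvp, template="mcnp_template.inp"):
        # (i) modify the MCNP input file for this tube voltage
        deck = Path(template).read_text().replace("@KVP@", str(kvp))
        Path("case.inp").write_text(deck)
        # (ii) launch the transport simulation (hypothetical invocation)
        subprocess.run(["mcnp4c", "i=case.inp", "o=case.out"], check=True)
        # (iii) reduce the output to relative photon number per energy bin
        text = Path("case.out").read_text()
        return [float(x) for x in
                re.findall(r"^\s*[\d.E+-]+\s+([\d.E+-]+)", text, re.MULTILINE)]
    ```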

  16. User's manual for the HYPGEN hyperbolic grid generator and the HGUI graphical user interface

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Chiu, Ing-Tsau; Buning, Pieter G.

    1993-01-01

    The HYPGEN program is used to generate a 3-D volume grid over a user-supplied single-block surface grid. This is accomplished by solving the 3-D hyperbolic grid generation equations consisting of two orthogonality relations and one cell volume constraint. In this user manual, the required input files, parameters, and output files are described. Guidelines on how to select the input parameters are given. Illustrated examples are provided showing a variety of topologies and geometries that can be treated. HYPGEN can be used in stand-alone mode as a batch program or it can be called from within a graphical user interface, HGUI, that runs on Silicon Graphics workstations. This user manual provides a description of the menus, buttons, sliders, and type-in fields in HGUI for users to enter the parameters needed to run HYPGEN. Instructions are given on how to configure the interface to allow HYPGEN to run either locally or on a faster remote machine through the use of shell scripts on UNIX operating systems. The volume grid generated is copied back to the local machine for visualization using a built-in hook to PLOT3D.

  17. Spacelab payload accommodation handbook. Appendix B: Structure interface definition module

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The mechanical interfaces between Spacelab and its payload are defined. The envelopes available for mounting payload hardware are specified together with the standard structural attachment interfaces. Overall load capabilities and the local load capabilities for individual attachment interfaces are defined for the standard mounting locations. The mechanical environment is defined and the mechanical interfaces between the payload and the EPDS, CDMS and ECS are included.

  18. The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide a quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  19. ConKit: a python interface to contact predictions.

    PubMed

    Simkovic, Felix; Thomas, Jens M H; Rigden, Daniel J

    2017-07-15

    Recent advances in protein residue contact prediction algorithms have led to the emergence of many new methods and a variety of file formats. We present ConKit, an open source, modular and extensible Python interface which allows facile conversion between formats and provides an interface to analyses of sequence alignments and sets of contact predictions. ConKit is available via the Python Package Index. The documentation can be found at http://www.conkit.org. ConKit is licensed under the BSD 3-Clause license. hlfsimko@liverpool.ac.uk or drigden@liverpool.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
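
    A typical use of the library is a one-call format conversion. The sketch below follows the conversion pattern described in ConKit's documentation, but the exact function name, file names, and format labels here should be treated as assumptions to be checked against http://www.conkit.org.

    ```python
    # Hedged example: the function name and format labels below are
    # assumptions to be verified against the ConKit documentation.
    import conkit.io

    # Convert a contact prediction between two supported file formats.
    conkit.io.convert("toxd.mat", "ccmpred", "toxd.rr", "casprr")
    ```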

  20. Biotool2Web: creating simple Web interfaces for bioinformatics applications.

    PubMed

    Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg

    2006-01-01

    Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/. Contact: Georg Fuellen (fuellen@alum.mit.edu).

  1. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    PubMed

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
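
    The task-parallel pattern described above (independent docking jobs spread across ranks, one result reduction at the end) can be sketched with mpi4py. This is an illustration of the pattern only, not the Autodock4.lga.MPI source; compound names and the dock() stub are placeholders.

    ```python
    # Pattern sketch only (not the Autodock4.lga.MPI source): round-robin
    # assignment of independent docking tasks to MPI ranks.
    # Run with, e.g.: mpiexec -n 8 python dock_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical compound list; the paper preorders ligands from the
    # most to the least flexible before assignment.
    compounds = ["ligand_%06d" % i for i in range(1000)]

    def dock(name):
        """Placeholder for one independent docking computation."""
        return 0.0

    scores = [dock(name) for name in compounds[rank::size]]
    best = comm.reduce(min(scores), op=MPI.MIN, root=0)
    if rank == 0:
        print("best score across all ranks:", best)
    ```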

  2. MATLAB software for viewing and processing u-channel and discrete sample paleomagnetic data: UPmag and DPmag

    NASA Astrophysics Data System (ADS)

    Xuan, C.; Channell, J. E.

    2009-12-01

    With the increasing efficiency of acquiring paleomagnetic data from u-channel or discrete samples, large volumes of data can be accumulated within a short time period. It is often critical to visualize and process these data in “real time” as measurements proceed, so that the measurement plan can be dictated accordingly. New MATLAB™ software packages, UPmag and DPmag, are introduced for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u-channel and discrete samples, respectively. UPmag comprises three MATLAB™ graphical user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer as well as to correct detected flux-jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays orthogonal projections and stepwise intensity plots for any position along the u-channel sample. UDIR can also display data on equal area stereographic projections and draw virtual geomagnetic poles (VGP) on various map projections. UINT provides a convenient platform to evaluate relative paleointensity estimates using the *.int files that can be exported from UVIEW. DPmag comprises two MATLAB™ graphical user interfaces: DDIR and DFISHER. DDIR reads output files from the discrete sample magnetometer measurement system. DDIR allows users to calculate component directions for each discrete sample, to plot the demagnetization data on orthogonal projections and equal area projections, as well as to show the stepwise intensity data. DFISHER reads the *.pca file exported from DDIR, calculates VGP and Fisher statistics for data from selected groups of samples, and plots the results on equal area projections and as VGPs on a range of map projections. Data and plots from UPmag and DPmag can be exported to various file formats.

  3. Orbiter middeck/payload standard interfaces control document

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The interfaces which shall be provided by the baseline shuttle mid-deck for payload use within the mid-deck area are defined, as well as all constraints which shall be observed by all the users of the defined interfaces. Commonality was established with respect to analytical approaches, analytical models, technical data and definitions for integrated analyses by all the interfacing parties. Any payload interfaces that are out of scope with the standard interfaces defined shall be defined in a Payload Unique Interface Control Document (ICD) for a given payload. Each Payload Unique ICD will have paragraphs comparable to this ICD and will have a corresponding notation of A, for applicable; N/A, for not applicable; N, for note added for explanation; and E, for exception. On any flight, the STS reserves the right to assign locations to both payloads mounted on an adapter plate(s) and payloads stored within standard lockers. Specific location requests and/or requirements exceeding standard mid-deck payload requirements may result in a reduction in manifesting opportunities.

  4. 75 FR 35011 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-21

    ... Processes Manual Incorporating Proposed Revisions to the Reliability Standards Development Process. Filed..., June 21, 2010. Take notice that the Commission received the following electric reliability filings: Docket Numbers: RR10-12-000. Applicants: North American Electric Reliability Corp. Description: Petition...

  5. 37 CFR 401.16 - Electronic filing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Electronic filing. 401.16 Section 401.16 Patents, Trademarks, and Copyrights NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY... GOVERNMENT GRANTS, CONTRACTS, AND COOPERATIVE AGREEMENTS § 401.16 Electronic filing. Unless otherwise...

  6. 37 CFR 401.16 - Electronic filing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Electronic filing. 401.16 Section 401.16 Patents, Trademarks, and Copyrights NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY... GOVERNMENT GRANTS, CONTRACTS, AND COOPERATIVE AGREEMENTS § 401.16 Electronic filing. Unless otherwise...

  7. CFDP: The Revised Standard and Some Handy Lab Tools

    NASA Astrophysics Data System (ADS)

    Montesinos, Juan Antonio; Valverde, Alberto; Taylor, Chris; Magistrati, Giorgio

    2014-08-01

    The original recommendation for the CCSDS File Delivery Protocol (CFDP) was published in 2002 and since then it has been adopted by many NASA missions for transferring files to and from the flight segment. Conversely, ESA missions have tended to rely on adaptation of the ECSS Packet Utilisation Standard. However, there are now ESA missions under design that will be using CFDP as the standard mechanism for file transfer. The first mission that will use CFDP as its file transfer protocol is Euclid, to be launched in 2020 and destined to orbit the second Lagrange point (L2). The CFDP engine will be integrated in the Euclid mass memory, allowing the large data files produced by the scientific instruments to be downloaded directly on a Ka-band link. Moreover, it has also been proposed for use in the JUICE mission, which will study Jupiter's moons. Due to the considerable distance from Earth, JUICE has extremely challenging data transfer requirements, but due to the flexibility of CFDP the requirements of both missions can be met. This report aims to present an overview of CFDP, the new modifications presently proposed to the standard, and the tools that the Data System division at ESTEC is using for simulation, testing and verification.

  8. ISA-TAB-Nano: a specification for sharing nanomaterial research data in spreadsheet-based format.

    PubMed

    Thomas, Dennis G; Gaheen, Sharon; Harper, Stacey L; Fritts, Martin; Klaessig, Fred; Hahn-Dantona, Elizabeth; Paik, David; Pan, Sue; Stafford, Grace A; Freund, Elaine T; Klemm, Juli D; Baker, Nathan A

    2013-01-14

    The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly-integrated, and not suitable for meaningful interpretation and re-use of the data. Specifically, in its current state, data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification; while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we have discussed the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption.
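
    Because ISA-TAB-Nano files are ordinary tab-delimited spreadsheets, they can be read with standard tooling. A brief sketch follows; the column headings here are invented for illustration and do not reproduce the specification's actual field names.

    ```python
    # Sketch of reading an ISA-TAB-style tab-delimited file. The column
    # headings are invented; real fields are defined by the ISA-TAB-Nano spec.
    import csv
    import io

    material_tsv = io.StringIO(
        "Material Name\tType\tDiameter\tDiameter Unit\n"
        "GNP-1\tgold nanoparticle\t13\tnm\n"
    )

    for row in csv.DictReader(material_tsv, delimiter="\t"):
        print(row["Material Name"], row["Diameter"], row["Diameter Unit"])
    ```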

  9. ISA-TAB-Nano: A Specification for Sharing Nanomaterial Research Data in Spreadsheet-based Format

    PubMed Central

    2013-01-01

    Background and motivation The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly-integrated, and not suitable for meaningful interpretation and re-use of the data. Specifically, in its current state, data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. Results We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification; while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we have discussed the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. Conclusion The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption. PMID:23311978

  10. SEGY to ASCII: Conversion and Plotting Program

    USGS Publications Warehouse

    Goldman, Mark R.

    1999-01-01

    This report documents a computer program to convert standard 4-byte, IBM floating point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded to any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1, and use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high-quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu Plotting the seismic data requires the GMT plotting package. The GMT plotting package may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/
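
    The conversion hinges on decoding 4-byte IBM single-precision floats, which differ from IEEE 754: one sign bit, a 7-bit base-16 exponent biased by 64, and a 24-bit fraction. A self-contained sketch of the standard decoding (not the program's own source) follows.

    ```python
    # Decoding a 4-byte IBM single-precision float, the format named above:
    # 1 sign bit, 7-bit base-16 exponent (bias 64), 24-bit fraction.
    import struct

    def ibm_to_float(word):
        (u,) = struct.unpack(">I", word)        # big-endian 32-bit word
        sign = -1.0 if u >> 31 else 1.0
        exponent = (u >> 24) & 0x7F             # power of 16, biased by 64
        fraction = (u & 0x00FFFFFF) / float(1 << 24)
        return sign * fraction * 16.0 ** (exponent - 64)

    # 0x42640000 encodes 100.0: fraction 0x640000/2**24 = 0.390625,
    # and 16**(66 - 64) = 256, so 0.390625 * 256 = 100.0.
    print(ibm_to_float(bytes.fromhex("42640000")))   # -> 100.0
    ```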

  11. Generation new MP3 data set after compression

    NASA Astrophysics Data System (ADS)

    Atoum, Mohammed Salem; Almahameed, Mohammad

    2016-02-01

    The success of audio steganography techniques depends on ensuring the imperceptibility of the embedded secret message in the stego file and on withstanding any form of intentional or unintentional degradation of the secret message (robustness). Crucial to this is the use of a digital audio file such as an MP3 file, which comes in different compression rates; research studies have shown that performing steganography in the MP3 format after compression is the most suitable approach. Unfortunately, until now researchers have been unable to test and implement their algorithms because no standard data set of MP3 files after compression has been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.
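
    Producing such a data set amounts to re-encoding source audio at a range of bitrates. A minimal sketch using the ffmpeg command-line tool (assumed to be installed) follows; the file names are hypothetical.

    ```python
    # Sketch of generating MP3 variants at several compression rates with
    # ffmpeg (assumed installed); file names are hypothetical.
    import subprocess

    bitrates = ["64k", "128k", "192k", "320k"]
    for rate in bitrates:
        subprocess.run(
            ["ffmpeg", "-y", "-i", "source.wav", "-b:a", rate,
             f"out_{rate}.mp3"],
            check=True,
        )
    ```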

  12. Leveraging Open Standard Interfaces in Providing Efficient Discovery, Retrieval, and Information of NASA-Sponsored Observations and Predictions

    NASA Astrophysics Data System (ADS)

    Cole, M.; Alameh, N.; Bambacus, M.

    2006-05-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. The gateway (online at http://esg.gsfc.nasa.gov) acts as a flexible and searchable registry of NASA-related resources (files, services, models, etc) and allows scientists, decision makers and others to discover and retrieve a wide variety of observations and predictions of natural and human phenomena related to Earth Science from NASA and other sources. To support the goals of the Applied Sciences national applications, GIO staff is also working with the national applications communities to identify opportunities where open standards-based discovery and access to NASA data can enhance the decision support process of the national applications. This paper describes the work performed to-date on that front, and summarizes key findings in terms of identified data sources and benefiting national applications. The paper also highlights the challenges encountered in making NASA-related data accessible in a cross-cutting fashion and identifies areas where interoperable approaches can be leveraged.

  13. 75 FR 14386 - Interpretation of Transmission Planning Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... created electronically using word processing software should be filed in native applications or print-to.... FERC, 564 F.3d 1342 (DC Cir. 2009). \\6\\ Mandatory Reliability Standards for the Bulk-Power System... print-to-PDF format and not in a scanned format. Commenters filing electronically do not need to make a...

  14. 'Best' Practices for Aggregating Subset Results from Archived Datasets

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Perez, J.

    2013-12-01

    In response to the exponential growth in science data analysis and visualization capabilities, data centers have been developing new delivery mechanisms to package and deliver large volumes of aggregated subsets of archived data. New standards are evolving to help data providers and application programmers deal with the growing needs of the science community. These standards evolve from the best practices gleaned from new products and capabilities. The NASA Atmospheric Sciences Data Center (ASDC) has developed and deployed production provider-specific search and subset web applications for the CALIPSO, CERES, TES, and MOPITT missions. This presentation explores several use cases that leverage aggregated subset results and examines the standards and formats ASDC developers applied to the delivered files, as well as the implementation strategies for subsetting and processing the aggregated products. The following topics will be addressed: - Applications of NetCDF CF conventions to aggregated level 2 satellite subsets - Data-provider-specific format requirements vs. generalized standards - Organization of the file structure of aggregated NetCDF subset output - Global attributes of individual subsetted files vs. aggregated results - Specific applications and framework used for subsetting and delivering derivative data files
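
    To make the CF-conventions topic above concrete, the sketch below writes a tiny aggregated subset with CF-style global and variable attributes using the netCDF4 package. The attribute values and variable name are illustrative only, not ASDC's actual product layout.

    ```python
    # Sketch of writing an aggregated subset with CF-style metadata using
    # the netCDF4 package. Attribute values are illustrative only.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("subset.nc", "w") as nc:
        nc.Conventions = "CF-1.6"
        nc.title = "Aggregated level 2 subset (example)"
        nc.createDimension("time", None)        # unlimited aggregation axis
        t = nc.createVariable("time", "f8", ("time",))
        t.units = "hours since 2013-01-01 00:00:00"
        v = nc.createVariable("aerosol_od", "f4", ("time",))
        v.long_name = "aerosol optical depth"   # hypothetical variable
        t[:] = np.arange(24)
        v[:] = np.random.rand(24).astype("f4")
    ```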

  15. The Jade File System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.
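
    The private-name-space idea above (per-user mount points, with logical name spaces nesting inside one another) can be modeled with a toy resolver. This is a data-structure illustration only, not Jade's implementation; the backend identifiers are hypothetical.

    ```python
    # Toy model of a private logical name space with mount points; this is
    # not Jade's implementation, just the idea described above.
    class NameSpace:
        def __init__(self):
            self.mounts = {}                     # logical prefix -> backend

        def mount(self, prefix, backend):
            self.mounts[prefix] = backend        # backend may be a NameSpace

        def resolve(self, path):
            # Longest-prefix match, so name spaces can nest in one another.
            prefix = max((p for p in self.mounts if path.startswith(p)),
                         key=len, default=None)
            if prefix is None:
                raise FileNotFoundError(path)
            backend = self.mounts[prefix]
            rest = path[len(prefix):]
            if isinstance(backend, NameSpace):
                return backend.resolve(rest)
            return backend, rest                 # e.g. ("ftp://host", "/a/b")

    ns = NameSpace()
    ns.mount("/unix", "ufs://localhost")
    ns.mount("/remote", "ftp://archive.example.org")
    print(ns.resolve("/remote/pub/data.txt"))
    ```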

  16. XMGR5 users manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, K.R.; Fisher, J.E.

    1997-03-01

    ACE/gr is an XY plotting tool for workstations or X terminals running X. A few of its features are: user-defined scaling, tick marks, labels, symbols, line styles, and colors; batch mode for unattended plotting; reading and writing of parameters used during a session; polynomial regression, splines, running averages, DFT/FFT, and cross- and auto-correlation; and hardcopy support for PostScript, HP-GL, and FrameMaker .mif formats. While ACE/gr has a convenient point-and-click interface, most parameter settings and operations are also available through a command line interface (found in Files/Commands).

  17. Dynamic Non-Hierarchical File Systems for Exascale Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Darrell E.; Miller, Ethan L

    This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications. To achieve this goal we proposed to develop the first HEC-targeted file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community: while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40-year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current HEC file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, due to limited usefulness, limited scalability, or both. Our research addressed several key issues:
    - High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating the indexes used to improve search.
    - Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system.
    - Scalable indexing: indexes optimized for integration with the file system.
    - Dynamic file system structure: dynamic directories similar to those in semantic file systems, provided as the native organization rather than a feature grafted onto a conventional system.
    In addition to these goals, the research effort included evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
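    To make the metadata-harvesting and indexing issues concrete, the toy Python sketch below extracts a few attributes from files as they are scanned and keeps an inverted index for attribute search; it illustrates the idea only, not the project's in-file-system, real-time design.

        import os
        import time

        index = {}   # attribute term -> set of file paths

        def harvest(path):
            """Extract simple metadata attributes from one file."""
            st = os.stat(path)
            terms = {
                "ext:" + os.path.splitext(path)[1].lstrip("."),
                "size:" + ("large" if st.st_size > 1 << 20 else "small"),
                "year:" + time.strftime("%Y", time.localtime(st.st_mtime)),
            }
            for term in terms:
                index.setdefault(term, set()).add(path)

        def search(*terms):
            """Files matching all attribute terms (set intersection)."""
            sets = [index.get(t, set()) for t in terms]
            return set.intersection(*sets) if sets else set()

        # Harvest a directory tree, then search by attributes instead
        # of by hierarchical path name.
        for root, _dirs, files in os.walk("."):
            for name in files:
                harvest(os.path.join(root, name))
        print(search("ext:txt", "size:small"))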

  18. Highly parallel reconfigurable computer architecture for robotic computation having plural processor cells each having right and left ensembles of plural processors

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1994-01-01

    In a computer having a large number of single-instruction, multiple-data (SIMD) processors, each SIMD processor has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under the control of two slave control units, which control the register file units connected to respective ones of the two sets of processor elements. Depending upon which register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within a SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.

  19. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    PubMed

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool for viewing, filtering, and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases, including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphical interface or the command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.
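    The kind of cross-table query TabSQL enables can be sketched as follows. TabSQL itself is MySQL-based and accepts such queries through its interface without programming; the snippet below uses Python's built-in sqlite3 module only to keep the example self-contained, and the table and column names are hypothetical.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE user_hits (gene_id TEXT, fold_change REAL);
            CREATE TABLE go_annot  (gene_id TEXT, go_term TEXT);
            INSERT INTO user_hits VALUES ('ENSG01', 2.4), ('ENSG02', 0.3);
            INSERT INTO go_annot  VALUES ('ENSG01', 'GO:0006915'),
                                         ('ENSG02', 'GO:0008150');
        """)

        # Annotate the user's hits with GO terms by joining the user
        # table against the imported annotation table.
        query = """
            SELECT u.gene_id, u.fold_change, g.go_term
            FROM user_hits u JOIN go_annot g ON u.gene_id = g.gene_id
            WHERE u.fold_change > 1.0
        """
        for row in con.execute(query):
            print(row)   # -> ('ENSG01', 2.4, 'GO:0006915')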

  20. TabSQL: a MySQL tool to facilitate mapping user data to public databases

    PubMed Central

    2010-01-01

    Background: With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. Results: We developed TabSQL, a MySQL-based application tool for viewing, filtering, and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases, including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphical interface or the command line. Conclusions: TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data. PMID:20573251
