Transported Geothermal Energy Technoeconomic Screening Tool - Calculation Engine
Liu, Xiaobing
2016-09-21
This calculation engine estimates the technoeconomic feasibility of transported geothermal energy projects. The TGE screening tool (geotool.exe) reads its input from an input file (input.txt) and lists results in an output file (output.txt); both files reside in the same folder as geotool.exe. To use the tool, prepare an input file containing adequate information about the case in the format explained below and place it in the same folder as geotool.exe. Executing geotool.exe then generates an output.txt file in the same folder containing all key calculation results. The format and content of the output file are also explained below.
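As a rough illustration of that round trip, the following Python sketch drives the executable and reads back the results. The file names input.txt and output.txt come from the description above; the driver itself (paths, error handling) is a hypothetical convenience, not part of the tool:

```python
import subprocess
from pathlib import Path

tool_dir = Path(".")  # folder holding geotool.exe, input.txt, output.txt

# The case description must already be in input.txt, in the documented format.
assert (tool_dir / "input.txt").exists(), "prepare input.txt first"

# Run the screening tool; it writes output.txt next to the executable.
subprocess.run([str(tool_dir / "geotool.exe")], cwd=tool_dir, check=True)

# Read back the key calculation results as plain text.
print((tool_dir / "output.txt").read_text())
```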
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format MEDIC Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data
NASA Technical Reports Server (NTRS)
Maine, Richard E.
1987-01-01
This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
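The extract-and-interpolate idea is easy to picture outside Fortran. Below is a conceptual Python sketch only; the function name and signature are invented for illustration and are not GetData's interface:

```python
import numpy as np

def extract(time, signals, names, start, stop, out_dt):
    """Select named signals, clip to [start, stop), and interpolate
    them onto a common, evenly spaced output time base."""
    t_out = np.arange(start, stop, out_dt)
    return t_out, {n: np.interp(t_out, time, signals[n]) for n in names}

# Example: pick one signal from a small synthetic time history.
t = np.linspace(0.0, 10.0, 501)
sig = {"altitude": 1000.0 + 5.0 * t, "speed": 100.0 + np.sin(t)}
t_out, sel = extract(t, sig, ["speed"], start=2.0, stop=4.0, out_dt=0.1)
```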
DOE Office of Scientific and Technical Information (OSTI.GOV)
The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
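A minimal sketch of the kind of processing PIDG automates, using pandas with hypothetical file names and column mappings (this is not PIDG's actual code or schema):

```python
import pandas as pd

# Hypothetical generator-parameter table in a generalized CSV form.
generators = pd.read_csv("generators.csv")

# Light processing, e.g. mapping source columns onto target property names.
generators = generators.rename(columns={"max_mw": "Max Capacity"})

# Write a workbook sheet suitable for import into the modeling tool.
with pd.ExcelWriter("plexos_import.xlsx") as writer:
    generators.to_excel(writer, sheet_name="Generators", index=False)
```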
PATSTAGS - PATRAN-STAGSC-1 TRANSLATOR
NASA Technical Reports Server (NTRS)
Otte, N. E.
1994-01-01
PATSTAGS translates PATRAN finite element model data into STAGS (Structural Analysis of General Shells) input records to be used for engineering analysis. The program reads data from a PATRAN neutral file and writes STAGS input records into a STAGS input file and a UPRESS data file. It supports translation of nodal constraints and of nodal, element, force, and pressure data. PATSTAGS uses three files: the PATRAN neutral file to be translated, a STAGS input file and a STAGS pressure data file. The user provides the names for the neutral file and the desired names of the STAGS files to be created. The pressure data file contains the element live pressure data used in the STAGS subroutine UPRESS. PATSTAGS is written in FORTRAN 77 for DEC VAX series computers running VMS. The main memory requirement for execution is approximately 790K of virtual memory. Output blocks can be modified to output the data in any format desired, allowing the program to be used to translate model data to analysis codes other than STAGSC-1 (HQN-10967). This program is available in DEC VAX BACKUP format on a 9-track magnetic tape or TK50 tape cartridge. Documentation is included in the price of the program. PATSTAGS was developed in 1990. DEC, VAX, TK50 and VMS are trademarks of Digital Equipment Corporation.
User's Guide for the Updated EST/BEST Software System
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2003-01-01
This User's Guide describes the structure of the IPACS input file that reflects the modularity of each module. The structured format helps the user locate specific input data and manually enter or edit it. The IPACS input file can have any user-specified filename, but must have a DAT extension. The input file may consist of up to six input data blocks; the data blocks must be separated by delimiters beginning with the $ character. If multiple sections are desired, they must be arranged in the order listed.
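A minimal Python sketch of splitting such a file on its $ delimiters; the file name is arbitrary (any name with a DAT extension), and the block handling is illustrative rather than IPACS code:

```python
def read_blocks(path):
    """Split an input file into blocks keyed by their $ delimiter line.
    Block names and contents are illustrative; consult the IPACS
    documentation for the real block definitions and ordering rules."""
    blocks, current = {}, None
    with open(path) as f:
        for line in f:
            if line.lstrip().startswith("$"):
                current = line.strip().lstrip("$").strip()
                blocks[current] = []
            elif current is not None:
                blocks[current].append(line.rstrip("\n"))
    return blocks

blocks = read_blocks("model.dat")  # any user-specified name, DAT extension
print(sorted(blocks))
```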
NLEdit: A generic graphical user interface for Fortran programs
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
1994-01-01
NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
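For context, namelist input is a simple name=value format grouped under a named record. The toy Python parser below handles only a small subset (one group, scalar assignments) and is an illustration of the format, not NLEdit's code:

```python
import re

def parse_namelist(text):
    """Parse a tiny subset of Fortran namelist syntax: a single
    &group ... / record containing scalar name=value pairs."""
    m = re.search(r"&(\w+)(.*?)/", text, re.S)
    group, body = m.group(1), m.group(2)
    values = {}
    for name, val in re.findall(r"(\w+)\s*=\s*([^,\n]+)", body):
        values[name] = val.strip()
    return group, values

group, values = parse_namelist("""
&flow
  mach = 0.8,
  alpha = 2.0
/
""")
print(group, values)   # flow {'mach': '0.8', 'alpha': '2.0'}
```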
Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations
Buscheck, Thomas A.
2012-01-01
Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk: FY1 Final Report. The purpose of Phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure, reducing the associated risks such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model index file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory as the Section2.1.*.tar.gz files.
Terrestrial Investigation Model, TIM, has several appendices to its user guide. This is the appendix that includes an example input file in its preserved format. Both parameters and comments defining them are included.
Transferable Output ASCII Data (TOAD) gateway: Version 1.0 user's guide
NASA Technical Reports Server (NTRS)
Bingel, Bradford D.
1991-01-01
The Transferable Output ASCII Data (TOAD) Gateway, release 1.0, is described. This is a software tool for converting tabular data from one format into another via the TOAD format. This initial release of the Gateway allows free data interchange among the following file formats: TOAD; Standard Interface File (SIF); Program to Optimize Simulated Trajectories (POST) input; Comma Separated Value (CSV); and a general free-form file format. As required, additional formats can be accommodated quickly and easily.
Chao, Tian-Jy; Kim, Younghun
2015-02-03
Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
C2x: A tool for visualisation and input preparation for CASTEP and other electronic structure codes
NASA Astrophysics Data System (ADS)
Rutter, M. J.
2018-04-01
The c2x code fills two distinct roles. Its first role is in acting as a converter between the binary format .check files from the widely-used CASTEP [1] electronic structure code and various visualisation programs. Its second role is to manipulate and analyse the input and output files from a variety of electronic structure codes, including CASTEP, ONETEP and VASP, as well as the widely-used 'Gaussian cube' file format. Analysis includes symmetry analysis; manipulation includes arbitrary cell transformations. It continues to be under development, with growing functionality, and is written in a form which makes it easy to extend to work directly with files from other electronic structure codes. Data which c2x is capable of extracting from CASTEP's binary checkpoint files include charge densities, spin densities, wavefunctions, relaxed atomic positions, forces, the Fermi level, the total energy, and symmetry operations. It can recreate .cell input files from checkpoint files. Volumetric data can be output in formats useable by many common visualisation programs, and c2x will itself calculate integrals, expand data into supercells, and interpolate data via combinations of Fourier and trilinear interpolation. It can extract data along arbitrary lines (such as lines between atoms) as 1D output. C2x is able to convert between several common formats for describing molecules and crystals, including the .cell format of CASTEP. It can construct supercells, reduce cells to their primitive form, and add specified k-point meshes. It uses the spglib library [2] to report symmetry information, which it can add to .cell files. C2x is a command-line utility, so is readily included in scripts. It is available under the GPL and can be obtained from http://www.c2x.org.uk. It is believed to be the only open-source code which can read CASTEP's .check files, so it will have utility in other projects.
CARE3MENU - A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: 1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and 2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
Converting from DDOR SASF to APF
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.
2008-01-01
A computer program called ddor_sasf2apf converts delta-DOR (delta differential one-way range) requests from an SASF (spacecraft activity sequence file) format to an APF (apgen plan file) format for use in the Mars Reconnaissance Orbiter (MRO) mission-planning-and-sequencing process. The APF is used as an input to APGEN/AUTOGEN in the MRO activity-planning and command-sequence-generating process to sequence the delta-DOR (DDOR) activity. The DDOR activity is a spacecraft tracking technique for determining spacecraft location. The input to ddor_sasf2apf is a request SASF provided by an observation team that utilizes DDOR. ddor_sasf2apf parses this DDOR SASF input, rearranging parameters and reformatting the request to produce an APF file for use in AUTOGEN and/or APGEN. The benefit afforded by ddor_sasf2apf is to enable the use of the DDOR SASF file earlier in the planning stage of the command-sequence-generating process and to produce sequences, optimized for DDOR operations, that are more accurate and more robust than would otherwise be possible.
Tool for Merging Proposals Into DSN Schedules
NASA Technical Reports Server (NTRS)
Khanampornpan, Teerapat; Kwok, John; Call, Jared
2008-01-01
A Practical Extraction and Reporting Language (Perl) script called merge7da has been developed to facilitate determination, by a project scheduler in NASA's Deep Space Network, of whether a proposal for use of the DSN could create a conflict with the current DSN schedule. Prior to the development of merge7da, there was no way to quickly identify potential schedule conflicts: it was necessary to submit a proposal and wait a day or two for a response from a DSN scheduling facility. By using merge7da to detect and eliminate potential schedule conflicts before submitting a proposal, a project scheduler saves time and gains assurance that the proposal will probably be accepted. merge7da accepts two input files, one of which contains the current DSN schedule and is in a DSN-standard format called '7da'. The other input file contains the proposal and is in another DSN-standard format called 'C1/C2'. merge7da processes the two input files to produce a merged 7da-format output file that represents the DSN schedule as it would be if the proposal were to be adopted. This 7da output file can be loaded into various DSN scheduling software tools now in use.
FEQinput—An editor for the full equations (FEQ) hydraulic modeling system
Ancalle, David S.; Ancalle, Pablo J.; Domanski, Marian M.
2017-10-30
The Full Equations Model (FEQ) is a computer program that solves the full, dynamic equations of motion for one-dimensional unsteady hydraulic flow in open channels and through control structures. As a result, hydrologists have used FEQ to design and operate flood-control structures, delineate inundation maps, and analyze peak-flow impacts. To aid in fighting floods, hydrologists are using the software to develop a system that uses flood-plain models to simulate real-time streamflow. Input files for FEQ are text files that contain large numbers of parameters, data, and instructions written in a format exclusive to FEQ. Although documentation exists that can aid in the creation and editing of these input files, new users face a steep learning curve in understanding the specific format and language of the files. FEQinput provides a set of tools to help a new user overcome that learning curve when creating and modifying input files for the FEQ hydraulic model and the related utility tool, Full Equations Utilities (FEQUTL).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.
1979-07-01
User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.
ProMC: Input-output data format for HEP applications using varint encoding
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; May, E.; Strand, K.; Van Gemmeren, P.
2014-10-01
A new data format for Monte Carlo (MC) events, or any structural data, including experimental data, is discussed. The format is designed to store data in a compact binary form using variable-size integer encoding as implemented in Google's Protocol Buffers package. This approach is implemented in the ProMC library, which produces smaller file sizes for MC records compared to the existing input-output libraries used in high-energy physics (HEP). Other important features of the proposed format are a separation of abstract data layouts from concrete programming implementations, self-description and random access. Data stored in ProMC files can be written, read and manipulated in a number of programming languages, such as C++, Java, Fortran and Python.
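The compactness comes largely from base-128 varint encoding, in which small integers occupy fewer bytes. A minimal Python sketch of that standard Protocol Buffers scheme (not code from the ProMC library itself):

```python
def encode_varint(n):
    """Encode a non-negative integer as a base-128 varint: seven
    payload bits per byte, high bit set on all but the last byte."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data):
    """Decode a varint from the front of a byte sequence."""
    n, shift = 0, 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return n

assert decode_varint(encode_varint(300)) == 300
assert encode_varint(1) == b"\x01"   # small values stay small
```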
File formats commonly used in mass spectrometry proteomics.
Deutsch, Eric W
2012-12-01
The application of mass spectrometry (MS) to the analysis of proteomes has enabled the high-throughput identification and abundance measurement of hundreds to thousands of proteins per experiment. However, the formidable informatics challenge associated with analyzing MS data has required a wide variety of data file formats to encode the complex data types associated with MS workflows. These formats encompass the encoding of input instruction for instruments, output products of the instruments, and several levels of information and results used by and produced by the informatics analysis tools. A brief overview of the most common file formats in use today is presented here, along with a discussion of related topics.
Wrapping Python around MODFLOW/MT3DMS based groundwater models
NASA Astrophysics Data System (ADS)
Post, V.
2008-12-01
Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety of data such as heads, fluxes and concentrations. Typically all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data, which is not constrained by limitations of third-party products.
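A minimal sketch of the kind of output post-processing described above, assuming a hypothetical ASCII file of simulated heads; this uses plain numpy/matplotlib, not the library's own routines:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file layout: one ASCII array of heads per model layer,
# written as nrow x ncol whitespace-separated values.
heads = np.loadtxt("heads_layer1.txt")          # shape (nrow, ncol)

fig, ax = plt.subplots()
cs = ax.contour(heads, colors="k")              # head contours
ax.clabel(cs, fmt="%.1f")                       # label contour levels
ax.set_title("Simulated heads, layer 1")
fig.savefig("heads_layer1.png", dpi=150)
```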
Carey, A.E.; Prudic, David E.
1996-01-01
Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machine (IBM)-compatible microcomputer using the Microsoft Disk Operating System (MS-DOS) version 5.0 or greater.
Because HSPF requires extensive input data, its Data-Formatting Tool (HDFT) allows users to format that data and import it to a WDM file. HDFT aids urban watershed modeling applications that use sub-hourly temporal resolutions.
Addendum I, BIOPLUME III Graphics Conversion to SURFER Format
This procedure can be used to create a SURFER® compatible grid file from Bioplume III input and output graphics. The input data and results from Bioplume III can be contoured and printed directly from SURFER.
The Design and Usage of the New Data Management Features in NASTRAN
NASA Technical Reports Server (NTRS)
Pamidi, P. R.; Brown, W. K.
1984-01-01
Two new data management features are installed in the April 1984 release of NASTRAN. These two features are the Rigid Format Data Base and the READFILE capability. The Rigid Format Data Base is stored on external files in card image format and can be easily maintained and expanded by the use of standard text editors. This data base provides the user and the NASTRAN maintenance contractor with an easy means for making changes to a Rigid Format or for generating new Rigid Formats without unnecessary compilations and link editing of NASTRAN. Each Rigid Format entry in the data base contains the Direct Matrix Abstraction Program (DMAP), along with the associated restart, DMAP sequence subset and substructure control flags. The READFILE capability allows a user to reference an external secondary file from the NASTRAN primary input file and to read data from this secondary file. There is no limit to the number of external secondary files that may be referenced and read.
Java Library for Input and Output of Image Data and Metadata
NASA Technical Reports Server (NTRS)
Deen, Robert; Levoe, Steven
2003-01-01
A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.
Standard interface files and procedures for reactor physics codes, version III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, B.M.
Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized.
Merged analog and photon counting profiles used as input for other RLPROF VAPs
Newsom, Rob
2014-10-03
The rlprof_merge VAP "merges" the photon counting and analog signals appropriately for each channel, creating an output data file that is very similar in format to the original raw data file produced by the Raman lidar.
MISR Level 3 Radiance Versioning
Atmospheric Science Data Center
2016-11-04
ESDT product file name prefixes: MIL3DRD, MIL3MRD, MIL3QRD, and MIL3YRD. See the Data Product Specification Rev K (PDF). Version F02_0007: updated to work with the new format of the input PGE 1 files.
MOVES2014 for Experienced Users, September 2014 Webinar Slides
This webinar assumes a basic knowledge of past versions of the MOtor Vehicle Emission Simulator (MOVES) and includes a demonstration of the conversion of MOVES2010b input files to MOVES2014 format, changes to the MOVES GUI, and new input options.
Software Aids In Graphical Depiction Of Flow Data
NASA Technical Reports Server (NTRS)
Stegeman, J. D.
1995-01-01
Interactive Data Display System (IDDS) computer program is graphical-display program designed to assist in visualization of three-dimensional flow in turbomachinery. Grid and simulation data files in PLOT3D format required for input. Able to unwrap volumetric data cone associated with centrifugal compressor and display results in easy-to-understand two- or three-dimensional plots. IDDS provides majority of visualization and analysis capability for Integrated Computational Fluid Dynamics and Experiment (ICE) system. IDDS invoked from any subsystem, or used as stand-alone package of display software. Generates contour, vector, shaded, x-y, and carpet plots. Written in C language. Input file format used by IDDS is that of PLOT3D (COSMIC item ARC-12782).
Program Description: Financial Master File Processor-SWRL Financial System.
ERIC Educational Resources Information Center
Ideda, Masumi
Computer routines designed to produce various management and accounting reports required by the Southwest Regional Laboratory's (SWRL) Financial System are described. Input data requirements and output report formats are presented together with a discussion of the Financial Master File updating capabilities of the system. This document should be…
Mars Reconnaissance Orbiter Uplink Analysis Tool
NASA Technical Reports Server (NTRS)
Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; Hwang, Pauline
2008-01-01
This software analyzes Mars Reconnaissance Orbiter (MRO) orbital geometry with respect to Mars Exploration Rover (MER) contact windows, and is the first tool of its kind designed specifically to support MRO-MER interface coordination. Prior to this automated tool, this analysis was done manually with Excel and the UNIX command line. In total, the process would take approximately 30 minutes for each analysis. The current automated analysis takes less than 30 seconds. This tool resides on the flight machine and uses a PHP interface that does the entire analysis of the input files and takes into account one-way light time from another input file. Input files are copied over to the proper directories and are dynamically read into the tool's interface. The user can then choose the corresponding input files based on the time frame desired for analysis. After submission of the Web form, the tool merges the two files into a single, time-ordered listing of events for both spacecraft. The times are converted to the same reference time (Earth Transmit Time) by reading in a light time file and performing the calculations necessary to shift the time formats. The program also has the ability to vary the size of the keep-out window on the main page of the analysis tool by inputting a custom time for padding each MRO event time. The parameters on the form are read in and passed to the second page for analysis. Everything is fully coded in PHP and can be accessed by anyone with access to the machine via Web page. This uplink tool will continue to be used for the duration of the MER mission's needs for X-band uplinks. Future missions also can use the tools to check overflight times as well as potential site observation times. Adaptation of the input files to the proper format, and the window keep-out times, would allow for other analyses. Any operations task that uses the idea of keep-out windows will have a use for this program.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
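A minimal Python sketch of the idea: scan an input line for value +/- tolerance tokens and replace each with a random draw. The uniform distribution and the example parameter name are assumptions; the actual codes and tolerance grammar may differ:

```python
import random
import re

# Matches tokens of the form "5.25 +/- 0.01" anywhere on a line.
TOL = re.compile(r"([-+]?\d*\.?\d+)\s*\+/-\s*([-+]?\d*\.?\d+)")

def perturb(line, rng=random):
    """Replace every 'value +/- tol' token with a random draw from
    [value - tol, value + tol] (uniform here; a normal distribution
    would be an equally plausible modeling choice)."""
    def draw(m):
        val, tol = float(m.group(1)), float(m.group(2))
        return repr(rng.uniform(val - tol, val + tol))
    return TOL.sub(draw, line)

# Hypothetical input-file line; one Monte Carlo sample per call.
print(perturb("wall_temperature = 5.25 +/- 0.01"))
```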
SnopViz, an interactive snow profile visualization tool
NASA Astrophysics Data System (ADS)
Fierz, Charles; Egger, Thomas; Gerber, Matthias; Bavay, Mathias; Techel, Frank
2016-04-01
SnopViz is a visualization tool for both simulation outputs of the snow-cover model SNOWPACK and observed snow profiles. It has been designed to fulfil the needs of operational services (Swiss Avalanche Warning Service, Avalanche Canada) as well as offer the flexibility required to satisfy the specific needs of researchers. This JavaScript application runs on any modern browser and does not require an active Internet connection. The open source code is available for download from models.slf.ch where examples can also be run. Both the SnopViz library and the SnopViz User Interface will become a full replacement of the current research visualization tool SN_GUI for SNOWPACK. The SnopViz library is a stand-alone application that parses the provided input files, for example, a single snow profile (CAAML file format) or multiple snow profiles as output by SNOWPACK (PRO file format). A plugin architecture allows for handling JSON objects (JavaScript Object Notation) as well, and plugins for other file formats may be added easily. The outputs are provided either as vector graphics (SVG) or JSON objects. The SnopViz User Interface (UI) is a browser-based stand-alone interface. It runs in every modern browser, including IE, and allows user interaction with the graphs. SVG, the XML based standard for vector graphics, was chosen because of its easy interaction with JS and good software support (Adobe Illustrator, Inkscape) to manipulate graphs outside SnopViz for publication purposes. SnopViz provides new visualization for SNOWPACK timeline output as well as time series input and output. The actual output format for SNOWPACK timelines was retained while time series are read from SMET files, a file format used in conjunction with the open source data handling code MeteoIO. Finally, SnopViz is able to render single snow profiles, either observed or modelled, that are provided as CAAML-file. This file format (caaml.org/Schemas/V5.0/Profiles/SnowProfileIACS) is an international standard to exchange snow profile data. It is supported by the International Association of Cryospheric Sciences (IACS) and was developed in collaboration with practitioners (Avalanche Canada).
NASA Technical Reports Server (NTRS)
Reichert, R. S.; Biringen, S.; Howard, J. E.
1999-01-01
LINER is a system of Fortran 77 codes which performs a 2D analysis of acoustic wave propagation and noise suppression in a rectangular channel with a continuous liner at the top wall. This new implementation is designed to streamline the usage of the several codes making up LINER, resulting in a useful design tool. Major input parameters are placed in two main data files, input.inc and num.prm. Output data appear in the form of ASCII files as well as a choice of GNUPLOT graphs. Section 2 briefly describes the physical model. Section 3 discusses the numerical methods; Section 4 gives a detailed account of program usage, including input formats and graphical options. A sample run is also provided. Finally, Section 5 briefly describes the individual program files.
MOVES2014 at the Project Level for Experienced Users, October 2014 Webinar Slides
This webinar covers the changes that enhance the MOtor Vehicle Emission Simulator at the project scale, changes to its graphical user interface at the project scale, how to convert a MOVES2010b project-level input file to MOVES2014 format, and new input options.
Auto Draw from Excel Input Files
NASA Technical Reports Server (NTRS)
Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.
2011-01-01
The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. More frequent updating of system diagrams can reduce confusion and errors and is likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.
SWIFT MODELLER: a Java based GUI for molecular modeling.
Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S
2011-10-01
MODELLER is command-line-driven software that requires tedious formatting of inputs and the writing of Python scripts, which many people are not comfortable with. The visualization of output also becomes cumbersome due to verbose files. This makes the whole software protocol complex and requires extensive study of MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting and data extraction processes and presents them in an interactive way, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow is a depiction of its steps. It eliminates the formatting of inputs, the scripting process and the analysis of verbose output files through automation, making pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI, which opens and displays the protein data bank files created by the MODELLER software. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required for the software through automation of many of the steps in the original software protocol, thus saving an enormous amount of time per instance and making MODELLER very easy to work with.
UFO (UnFold Operator) default data format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kissel, L.; Biggs, F.; Marking, T.R.
The default format for the storage of x,y data for use with the UFO code is described. The format assumes that the data stored in a file is a matrix of values; two columns of this matrix are selected to define a function of the form y = f(x). This format is specifically designed to allow for easy importation of data obtained from other sources, or easy entry of data using a text editor, with a minimum of reformatting. The format is flexible and extensible through the use of inline directives stored in the optional header of the file. A special extension of the format implements encoded data, which significantly reduces the storage required compared with the unencoded form. UFO supports several extensions to the file specification that implement execute-time operations, such as transformation of the x and/or y values, selection of specific columns of the matrix for association with the x and y values, input of data directly from other formats (e.g., DAMP and PFF), and a simple type of library-structured file format. Several examples of the use of the format are given.
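A minimal Python sketch of the basic matrix-of-columns view, with a hypothetical file name and column choice; inline directives and the encoded-data extension are not handled here:

```python
import numpy as np

# The file is viewed as a matrix; pick two columns to form y = f(x).
data = np.loadtxt("spectrum.dat", comments="#")  # hypothetical file name
x, y = data[:, 0], data[:, 2]                    # e.g. columns 1 and 3

# With y = f(x) in hand, execute-time style operations are simple:
y_scaled = 2.0 * y                   # transform the y values
y_at = np.interp(5.0, x, y)          # evaluate f at an arbitrary x
```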
Organic geochemistry data of Alaska
compiled by Threlkeld, Charles N.; Obuch, Raymond C.; Gunther, G.L.
2000-01-01
In order to archive the results of various petroleum geochemical analyses of the Alaska resource assessment, the USGS developed an Alaskan Organic Geochemical Data Base (AOGDB) in 1978 to house the data generated from USGS and subcontracted laboratories. Prior to the AOGDB, the accumulated data resided in a flat data file entitled 'PGS' that was maintained by Petroleum Information Corporation with technical input from the USGS. The information herein is a breakout of the master flat file format into a relational data base table format (akdata).
Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele
2017-05-15
MapReduce Hadoop bioinformatics applications require the availability of special-purpose routines to manage the input of sequence files. Unfortunately, the Hadoop framework does not provide any built-in support for the most popular sequence file formats like FASTA or BAM. Moreover, the development of these routines is not easy, both because of the diversity of these formats and the need for managing efficiently sequence datasets that may count up to billions of characters. We present FASTdoop, a generic Hadoop library for the management of FASTA and FASTQ files. We show that, with respect to analogous input management routines that have appeared in the Literature, it offers versatility and efficiency. That is, it can handle collections of reads, with or without quality scores, as well as long genomic sequences while the existing routines concentrate mainly on NGS sequence data. Moreover, in the domain where a comparison is possible, the routines proposed here are faster than the available ones. In conclusion, FASTdoop is a much needed addition to Hadoop-BAM. The software and the datasets are available at http://www.di.unisa.it/FASTdoop/.
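Conceptually, the input-management problem is regrouping free-running lines into (header, sequence) records. A single-stream Python sketch of that grouping follows; FASTdoop itself is Java and must additionally cope with records that straddle Hadoop input-split boundaries:

```python
def fasta_records(stream):
    """Yield (header, sequence) pairs from a FASTA text stream."""
    header, chunks = None, []
    for line in stream:
        line = line.rstrip("\n")
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)   # emit previous record
            header, chunks = line[1:], []
        else:
            chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)           # emit final record

from io import StringIO
demo = StringIO(">read1\nACGT\nACGT\n>read2\nTTGA\n")
print(list(fasta_records(demo)))
```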
FAST: Fitting and Assessment of Synthetic Templates
NASA Astrophysics Data System (ADS)
Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis
2018-03-01
FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, the photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.
NASA Astrophysics Data System (ADS)
Ghiringhelli, Luca M.; Carbogno, Christian; Levchenko, Sergey; Mohamed, Fawzi; Huhs, Georg; Lüders, Martin; Oliveira, Micael; Scheffler, Matthias
2017-11-01
With big-data driven materials research, the new paradigm of materials science, sharing and wide accessibility of data are becoming crucial aspects. Obviously, a prerequisite for data exchange and big-data analytics is standardization, which means using consistent and unique conventions for, e.g., units, zero base lines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes and thus develops "converters" for the input and output files of all important codes. These converters then translate the data of each code into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files directly into the same code-independent format. In this perspective paper, we present both strategies and argue that they can and should be regarded as complementary, if not even synergetic. The format and conventions presented here were agreed upon by two teams, the Electronic Structure Library (ESL) of the European Center for Atomic and Molecular Computations (CECAM) and the NOvel MAterials Discovery (NOMAD) Laboratory, a European Centre of Excellence (CoE). A key element of this work is the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BERG, MICHAEL; RILEY, MARSHALL
System assessments typically yield large quantities of data from disparate sources for an analyst to scrutinize for issues. Netmeld is used to parse input from different file formats, store the data in a common format, allow users to easily query it, and enable analysts to tie different analysis tools together using a common back-end.
Computer program documentation: CYBER to Univac binary conversion user's guide
NASA Technical Reports Server (NTRS)
Martin, E. W.
1980-01-01
A user's guide for a computer program which will convert SINDA temperature history data from CDC (Cyber) binary format to UNIVAC 1100 binary format is presented. The various options available, the required input, the optional output, file assignments, and the restrictions of the program are discussed.
Development of Software to Model AXAF-I Image Quality
NASA Technical Reports Server (NTRS)
Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian
1997-01-01
This report describes work conducted on Delivery Order 181 between October 1996 through June 1997. During this period software was written to: compute axial PSD's from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSD's from HDOS "Big 8" axial scans; plot PSD's from FITS format PSD files; plot band-limited RMS vs axial and azimuthal position for multiple PSD files; combine and organize PSD's from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS formatted PSD files; evaluate AXAF-I test results; improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.
Deep PDF parsing to extract features for detecting embedded malware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munson, Miles Arthur; Cross, Jesse S.
2011-09-01
The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout the rest of this report. The features are extracted using an instrumented PDF viewer, and are the inputs to a prediction model that scores the likelihood of a PDF file containing malware. The prediction model is constructed from a sample of labeled data by a machine learning algorithm (specifically, decision tree ensemble learning). Preliminary experiments show that the model is able to detect half of the PDF malware in the corpus with zero false alarms. We conclude the report with suggestions for extending this work to detect a greater variety of PDF malware.
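A minimal sketch of the modeling step, with a fabricated feature matrix standing in for the extracted PDF indicators and the class balance taken from the corpus above; this uses scikit-learn, and the report's actual features and model configuration may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per PDF, columns such as
# object count, JavaScript presence, stream entropy, parse errors.
rng = np.random.default_rng(0)
X = rng.random((2678, 4))                       # placeholder features
y = np.r_[np.zeros(2591), np.ones(87)]          # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)                           # decision tree ensemble

# Score a new file by its estimated probability of containing malware.
score = model.predict_proba(X_te[:1])[0, 1]
```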
Defining Geodetic Reference Frame using Matlab®: PlatEMotion 2.0
NASA Astrophysics Data System (ADS)
Cannavò, Flavio; Palano, Mimmo
2016-03-01
We describe the main features of the developed software tool, namely PlatE-Motion 2.0 (PEM2), which allows inferring the Euler pole parameters by inverting the observed velocities at a set of sites located on a rigid block (inverse problem). PEM2 allows also calculating the expected velocity value for any point located on the Earth providing an Euler pole (direct problem). PEM2 is the updated version of a previous software tool initially developed for easy-to-use file exchange with the GAMIT/GLOBK software package. The software tool is developed in Matlab® framework and, as the previous version, includes a set of MATLAB functions (m-files), GUIs (fig-files), map data files (mat-files) and user's manual as well as some example input files. New changes in PEM2 include (1) some bugs fixed, (2) improvements in the code, (3) improvements in statistical analysis, (4) new input/output file formats. In addition, PEM2 can be now run under the majority of operating systems. The tool is open source and freely available for the scientific community.
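The direct problem reduces to a cross product, v = ω × r, for a point r on a rigid plate rotating with angular velocity vector ω about the Euler pole. A minimal numpy sketch with illustrative numbers (this is not PEM2's Matlab interface):

```python
import numpy as np

R_EARTH = 6371.0e3  # mean Earth radius, m

def plate_velocity(lat, lon, pole_lat, pole_lon, omega_deg_myr):
    """Direct problem: velocity (m/Myr) of a point on a rigid plate
    rotating about an Euler pole, computed as v = omega x r."""
    def unit(lat_d, lon_d):
        la, lo = np.radians([lat_d, lon_d])
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    omega = np.radians(omega_deg_myr) * unit(pole_lat, pole_lon)
    r = R_EARTH * unit(lat, lon)
    return np.cross(omega, r)       # Cartesian velocity vector

# Illustrative pole and site; all numbers are made up for the example.
v = plate_velocity(37.7, 15.0, pole_lat=55.0, pole_lon=-100.0,
                   omega_deg_myr=0.25)
print(np.linalg.norm(v) / 1.0e6, "m/yr")
```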
Bradley, D. Nathan
2013-01-01
The peak discharge of a flood can be estimated from the elevation of high-water marks near the inlet and outlet of a culvert after the flood has occurred. This type of discharge estimate is called an “indirect measurement” because it relies on evidence left behind by the flood, such as high-water marks on trees or buildings. When combined with the cross-sectional geometry of the channel upstream from the culvert and the culvert size, shape, roughness, and orientation, the high-water marks define a water-surface profile that can be used to estimate the peak discharge by using the methods described by Bodhaine (1968). This type of measurement is in contrast to a “direct” measurement of discharge made during the flood where cross-sectional area is measured and a current meter or acoustic equipment is used to measure the water velocity. When a direct discharge measurement cannot be made at a streamgage during high flows because of logistics or safety reasons, an indirect measurement of a peak discharge is useful for defining the high-flow section of the stage-discharge relation (rating curve) at the streamgage, resulting in more accurate computation of high flows. The Culvert Analysis Program (CAP) (Fulford, 1998) is a command-line program written in Fortran for computing peak discharges and culvert rating surfaces or curves. CAP reads input data from a formatted text file and prints results to another formatted text file. Preparing and correctly formatting the input file may be time-consuming and prone to errors. This document describes the CAP graphical user interface (GUI)—a modern, cross-platform, menu-driven application that prepares the CAP input file, executes the program, and helps the user interpret the output.
Abeyta, Cynthia G.; Frenzel, Peter F.
1999-01-01
This report contains listings of model input and output files for the simulation of the time of arrival of landfill leachate at the water table from the Municipal Solid Waste Landfill Facility (MSWLF), about 10 miles northeast of downtown El Paso, Texas. This simulation was done by the U.S. Geological Survey in cooperation with the U.S. Department of the Army, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso, Texas. The U.S. Environmental Protection Agency-developed Hydrologic Evaluation of Landfill Performance (HELP) and Multimedia Exposure Assessment (MULTIMED) computer models were used to simulate the production of leachate by a landfill and transport of landfill leachate to the water table. Model input data files used with and output files generated by the HELP and MULTIMED models are provided in ASCII format on a 3.5-inch 1.44-megabyte IBM-PC compatible floppy disk.
Trick Simulation Environment 07
NASA Technical Reports Server (NTRS)
Lin, Alexander S.; Penn, John M.
2012-01-01
The Trick Simulation Environment is a generic simulation toolkit used for constructing and running simulations. This release includes a Monte Carlo analysis simulation framework and a data analysis package. It produces all auto documentation in XML. Also, the software is capable of inserting a malfunction at any point during the simulation. Trick 07 adds variable server output options and error messaging and is capable of using and manipulating wide characters for international support. Wide character strings are available as a fundamental type for variables processed by Trick. A Trick Monte Carlo simulation uses a statistically generated, or predetermined, set of inputs to iteratively drive the simulation. Also, there is a framework in place for optimization and solution finding where developers may iteratively modify the inputs per run based on some analysis of the outputs. The data analysis package is capable of reading data from external simulation packages such as MATLAB and Octave, as well as the common comma-separated values (CSV) format used by Excel, without the use of external converters. The file formats for MATLAB and Octave were obtained from their documentation sets, and Trick maintains generic file readers for each format. XML tags store the fields in the Trick header comments. For header files, XML tags for structures and enumerations, and the members within are stored in the auto documentation. For source code files, XML tags for each function and the calling arguments are stored in the auto documentation. When a simulation is built, a top level XML file, which includes all of the header and source code XML auto documentation files, is created in the simulation directory. Trick 07 provides an XML to TeX converter. The converter reads in header and source code XML documentation files and converts the data to TeX labels and tables suitable for inclusion in TeX documents. A malfunction insertion capability allows users to override the value of any simulation variable, or call a malfunction job, at any time during the simulation. Users may specify conditions, use the return value of a malfunction trigger job, or manually activate a malfunction. The malfunction action may consist of executing a block of input file statements in an action block, setting simulation variable values, call a malfunction job, or turn on/off simulation jobs.
Attitude profile design program
NASA Technical Reports Server (NTRS)
1991-01-01
The Attitude Profile Design (APD) Program was designed to be used as a stand-alone addition to the Simplex Computation of Optimum Orbital Trajectories (SCOOT). The program uses information from a SCOOT output file and the user defined attitude profile to produce time histories of attitude, angular body rates, and accelerations. The APD program is written in standard FORTRAN77 and should be portable to any machine that has an appropriate compiler. The input and output are through formatted files. The program reads the basic flight data, such as the states of the vehicles, acceleration profiles, and burn information, from the SCOOT output file. The user inputs information about the desired attitude profile during coasts in a high level manner. The program then takes these high level commands and executes the maneuvers, outputting the desired information.
NASA Astrophysics Data System (ADS)
Prasad, U.; Rahabi, A.
2001-05-01
The following utilities, developed for dumping data in the HDF-EOS format, are of special use for Earth science data from NASA's Earth Observing System (EOS). This poster demonstrates their use and application. The first four tools take HDF-EOS data files as input.
* HDF-EOS Metadata Dumper (metadmp) - Extracts metadata from EOS data granules. It operates by simply copying blocks of metadata from the file to the standard output and does not process the metadata in any way. Since all metadata in EOS granules is encoded in the Object Description Language (ODL), the output of metadmp is in the form of complete ODL statements. EOS data granules may contain up to three different sets of metadata (Core, Archive, and Structural metadata).
* HDF-EOS Contents Dumper (heosls) - Displays the contents of HDF-EOS files, providing detailed information on the POINT, SWATH, and GRID data sets in the files; for example, it lists the geolocation fields, data fields, and objects.
* HDF-EOS ASCII Dumper (asciidmp) - Extracts fields from EOS data granules into plain ASCII text. The output from asciidmp should be easily human readable, and with minor editing it can be made ingestible by any application with ASCII import capabilities.
* HDF-EOS Binary Dumper (bindmp) - Dumps HDF-EOS objects in binary format. This is useful for feeding the output into existing programs that do not understand HDF, such as custom software and COTS products.
* HDF-EOS User Friendly Metadata (UFM) - Useful for viewing ECS metadata. UFM takes an EOSDIS ODL metadata file and produces an HTML report of the metadata for display in a web browser.
* HDF-EOS METCHECK - Can be invoked from either a Unix or DOS environment with a set of command-line options that direct the tool's inputs and output. METCHECK validates the inventory metadata (.met file) using the descriptor file (.desc) as the reference; it takes the .desc and .met ODL files as inputs and generates a simple output file containing the results of the checking process.
NEMAR plotting computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program which generates CalComp plots of trajectory parameters is examined. The trajectory parameters are calculated and placed on a data file by the Near Earth Mission Analysis Routine computer program. The plot program accesses the data file and generates the plots as defined by inputs to the plot program. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included. Although this plot program utilizes a random access data file, a data file of the same type and formatted in 102 numbers per record could be generated by any computer program and used by this plot program.
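For illustration only, a compatible record-oriented file could be produced in any language; the sketch below writes fixed-length records of 102 numbers each in Python. The 8-byte little-endian float layout and the output file name are assumptions, since the original random-access format was machine specific.

```python
# Hypothetical sketch: write fixed-length records of 102 numbers each,
# as described for the plot-program input. The 8-byte little-endian
# float layout is an assumption; the original FORTRAN random-access
# file used machine-specific words.
import struct

RECORD_LEN = 102  # numbers per record, per the report

def write_records(path, records):
    """Write an iterable of 102-number sequences as fixed-length records."""
    with open(path, "wb") as f:
        for rec in records:
            if len(rec) != RECORD_LEN:
                raise ValueError("each record must hold exactly 102 numbers")
            f.write(struct.pack(f"<{RECORD_LEN}d", *rec))

# Example: two records of dummy trajectory parameters.
write_records("nemar_plot.dat", [[0.0] * RECORD_LEN, [1.0] * RECORD_LEN])
```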
IOS: PDP 11/45 formatted input/output task stacker and processor [in MACRO-11]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koschik, J.
1974-07-08
IOS allows the programmer to perform formatted input/output at the assembly-language level to or from any peripheral device. It runs under DOS versions V8-08 or V9-19, reading and writing DOS-compatible files. Additionally, IOS will run, with total transparency, in an environment with memory management enabled. The minimum hardware required is a 16K PDP 11/45, a keyboard device, a disk (DK, DF, or DC), and a line-frequency clock. The source language is MACRO-11 (3.3K decimal words).
SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport
Provost, Alden M.
2002-01-01
SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (.inp) file, an initial conditions (.ics) file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.
Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.
Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory
2016-06-13
Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
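The kind of check Keemei automates can be sketched as follows; the rules shown (a '#SampleID' leading header, unique sample IDs limited to alphanumerics and periods) are drawn from the QIIME 1 mapping-file conventions, and the real add-on's rule set is considerably richer.

```python
# Illustrative validator for a QIIME 1 sample metadata mapping file:
# the first header cell must be '#SampleID', and sample IDs must be
# unique and limited to alphanumeric characters and periods.
# (Keemei's actual rule set is much richer; this is a sketch only.)
import csv
import re

def validate_mapping(path):
    errors = []
    with open(path, newline="") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    if not rows or not rows[0] or rows[0][0] != "#SampleID":
        errors.append("first header cell must be '#SampleID'")
        return errors
    seen = set()
    for i, row in enumerate(rows[1:], start=2):
        sid = row[0] if row else ""
        if not re.fullmatch(r"[a-zA-Z0-9.]+", sid):
            errors.append(f"row {i}: invalid sample ID {sid!r}")
        if sid in seen:
            errors.append(f"row {i}: duplicate sample ID {sid!r}")
        seen.add(sid)
    return errors

print(validate_mapping("mapping.tsv"))
```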
Bradley, D. Nathan
2012-01-01
The slope-area method is a technique for estimating the peak discharge of a flood after the water has receded (Dalrymple and Benson, 1967). This type of discharge estimate is called an “indirect measurement” because it relies on evidence left behind by the flood, such as high-water marks (HWMs) on trees or buildings. These indicators of flood stage are combined with measurements of the cross-sectional geometry of the stream, estimates of channel roughness, and a mathematical model that balances the total energy of the flow between cross sections. This is in contrast to a “direct” measurement of discharge during the flood, in which the cross-sectional area is measured and a current meter or acoustic equipment is used to measure the water velocity. When a direct discharge measurement cannot be made at a gage during high flows for logistical or safety reasons, an indirect measurement of peak discharge is useful for defining the high-flow section of the stage-discharge relation (rating curve) at the stream gage, resulting in more accurate computation of high flows. The Slope-Area Computation program (SAC; Fulford, 1994) is an implementation of the slope-area method that computes a peak-discharge estimate from inputs of water-surface slope (from surveyed HWMs), channel geometry, and estimated channel roughness. SAC is a command-line program written in Fortran that reads input data from a formatted text file and prints results to another formatted text file. Preparing the input file can be time-consuming and error-prone. This document describes the SAC graphical user interface (GUI), a cross-platform “wrapper” application that prepares the SAC input file, executes the program, and helps the user interpret the output. The SAC GUI is an update and enhancement of the slope-area method (SAM; Hortness, 2004; Berenbrock, 1996), an earlier spreadsheet tool used to aid field personnel in the completion of a slope-area measurement. The SAC GUI reads survey data, develops a plan-view plot, a water-surface profile, and cross-section plots, and creates the SAC input file. The SAC GUI also develops HEC-2 files that can be imported into HEC-RAS.
User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs
Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.
2008-01-01
This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are:
* CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows;
* DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file;
* MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and
* MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.
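As a generic illustration of the array-to-raster step (not the USGS utilities' actual output layout), a 2-D model array can be written as an ESRI ASCII grid, one common text format that ArcGIS imports; note the single cellsize value, which reflects the square-cell requirement above.

```python
# Sketch: dump a 2-D model array as an ESRI ASCII grid, a plain-text
# raster ArcGIS can import. The single cellsize value is why square,
# axis-aligned cells are required; corner coordinates are placeholders.
import numpy as np

def write_esri_ascii(path, array, xllcorner, yllcorner, cellsize,
                     nodata=-9999):
    nrows, ncols = array.shape
    with open(path, "w") as f:
        f.write(f"ncols {ncols}\nnrows {nrows}\n")
        f.write(f"xllcorner {xllcorner}\nyllcorner {yllcorner}\n")
        f.write(f"cellsize {cellsize}\nNODATA_value {nodata}\n")
        for row in array:
            f.write(" ".join(f"{v:g}" for v in row) + "\n")

write_esri_ascii("head.asc", np.random.rand(4, 5), 0.0, 0.0, 100.0)
```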
mzML2ISA & nmrML2ISA: generating enriched ISA-Tab metadata files from metabolomics XML data.
Larralde, Martin; Lawson, Thomas N; Weber, Ralf J M; Moreno, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; Viant, Mark R; Steinbeck, Christoph; Salek, Reza M
2017-08-15
Submission to the MetaboLights repository for metabolomics data currently places the burden of reporting instrument and acquisition parameters in ISA-Tab format on users, who have to do it manually, a process that is time consuming and prone to user input error. Since the large majority of these parameters are embedded in instrument raw data files, an opportunity exists to capture this metadata more accurately. Here we report a set of Python packages that can automatically generate ISA-Tab metadata file stubs from raw XML metabolomics data files. The parsing packages are separated into mzML2ISA (encompassing mzML and imzML formats) and nmrML2ISA (nmrML format only). Overall, the use of mzML2ISA & nmrML2ISA reduces the time needed to capture metadata substantially (capturing 90% of metadata on assay and sample levels), is much less prone to user input errors, improves compliance with minimum information reporting guidelines and facilitates more finely grained data exploration and querying of datasets. mzML2ISA & nmrML2ISA are available under version 3 of the GNU General Public Licence at https://github.com/ISA-tools. Documentation is available from http://2isa.readthedocs.io/en/latest/. Contact: reza.salek@ebi.ac.uk or isatools@googlegroups.com. Supplementary data are available at Bioinformatics online.
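The first step such extractors must perform, harvesting parameter terms from the XML, can be sketched as below; the namespace URI and the input file name are assumptions, and mzML2ISA itself goes further by mapping the harvested terms onto ISA-Tab fields.

```python
# Sketch: harvest instrument/acquisition parameters from an mzML file
# by collecting its cvParam elements. The namespace URI is the usual
# PSI mzML namespace (an assumption here), and 'example.mzML' is a
# placeholder; mzML2ISA additionally maps such terms onto ISA-Tab.
import xml.etree.ElementTree as ET

NS = "{http://psi.hupo.org/ms/mzml}"

def collect_cvparams(path):
    params = []
    for _event, elem in ET.iterparse(path):
        if elem.tag == NS + "cvParam":
            params.append((elem.get("accession"),
                           elem.get("name"),
                           elem.get("value")))
    return params

for accession, name, value in collect_cvparams("example.mzML"):
    print(accession, name, value or "")
```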
CFEST Coupled Flow, Energy & Solute Transport Version CFEST005 User’s Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Chen, Yousu; Gilca, Alex
2006-07-20
The CFEST (Coupled Flow, Energy, and Solute Transport) simulator described in this User’s Guide is a three-dimensional finite-element model used to evaluate groundwater flow and solute mass transport. Confined and unconfined aquifer systems, as well as constant- and variable-density fluid flows, can be represented with CFEST. For unconfined aquifers, the model uses a moving boundary for the water table, deforming the numerical mesh so that the uppermost nodes are always at the water table. For solute transport, changes in concentration of a single dissolved chemical constituent are computed for advective and hydrodynamic transport, linear sorption represented by a retardation factor, and radioactive decay. Although several thermal parameters described in this User’s Guide are required inputs, thermal transport has not yet been fully implemented in the simulator. Once it is fully implemented, transport of thermal energy in the groundwater and solid matrix of the aquifer can also be used to model aquifer thermal regimes. The CFEST simulator is written in the FORTRAN 77 language, following American National Standards Institute (ANSI) standards. Execution of the CFEST simulator is controlled through three required text input files. These input files use a structured format of associated groups of input data. Example input data lines are presented for each file type, as well as a description of the structured FORTRAN data format. Detailed descriptions of all input requirements, output options, and program structure and execution are provided in this User’s Guide. Required inputs for auxiliary CFEST utilities that aid in post-processing data are also described. Global variables are defined for those with access to the source code. Although CFEST is a proprietary code (CFEST, Inc., Irvine, CA), the Pacific Northwest National Laboratory retains permission to maintain its own source and to distribute executables to Hanford subcontractors.
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. The files fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; and (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
NetpathXL - An Excel Interface to the Program NETPATH
Parkhurst, David L.; Charlton, Scott R.
2008-01-01
NetpathXL is a revised version of NETPATH that runs under Windows operating systems. NETPATH is a computer program that uses inverse geochemical modeling techniques to calculate net geochemical reactions that can account for changes in water composition between initial and final evolutionary waters in hydrologic systems. The inverse models also can account for the isotopic composition of waters and can be used to estimate radiocarbon ages of dissolved carbon in ground water. NETPATH relies on an auxiliary database program, DB, to enter the chemical analyses and to perform speciation calculations that define total concentrations of elements, charge balance, and redox state of aqueous solutions that are then used in inverse modeling. Instead of DB, NetpathXL relies on Microsoft Excel to enter the chemical analyses. The speciation calculation formerly included in DB is implemented within the program NetpathXL. A program DBXL can be used to translate files from the old DB format (.lon files) to NetpathXL spreadsheets, or to create new NetpathXL spreadsheets. Once users have a NetpathXL spreadsheet with the proper format, new spreadsheets can be generated by copying or saving NetpathXL spreadsheets. In addition, DBXL can convert NetpathXL spreadsheets to PHREEQC input files. New capabilities in PHREEQC (version 2.15) allow solution compositions to be written to a .lon file, and inverse models developed in PHREEQC to be written as NetpathXL .pat and model files. NetpathXL can open NetpathXL spreadsheets, NETPATH-format path files (.pat files), and NetpathXL-format path files (.pat files). Once the speciation calculations have been performed on a spreadsheet file or a .pat file has been opened, the NetpathXL calculation engine is identical to the original NETPATH. Development of models and viewing of results in NetpathXL rely on keyboard entry as in NETPATH.
NASA Technical Reports Server (NTRS)
Vos, R. G.; Straayer, J. W.
1975-01-01
Modifications and additions incorporated into the BOPACE 3-D program are described. Updates to the program input data formats, error messages, file usage, size limitations, and overlay schematic are included.
Tolerance and UQ4SIM: Nimble Uncertainty Documentation and Analysis Software
NASA Technical Reports Server (NTRS)
Kleb, Bil
2008-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and variabilities is a necessary first step toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. The basic premise of uncertainty markup is to craft a tolerance and tagging mini-language that offers a natural, unobtrusive presentation and does not depend on parsing each type of input file format. Each file is marked up with tolerances and, optionally, associated tags that serve to label the parameters and their uncertainties. The evolution of such a language, often called a Domain Specific Language or DSL, is given in [1], but in final form it parallels tolerances specified on an engineering drawing, e.g., 1 +/- 0.5, 5 +/- 10%, 2 +/- 1o, where % signifies percent and o signifies order of magnitude. Tags, necessary for error propagation, can be added by placing a quotation-mark-delimited tag after the tolerance, e.g., 0.7 +/- 20% 'T_effective'. In addition, tolerances might have different underlying distributions, e.g., Uniform, Normal, or Triangular, or the tolerances may merely be intervals due to lack of knowledge (uncertainty). Finally, to address pragmatic considerations such as older models that require specific number-field formats, C-style format specifiers can be appended to the tolerance like so, 1.35 +/- 10U_3.2f. As an example of use, consider figure 1, where a chemical reaction input file has been marked up to include tolerances and tags per table 1. Not only does the technique provide a natural method of specifying tolerances, but it also serves as in situ documentation of model uncertainties. This tolerance language comes with a utility to strip the tolerances (and tags), providing a path back to the nominal model parameter file. And, as shown in [1], the ability to quickly mark and identify model parameter uncertainties facilitates error propagation, which in turn yields output uncertainties.
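A minimal sketch of parsing the quoted tolerance forms might look as follows; it covers only the examples shown above, treats 'o' as a symmetric factor of 10**n (one plausible reading), and ignores distribution letters and format suffixes.

```python
# Sketch: parse the tolerance forms quoted above, e.g. "1 +/- 0.5",
# "5 +/- 10%", "2 +/- 1o", "0.7 +/- 20% 'T_effective'". Distribution
# letters and C-style format suffixes are ignored, and the
# order-of-magnitude reading (a symmetric factor of 10**n) is assumed.
import re

PAT = re.compile(r"(?P<nom>[-+.\dEe]+)\s*\+/-\s*(?P<tol>[-+.\d]+)"
                 r"(?P<unit>[%o]?)(?:\s*'(?P<tag>[^']*)')?")

def parse_tolerance(text):
    m = PAT.search(text)
    if m is None:
        return None
    nom, tol = float(m.group("nom")), float(m.group("tol"))
    if m.group("unit") == "%":            # percent of the nominal value
        delta = abs(nom) * tol / 100.0
        lo, hi = nom - delta, nom + delta
    elif m.group("unit") == "o":          # n orders of magnitude
        lo, hi = nom / 10.0 ** tol, nom * 10.0 ** tol
    else:                                 # absolute tolerance
        lo, hi = nom - tol, nom + tol
    return nom, (lo, hi), m.group("tag")

print(parse_tolerance("0.7 +/- 20% 'T_effective'"))
# -> (0.7, (0.56, 0.84), 'T_effective'), up to float rounding
```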
Profex: a graphical user interface for the Rietveld refinement program BGMN.
Doebelin, Nicola; Kleeberg, Reinhard
2015-10-01
Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN's powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems.
A Digital Control Algorithm for Magnetic Suspension Systems
NASA Technical Reports Server (NTRS)
Britton, Thomas C.
1996-01-01
An ongoing program exists to investigate and develop magnetic suspension technologies and modelling techniques at NASA Langley Research Center. Presently, there is a laboratory-scale large air-gap suspension system capable of five degree-of-freedom (DOF) control that is operational and a six-DOF system that is under development. Those systems levitate a cylindrical element containing a permanent magnet core above a planar array of electromagnets, which are used for levitation and control purposes. In order to evaluate various control approaches with those systems, the Generic Real-Time State-Space Controller (GRTSSC) software package was developed. That control software package allows the user to implement multiple control methods and allows for varied input/output commands. The development of the control algorithm is presented. The desired functionality of the software is discussed, including the ability to inject noise on sensor inputs and/or actuator outputs. Various limitations, common issues, and trade-offs are discussed, including data format precision; the drawbacks of using either Direct Memory Access (DMA), interrupts, or program control techniques for data acquisition; and platform-dependent concerns related to the portability of the software, such as memory addressing formats. Efforts to minimize the overall controller loop rate and a comparison of achievable controller sample rates are discussed. The implementation of a modular code structure is presented. The format for the controller input data file and the noise information file is presented. Controller input vector information is available for post-processing by mathematical analysis software such as MATLAB.
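The noise-injection idea can be pictured with a generic discrete control-loop sketch; this is not the GRTSSC interface, and the plant model, gains, step size, and noise levels are purely illustrative.

```python
# Generic sketch of sensor/actuator noise injection in a discrete
# control loop. This is not the GRTSSC interface; the plant model,
# gains, step size, and noise levels are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
K = np.array([50.0, 8.0])          # illustrative feedback gains
x = np.array([0.01, 0.0])          # plant state: position, rate
dt = 0.001                         # controller sample period (s)

for _ in range(1000):
    x_meas = x + rng.normal(0.0, 1e-4, size=2)   # noise on sensor input
    u = -K @ x_meas                              # state-feedback command
    u += rng.normal(0.0, 1e-3)                   # noise on actuator output
    x = x + dt * np.array([x[1], u])             # crude plant integration

print(x)
```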
Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun
2015-01-01
It is time-consuming to build an ontology with many terms and axioms, so it is desirable to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application was developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting-format text file for reuse. The input data file is generated based on a template file created for a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion, and different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms, with both logical axioms and rich annotation axioms, in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by the ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank-specific terms in the Biobank Ontology. A collection of ODPs and templates with examples is provided on the Ontorat website and can be reused to facilitate ontology development. With ever-increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating large numbers of ontology terms by following design patterns. http://ontorat.hegroup.org/.
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file for parallel computing, the user-defined number of CPU threads divides the EPIC simulation into jobs. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, which moves into the EPIC simulation jobs; 28 EPIC jobs then run simultaneously, and only the result files of interest are parsed and moved into the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
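The job-splitting idea (though not the authors' code) can be sketched with Python's standard multiprocessing module; the executable name 'epic' and the per-cell directory layout are assumptions.

```python
# Sketch of the job-splitting idea: run one EPIC simulation per grid
# cell across a pool of workers. The executable name 'epic' and the
# per-cell directory layout 'runs/cell_<id>' are assumptions; the
# input formatters are presumed to have populated each directory.
import subprocess
from multiprocessing import Pool

def run_cell(cell_id):
    workdir = f"runs/cell_{cell_id}"
    subprocess.run(["epic"], cwd=workdir, check=True,
                   stdout=subprocess.DEVNULL)
    return cell_id

if __name__ == "__main__":
    cells = range(1000)                # e.g., a subset of the grid cells
    with Pool(processes=28) as pool:   # 28 workers, as in the test case
        for done in pool.imap_unordered(run_cell, cells):
            pass                       # output analyzers would parse here
```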
User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes
Klein, Fred W.
2002-01-01
Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user's guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, a station list, and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version, and it supersedes the earlier documents. It serves as a detailed user's guide to the current version running on unix and VAX-alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of Hypoinverse includes all of the previous capabilities, but adds Y2000 formats to those defined earlier. In most cases, the new formats add 2 digits to the year field to accommodate the century. Other fields are sometimes rearranged or expanded to accommodate a better field order. The Y2000 formats are invoked with the “200” command. When the Y2000 flag is turned on, all files are read and written in the new format and there is no mixing of format types in a single run. Some formats without a date field, like station files, have not changed. A separate program called 2000CONV has been written to convert old formats to new. Other new features, like expanded station names, calculating amplitude magnitudes from a variety of digital seismometers, station history files, interactive earthquake processing, and locations from CUSP (Caltech USGS Seismic Processing) binary files, have been added. General features. Hypoinverse will locate any number of events in an input file, which can be in one of several different formats. Any or all of printout, summary, or archive output may be produced. Hypoinverse is driven by user commands. The various commands define input and output files, set adjustable parameters, and solve for locations of a file of earthquake data using the parameters and files currently set. It is both interactive and "batch" in that commands may be executed either from the keyboard or from a file. You execute the commands in a file by typing @filename at the Hypoinverse prompt.
Users may either supply parameters on the command line or omit them and be prompted interactively. The current parameter values are displayed and may be taken as defaults by pressing just the RETURN key after the prompt. This makes the program very easy to use, provided you can remember the names of the commands. Combining commands with and without their required parameters into a command file permits a variety of customized procedures, such as automatic input of crustal model and station data but prompting for a different phase file each time. All commands are 3 letters long and most require one or more parameters or file names. If they appear on a line with a command, character strings such as filenames must be enclosed in apostrophes (single quotes). Appendix 1 gives this and other free-format rules for supplying parameters, which are parsed in Fortran. When several parameters are required following a command, any of them may be omitted by replacing them with null fields (see appendix 1). A null field leaves that parameter unchanged from its current or default value. When you start Hypoinverse, default values are in effect for all parameters except file names. Hypoinverse is a complicated program with many features and options. Many of these "advanced" or seldom-used features are documented here, in more detail than a typical user needs when first starting with the program. I have put some of this material in smaller type so that a first-time user can concentrate on the more important information.
A Review of Aeromagnetic Anomalies in the Sawatch Range, Central Colorado
Bankey, Viki
2010-01-01
This report contains digital data and image files of aeromagnetic anomalies in the Sawatch Range of central Colorado. The primary product is a data layer of polygons with linked data records that summarize previous interpretations of aeromagnetic anomalies in this region. None of these data files and images are new; rather, they are presented in updated formats that are intended to be used as input to geographic information systems, standard graphics software, or map-plotting packages.
Application Program Interface for the Orion Aerodynamics Database
NASA Technical Reports Server (NTRS)
Robinson, Philip E.; Thompson, James
2013-01-01
The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have included only data tables and a document describing the algorithm and equations needed to combine them into the total aerodynamic forces and moments. That process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities of built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to, and verification of, the table lookup routines; this may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The input data files are in standard formatted ASCII, also for improved portability. The API contains its own implementation of multidimensional table reading and lookup routines. The same aerodynamics input file can be used without modification on all implementations. The turnaround time from aerodynamics model release to a working implementation is significantly reduced.
QX MAN: Q and X file manipulation
NASA Technical Reports Server (NTRS)
Krein, Mark A.
1992-01-01
QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.
iPat: intelligent prediction and association tool for genomic research.
Chen, Chunpeng James; Zhang, Zhiwu
2018-06-01
The ultimate goal of genomic research is to effectively predict phenotypes from genotypes so that medical management can improve human health and molecular breeding can increase agricultural production. Genomic prediction or selection (GS) plays a complementary role to genome-wide association studies (GWAS), which are the primary method to identify genes underlying phenotypes. Unfortunately, most computing tools cannot perform data analyses for both GWAS and GS. Furthermore, the majority of these tools are executed through a command-line interface (CLI), which requires programming skills. Non-programmers struggle to use them efficiently because of the steep learning curves and the zero tolerance for mistakes in data formats, keywords, and parameters. To address these problems, this study developed a software package, named the Intelligent Prediction and Association Tool (iPat), with a user-friendly graphical user interface. With iPat, GWAS or GS can be performed using a pointing device to simply drag and/or click on graphical elements to specify input data files, choose input parameters and select analytical models. Models available to users include those implemented in third-party CLI packages such as GAPIT, PLINK, FarmCPU, BLINK, rrBLUP and BGLR. Users can choose any data format and conduct analyses with any of these packages. File conversions are automatically conducted for the specified input data and selected packages. A GWAS-assisted genomic prediction method was implemented to perform genomic prediction using any GWAS method such as FarmCPU. iPat was written in Java for adaptation to multiple operating systems including Windows, Mac and Linux. The iPat executable file, user manual, tutorials and example datasets are freely available at http://zzlab.net/iPat. zhiwu.zhang@wsu.edu.
FMC: a one-liner Python program to manage, classify and plot focal mechanisms
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, José A.
2014-05-01
The analysis of earthquake focal mechanisms (or the Seismic Moment Tensor, SMT) is a key tool in seismotectonics research. Each focal mechanism is characterized by several location parameters of the earthquake hypocenter, the earthquake size (magnitude and scalar moment tensor) and some geometrical characteristics of the rupture (nodal plane orientations, SMT components and/or SMT main axes orientations). The aim of FMC is to provide a simple but powerful tool to manage focal mechanism data. The data should be input to the program in one of two of the focal-mechanism formats supported by the GMT (Generic Mapping Tools) package (Wessel and Smith, 1998): the Harvard CMT convention and the single nodal plane Aki and Richards (1980) convention. The former is an SMT format that can be downloaded directly from the Global CMT site (http://www.globalcmt.org/), while the latter is the simplest way to describe earthquake rupture data. FMC is programmed in the Python language, which is distributed as Open Source and GPL-compatible, and therefore can be used to develop Free Software. Python runs on almost any machine and has wide support and presence in any operating system. The program has been conceived with the modularity and versatility of the classical UNIX-like tools. It is called from the command line and can be easily integrated into shell scripts (*NIX systems) or batch files (DOS/Windows systems). The program input and outputs can be done by means of ASCII files or using standard input (or redirection "<"), standard output (screen or redirection ">") and pipes ("|"). By default FMC will read the input and write the output as a Harvard CMT (psmeca formatted) ASCII file, although other formats can be used. Optionally FMC will produce a classification diagram representing the rupture type of the focal mechanisms processed. In order to provide a detailed classification of the focal mechanisms, I decided to classify them into a series of fields that include the oblique slip regimes. This approximation is similar to the Johnston et al. (1994) classification, with 7 classes of earthquakes: 1) Normal; 2) Normal - Strike-slip; 3) Strike-slip - Normal; 4) Strike-slip; 5) Strike-slip - Reverse; 6) Reverse - Strike-slip and 7) Reverse. FMC uses this classification by default in the resulting diagram, based on the Kaverina et al. (1996) projection, which improves on the Frohlich and Apperson (1992) ternary diagram.
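As a rough stand-in for FMC's classification (which works from the SMT principal axes on the Kaverina et al. (1996) projection rather than the rule below), a single nodal-plane rake in the Aki and Richards convention can be binned into the three basic rupture types:

```python
# Rough stand-in for focal-mechanism classification: bin a nodal-plane
# rake angle (Aki & Richards convention) into the three basic rupture
# types. FMC's actual scheme uses the SMT principal axes and the
# Kaverina et al. (1996) projection with seven classes.
def classify_by_rake(rake):
    r = (rake + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    if -135.0 <= r <= -45.0:
        return "Normal"
    if 45.0 <= r <= 135.0:
        return "Reverse"
    return "Strike-slip"

for rake in (-90.0, 0.0, 90.0, 160.0):
    print(rake, classify_by_rake(rake))
```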
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system, Version 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
Kamauu, Aaron W C; DuVall, Scott L; Robison, Reid J; Liimatta, Andrew P; Wiggins, Richard H; Avrin, David E
2006-01-01
Although digital teaching files are important to radiology education, there are currently no satisfactory solutions for export of Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without requiring modifications of the PACS configurations. Radiologists push imaging studies from PACS to RadICS via the standard DICOM Send process, and the RadICS server automatically converts the DICOM images into the Joint Photographic Experts Group format, a common desktop publishing format. They can then select key images and create an interesting case series at the PACS workstation. RadICS was tested successfully against multiple unmodified commercial PACS. Using RadICS, radiologists are able to harvest and author interesting cases at the point of clinical interpretation with minimal disruption in clinical work flow.
External-Compression Supersonic Inlet Design Code
NASA Technical Reports Server (NTRS)
Slater, John W.
2011-01-01
A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.
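The ASCII STL layout referred to above is simple enough to sketch; the writer below emits zero normals, which most STL readers recompute, and the facet data are dummy values.

```python
# Sketch of the ASCII STL layout mentioned above. Facet normals are
# written as zeros, which most STL readers simply recompute; the
# single triangle is dummy data.
def write_ascii_stl(path, facets, name="inlet_surface"):
    """facets: triangles as ((x,y,z), (x,y,z), (x,y,z)) tuples."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v1, v2, v3 in facets:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in (v1, v2, v3):
                f.write(f"      vertex {x:e} {y:e} {z:e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

write_ascii_stl("inlet.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```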
PySE: Python Source Extractor for radio astronomical images
NASA Astrophysics Data System (ADS)
Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark
2018-05-01
PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), grid size, and the forced clean beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE provides a list of found sources; information such as the calculated background image, source lists in different formats (e.g., text, or region files importable in DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).
VizieR Online Data Catalog: Planetary atmosphere radiative transport code (Garcia Munoz+ 2015)
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.; Mills, F. P.
2014-08-01
Files are:
* readme.txt
* Input files: INPUT_hazeL.txt, INPUT_L13.txt, INPUT_L60.txt; they contain explanations of the input parameters. Copy INPUT_XXXX.txt into INPUT.dat to execute one of the examples described in the reference.
* Files with scattering matrix properties: phF_hazeL.txt, phF_L13.txt, phF_L60.txt
* Script for compilation in GFortran (myscript)
(10 data files).
Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.
Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M
2015-01-01
The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to the native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
NASA Technical Reports Server (NTRS)
Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.
1992-01-01
Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to 'actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).
correlcalc: Two-point correlation function from redshift surveys
NASA Astrophysics Data System (ADS)
Rohin, Yeluripati
2017-11-01
correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA) and Declination (DEC) data of galaxy and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally it makes healpix maps of the survey, providing visualization.
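The BallTree pair-counting idea can be sketched with scikit-learn; a real correlcalc run first converts (z, RA, DEC) into comoving Cartesian coordinates for the chosen cosmology and offers more estimators, whereas this toy example uses uniform random points and the simple natural estimator xi = DD/RR - 1.

```python
# Toy sketch of BallTree pair counting with scikit-learn's
# two_point_correlation. Real inputs would be comoving Cartesian
# coordinates derived from (z, RA, DEC) for an assumed cosmology;
# uniform random boxes stand in for both catalogs here.
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 100.0, size=(2000, 3))   # "galaxies"
rand = rng.uniform(0.0, 100.0, size=(2000, 3))   # random catalog

bins = np.linspace(1.0, 20.0, 10)                # separations (Mpc)
dd = BallTree(data).two_point_correlation(data, bins)
rr = BallTree(rand).two_point_correlation(rand, bins)
xi = dd / rr - 1.0                               # natural estimator
print(np.round(xi, 3))
```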
Forensic Analysis of Compromised Computers
NASA Technical Reports Server (NTRS)
Wolfe, Thomas
2004-01-01
Directory Tree Analysis File Generator is a Practical Extraction and Reporting Language (PERL) script that simplifies and automates the collection of information for forensic analysis of compromised computer systems. During such an analysis, it is sometimes necessary to collect and analyze information about files on a specific directory tree. Directory Tree Analysis File Generator collects information of this type (except information about directories) and writes it to a text file. In particular, the script asks the user for the root of the directory tree to be processed, the name of the output file, and the number of subtree levels to process. The script then processes the directory tree and puts out the aforementioned text file. The format of the text file is designed to enable the submission of the file as input to a spreadsheet program, wherein the forensic analysis is performed. The analysis usually consists of sorting files and examination of such characteristics of files as ownership, time of creation, and time of most recent access, all of which characteristics are among the data included in the text file.
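The same collection idea can be sketched in Python (the original PERL script's exact columns and formatting are not reproduced here): walk the tree to a requested depth and emit one tab-separated row of metadata per file, ready for import into a spreadsheet.

```python
# Sketch of the same collection idea in Python: walk a directory tree
# to a given depth and write tab-separated file metadata suitable for
# spreadsheet import. Note st_ctime is inode-change time on Unix, a
# common stand-in for creation time; columns are illustrative.
import os
import sys
import time

def dump_tree(root, out_path, max_depth):
    root = root.rstrip(os.sep)
    with open(out_path, "w") as out:
        out.write("path\tuid\tsize\tchanged\taccessed\n")
        for dirpath, dirnames, filenames in os.walk(root):
            if dirpath[len(root):].count(os.sep) >= max_depth:
                dirnames[:] = []        # do not descend further
            for name in filenames:
                p = os.path.join(dirpath, name)
                st = os.stat(p)
                out.write("\t".join([p, str(st.st_uid), str(st.st_size),
                                     time.ctime(st.st_ctime),
                                     time.ctime(st.st_atime)]) + "\n")

dump_tree(sys.argv[1], sys.argv[2], int(sys.argv[3]))
```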
OpenDrift - an open source framework for ocean trajectory modeling
NASA Astrophysics Data System (ADS)
Dagestad, Knut-Frode; Breivik, Øyvind; Ådlandsvik, Bjørn
2016-04-01
We will present a new, open source tool for modeling the trajectories and fate of particles or substances (Lagrangian elements) drifting in the ocean, or even in the atmosphere. The software is named OpenDrift and has been developed at the Norwegian Meteorological Institute in cooperation with the Institute of Marine Research. OpenDrift is a generic framework written in Python, and is openly available at https://github.com/knutfrode/opendrift/. The framework is modular with respect to three aspects: (1) obtaining input data, (2) the transport/morphological processes, and (3) exporting of results to file. Modularity is achieved through well-defined interfaces between components and use of a consistent vocabulary (CF conventions) for naming variables. Modular input implies that it is not necessary to preprocess input data (e.g., currents, wind and waves from Eulerian models) into a particular file format. Instead, "reader modules" can be written or used to obtain data directly from any original source, including files or web-based protocols (e.g., OPeNDAP/Thredds). Modularity of processes implies that a model developer may focus on the geophysical processes relevant for the application of interest, without needing to handle technical tasks such as reading, reprojecting, and colocating input data, rotating and scaling vectors, and writing model output. We will show a few example applications of using OpenDrift for predicting drifters, oil spills, and search-and-rescue objects.
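A usage sketch along the lines of OpenDrift's documented examples is shown below; class and method names follow the public API at the time of writing, and the OPeNDAP URL is a placeholder.

```python
# Usage sketch following OpenDrift's documented examples; API details
# may differ between versions, and the OPeNDAP URL is a placeholder.
from datetime import datetime, timedelta

from opendrift.models.oceandrift import OceanDrift
from opendrift.readers import reader_netCDF_CF_generic

o = OceanDrift()
reader = reader_netCDF_CF_generic.Reader(
    "https://example.org/thredds/dodsC/ocean_forecast.nc")  # placeholder
o.add_reader(reader)                  # modular input: any CF-named source
o.seed_elements(lon=4.5, lat=60.0, number=1000, radius=500,
                time=datetime.utcnow())
o.run(duration=timedelta(hours=48), outfile="drift.nc")
```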
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
Lee, Woonghee; Kim, Jin Hae; Westler, William M; Markley, John L
2011-06-15
PONDEROSA (Peak-picking Of Noe Data Enabled by Restriction of Shift Assignments) accepts input information consisting of a protein sequence, backbone and sidechain NMR resonance assignments, and 3D-NOESY ((13)C-edited and/or (15)N-edited) spectra, and returns assignments of NOESY crosspeaks, distance and angle constraints, and a reliable NMR structure represented by a family of conformers. PONDEROSA incorporates and integrates external software packages (TALOS+, STRIDE and CYANA) to carry out different steps in the structure determination. PONDEROSA implements internal functions that identify and validate NOESY peak assignments and assess the quality of the calculated three-dimensional structure of the protein. The robustness of the analysis results from PONDEROSA's hierarchical processing steps that involve iterative interaction among the internal and external modules. PONDEROSA supports a variety of input formats: SPARKY assignment table (.shifts) and spectrum file formats (.ucsf), XEASY proton file format (.prot), and NMR-STAR format (.star). To demonstrate the utility of PONDEROSA, we used the package to determine 3D structures of two proteins: human ubiquitin and Escherichia coli iron-sulfur scaffold protein variant IscU(D39A). The automatically generated structural constraints and ensembles of conformers were as good as or better than those determined previously by much less automated means. The program, in the form of binary code along with tutorials and reference manuals, is available at http://ponderosa.nmrfam.wisc.edu/.
Tonkin, M.J.; Hill, Mary C.; Doherty, John
2003-01-01
This document describes the MOD-PREDICT program, which helps evaluate user-defined sets of observations, prior information, and predictions, using the ground-water model MODFLOW-2000. MOD-PREDICT takes advantage of the existing Observation and Sensitivity Processes (Hill and others, 2000) by initiating runs of MODFLOW-2000 and using the output files produced. The names and formats of the MODFLOW-2000 input files are unchanged, such that full backward compatibility is maintained. A new name file and input files are required for MOD-PREDICT. The performance of MOD-PREDICT has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program using the email address available at the web address below. Updates might occasionally be made to this document, to the MOD-PREDICT program, and to MODFLOW-2000. Users can check for updates on the Internet at URL http://water.usgs.gov/software/ground water.html/.
Orzol, Leonard L.; McGrath, Timothy S.
1992-01-01
This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring of data. GIS programs are commonly used to facilitate preparation of the model input data and analysis of model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert the input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. Modifications to MODFLOW and the Streamflow-Routing package were minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.
Electronic collection system for spacelab mission timeline requirements
NASA Technical Reports Server (NTRS)
Lindberg, James P.; Piner, John R.; Huang, Allen K. H.
1995-01-01
This paper describes the Functional Objective Requirements Collection System (FORCS) software tool that has been developed for use by Principal Investigators (PI's) and Payload Element Developers (PED's) on their own personal computers to develop on-orbit timelining requirements for their payloads. The FORCS tool can be used either in a totally stand-alone mode, storing the information in a local file on the user's personal computer hard disk or in a remote mode where the user's computer is linked to a host computer containing the integrated database of the timeline requirements for all of the payloads on a mission. There are a number of features incorporated in the FORCS software to assist the user. The user may move freely back and forth between the various forms for inputting the data. Several methods are used to input the information, depending on the type of the information. These methods range from filling in text boxes, using check boxes and radio buttons, to inputting information into a spreadsheet format. There are automated features provided to assist in developing the proper format for the data, ranging from limit checking on some of the parameters to automatic conversion of different formats of time data inputs to the one standard format used for the timeline scheduling software.
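The automatic time-format conversion mentioned above can be sketched in a few lines; the accepted input styles and the single standard output format below are illustrative, since the abstract does not specify them:

    from datetime import datetime

    # Accepted input styles and the standard output format are assumptions;
    # FORCS's actual formats are not given in the abstract.
    INPUT_FORMATS = ["%Y-%m-%dT%H:%M:%S", "%j/%H:%M:%S", "%H:%M:%S"]
    STANDARD = "%j/%H:%M:%S"   # mission-elapsed style: day-of-year/hh:mm:ss

    def normalize_time(text):
        # Convert any accepted time string to the one standard format.
        for fmt in INPUT_FORMATS:
            try:
                return datetime.strptime(text.strip(), fmt).strftime(STANDARD)
            except ValueError:
                continue
        raise ValueError(f"unrecognized time format: {text!r}")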
A convertor and user interface to import CAD files into worldtoolkit virtual reality systems
NASA Technical Reports Server (NTRS)
Wang, Peter Hor-Ching
1996-01-01
Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and does not have advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C-language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C-language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes, including loops. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate (a minimal sketch of this expansion appears after this record). Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets are output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
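As referenced above, a minimal sketch of the top-down cut-set expansion for fault trees (no digraph cycle handling); the tree encoding is illustrative:

    def minimize(cutsets):
        # Drop any cut set that is a superset of another (keep minimal ones).
        sets = sorted({frozenset(c) for c in cutsets}, key=len)
        minimal = []
        for s in sets:
            if not any(m <= s for m in minimal):
                minimal.append(s)
        return minimal

    def cut_sets(node, tree):
        # tree maps node name -> ("AND"|"OR", [children]); leaves are basic events.
        if node not in tree:
            return [frozenset([node])]            # basic event
        gate, children = tree[node]
        child_sets = [cut_sets(c, tree) for c in children]
        if gate == "OR":                          # union of children's cut sets
            return minimize([s for cs in child_sets for s in cs])
        combined = [frozenset()]                  # AND: cross-product of cut sets
        for cs in child_sets:
            combined = [a | b for a in combined for b in cs]
        return minimize(combined)

    # Example: TOP = (A AND B) OR C
    tree = {"TOP": ("OR", ["G1", "C"]), "G1": ("AND", ["A", "B"])}
    print(cut_sets("TOP", tree))   # [frozenset({'C'}), frozenset({'A', 'B'})]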
Multibody dynamics model building using graphical interfaces
NASA Technical Reports Server (NTRS)
Macala, Glenn A.
1989-01-01
In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.
Incorporating Brokers within Collaboration Environments
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; de Torcy, A.
2013-12-01
A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
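A schematic sketch of the broker pattern described above (not iRODS code; all names are illustrative): each broker translates a small set of generic operations into one back-end protocol, so client code sees a single naming convention regardless of the remote system:

    from abc import ABC, abstractmethod
    from pathlib import Path

    class Broker(ABC):
        # Uniform client-facing interface; each subclass speaks one protocol.
        @abstractmethod
        def get(self, logical_path: str) -> bytes: ...
        @abstractmethod
        def put(self, logical_path: str, data: bytes) -> None: ...

    class PosixBroker(Broker):
        # Maps the unified name space onto a local POSIX directory tree.
        def __init__(self, root):
            self.root = Path(root)
        def get(self, logical_path):
            return (self.root / logical_path).read_bytes()
        def put(self, logical_path, data):
            target = self.root / logical_path
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(data)

    def replicate(src: Broker, dst: Broker, path: str):
        # Client code is independent of which back ends are involved.
        dst.put(path, src.get(path))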
Proposed Computer System for Library Catalog Maintenance. Part II: System Design.
ERIC Educational Resources Information Center
Stein (Theodore) Co., New York, NY.
The logic of the system presented in this report is divided into six parts for computer processing and manipulation. They are: (1) processing of Library of Congress copy, (2) editing of input into standard format, (3) processing of information into and out from the authority files, (4) creation of the catalog records, (5) production of the…
De Oliveira, T; Miller, R; Tarin, M; Cassol, S
2003-01-01
Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based Linux interface that reduces the need for input/output file reformatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial, and parasitic genomes. Each microbial interface was designed for local access and contains GenBank, BLAST-formatted, and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).
Shazman, Shula; Celniker, Gershon; Haber, Omer; Glaser, Fabian; Mandel-Gutfreund, Yael
2007-07-01
Positively charged electrostatic patches on protein surfaces are usually indicative of nucleic acid binding interfaces. Interestingly, many proteins which are not involved in nucleic acid binding possess large positive patches on their surface as well. In some cases, the positive patches on the protein are related to other functional properties of the protein family. PatchFinderPlus (PFplus) http://pfp.technion.ac.il is a web-based tool for extracting and displaying continuous electrostatic positive patches on protein surfaces. The input required for PFplus is either a four letter PDB code or a protein coordinate file in PDB format, provided by the user. PFplus computes the continuum electrostatics potential and extracts the largest positive patch for each protein chain in the PDB file. The server provides an output file in PDB format including a list of the patch residues. In addition, the largest positive patch is displayed on the server by a graphical viewer (Jmol), using a simple color coding.
Drenth, B.J.; Grauch, V.J.S.; Bankey, Viki; New Sense Geophysics, Ltd.
2009-01-01
This report contains digital data, image files, and text files describing data formats and survey procedures for two high-resolution aeromagnetic surveys in south-central Colorado: one in the eastern San Luis Valley, Alamosa and Saguache Counties, and the other in the southern Upper Arkansas Valley, Chaffee County. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve and extends south along the mountain front to the foot of Mount Blanca. In the Upper Arkansas Valley, the Poncha Springs survey covers the town of Poncha Springs and vicinity. The digital files include grids, images, and flight-line data. Several derivative products from these data are also presented as grids and images, including two grids of reduced-to-pole aeromagnetic data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.
NASA Technical Reports Server (NTRS)
Vonhermann, Pieter; Pintz, Adam
1994-01-01
This manual describes the use of the ANSCARES program to prepare a neutral file of FEM stress results taken from ANSYS Release 5.0, in the format needed by the CARES/LIFE ceramics reliability program. It is intended for use by experienced users of ANSYS and CARES. Knowledge of compiling and linking FORTRAN programs is also required. Maximum use is made of existing routines (from other CARES interface programs and ANSYS routines) to extract the finite element results and prepare the neutral file for input to the reliability analysis. FORTRAN and machine-language routines, as described, are used to read the ANSYS results file. Sub-element stresses are computed and written to a neutral file using FORTRAN subroutines which are nearly identical to those used in the NASCARES (MSC/NASTRAN to CARES) interface.
Similarities and Differences in Patterns and Geolocation of SSH Attack Data
2015-09-01
This report analyzes data from a Kippo SSH honeypot, which presents fake file contents (for example, allowing an attacker to "cat" files such as /etc/passwd) and saves all downloaded files for later inspection. The reported statistics include overall post-compromise activity, human activity inside the honeypot, and the top 10 overall, successful, and failed inputs, including the latest "passwd" commands entered by attackers.
A standard format and a graphical user interface for spin system specification.
Biternas, A G; Charnock, G T P; Kuprov, Ilya
2014-03-01
We introduce a simple and general XML format for spin system description that is the result of extensive consultations within the Magnetic Resonance community and unifies under one roof all major existing spin interaction specification conventions. The format is human-readable, easy to edit, and easy to parse using standard XML libraries. We also describe a graphical user interface that was designed to facilitate construction and visualization of complicated spin systems. The interface is capable of generating input files for several popular spin dynamics simulation packages.
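A sketch of what consuming such a format with a standard XML library might look like; the element and attribute names below are invented for illustration and are not the published schema:

    import xml.etree.ElementTree as ET

    doc = """<spin_system>
      <spin id="1" isotope="1H"/>
      <spin id="2" isotope="13C"/>
      <interaction kind="j_coupling" spin_a="1" spin_b="2" value_hz="140.0"/>
    </spin_system>"""

    root = ET.fromstring(doc)
    spins = {s.get("id"): s.get("isotope") for s in root.iter("spin")}
    for ia in root.iter("interaction"):
        a, b = ia.get("spin_a"), ia.get("spin_b")
        print(f"{ia.get('kind')}: {spins[a]}-{spins[b]} = {ia.get('value_hz')} Hz")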
SSM/OOM - SSM WITH OOM MANIPULATION CODE
NASA Technical Reports Server (NTRS)
Goza, S. P.
1994-01-01
Creating, animating, and recording solid-shaded and wireframe three-dimensional geometric models can be of great assistance in the research and design phases of product development, in project planning, and in engineering analyses. SSM and OOM are application programs which together allow for interactive construction and manipulation of three-dimensional models of real-world objects as simple as boxes or as complex as Space Station Freedom. The output of SSM, in the form of binary files defining geometric three-dimensional models, is used as input to OOM. Animation in OOM is done using 3D models from SSM as well as cameras and light sources. The animated results of OOM can be output to videotape recorders, film recorders, color printers, and disk files. SSM and OOM are also available separately as MSC-21914 and MSC-22263, respectively. The Solid Surface Modeler (SSM) is an interactive graphics software application for solid-shaded and wireframe three-dimensional geometric modeling. The program has a versatile user interface that, in many cases, allows mouse input for intuitive operation or keyboard input when accuracy is critical. SSM can be used as a stand-alone model generation and display program and offers high-fidelity still image rendering. Models created in SSM can also be loaded into the Object Orientation Manipulator for animation or engineering simulation. The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM and SSM are written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8Mb of RAM is recommended for each program. The standard distribution medium for this program package is a .25 inch streaming magnetic tape cartridge in UNIX tar format. These versions of OOM and SSM were released in 1993.
A computer program (MACPUMP) for interactive aquifer-test analysis
Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.
1995-01-01
This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type curves and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.
NASA Technical Reports Server (NTRS)
Chen, H. C.; Yu, N. Y.
1991-01-01
An Euler flow solver was developed for predicting the airframe/propulsion integration effects for an aft-mounted turboprop transport. This solver employs a highly efficient multigrid scheme, with a successive mesh-refinement procedure to accelerate the convergence of the solution. A new dissipation model was also implemented to render solutions that are grid insensitive. The propeller power effects are simulated by the actuator disk concept. An embedded flow solution method was developed for predicting the detailed flow characteristics in the local vicinity of an aft-mounted propfan engine in the presence of a flow field induced by a complete aircraft. Results from test case analysis are presented. A user's guide for execution of computer programs, including format of various input files, sample job decks, and sample input files, is provided in an accompanying volume.
Preprocessor and postprocessor computer programs for a radial-flow finite-element model
Pucci, A.A.; Pope, D.A.
1987-01-01
Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite-element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file to use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)
PATHWAYS - ELECTRON TUNNELING PATHWAYS IN PROTEINS
NASA Technical Reports Server (NTRS)
Beratan, D. N.
1994-01-01
The key to understanding the mechanisms of many important biological processes such as photosynthesis and respiration is a better understanding of the electron transfer processes which take place between metal atoms (and other groups) fixed within large protein molecules. Research is currently focused on the rate of electron transfer and the factors that influence it, such as protein composition and the distance between metal atoms. Current models explain the swift transfer of electrons over considerable distances by postulating bridge-mediated tunneling, or physical tunneling pathways, made up of interacting bonds in the medium around and between donor and acceptor sites. The program PATHWAYS is designed to predict the route along which electrons travel in the transfer processes. The basic strategy of PATHWAYS is to begin by recording each possible path element on a connectivity list, including in each entry which two atoms are connected and what contribution the connection would make to the overall rate if it were included in a pathway. The list begins with the bonded molecular structure (including the backbone sequence and side chain connectivity), and then adds probable hydrogen bond links and through-space contacts. Once this list is completed, the program runs a tree search from the donor to the acceptor site to find the dominant pathways. The speed and efficiency of the computer search offers an improvement over manual techniques. PATHWAYS is written in FORTRAN 77 for execution on DEC VAX series computers running VMS. The program inputs data from four data sets and one structure file. The software was written to input BIOGRAF (old format) structure files based on x-ray crystal structures and outputs ASCII files listing the best pathways and BIOGRAF vector files containing the paths. Relatively minor changes could be made in the input format statements for compatibility with other graphics software. The executable and source code are included with the distribution. The main memory requirement for execution is 2.6 Mb. This program is available in DEC VAX BACKUP format on a 9-track 1600 BPI magnetic tape (standard distribution) or on a TK50 tape cartridge. PATHWAYS was developed in 1988. PATHWAYS is a copyrighted work with all copyright vested in NASA. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. BIOGRAF is a trademark of Molecular Simulations, Inc., Sunnyvale, CA.
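A minimal sketch of the pathway-search idea: each path element carries a decay factor, the dominant pathway maximizes the product of those factors, and that maximization can be carried out as a shortest-path search on -log(decay). The decay values and graph encoding are illustrative, not the published parameterization:

    import heapq
    import math

    def best_pathway(edges, donor, acceptor):
        # edges: list of (atom_a, atom_b, decay) with 0 < decay <= 1, e.g.
        # ~0.6 per covalent bond, smaller for H-bond or through-space jumps.
        # Dijkstra on -log(decay) maximizes the product of per-step couplings.
        # Assumes the acceptor is reachable from the donor.
        graph = {}
        for a, b, eps in edges:
            w = -math.log(eps)
            graph.setdefault(a, []).append((b, w))
            graph.setdefault(b, []).append((a, w))
        dist, prev = {donor: 0.0}, {}
        heap = [(0.0, donor)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == acceptor:
                break
            if d > dist.get(u, math.inf):
                continue
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, math.inf):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        path, node = [acceptor], acceptor
        while node != donor:
            node = prev[node]
            path.append(node)
        return path[::-1], math.exp(-dist[acceptor])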
Automatic Feature Extraction System.
1982-12-01
The system supports imagery exploitation. It was used for processing of black-and-white and multispectral reconnaissance photography and side-looking synthetic aperture radar imagery. Given the image data and different software modules for image queuing and formatting, the result of the input process will be images in a standard AFES file format, produced in a timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design.
Dwyer, John L.; Schmidt, Gail L.; Qu, J.J.; Gao, W.; Kafatos, M.; Murphy , R.E.; Salomonson, V.V.
2006-01-01
The MODIS Reprojection Tool (MRT) is designed to help individuals work with MODIS Level-2G, Level-3, and Level-4 land data products. These products are referenced to a global tiling scheme in which each tile is approximately 10° latitude by 10° longitude and non-overlapping (Fig. 9.1). If desired, the user may reproject only selected portions of the product (spatial or parameter subsetting). The software may also be used to convert MODIS products to file formats (generic binary and GeoTIFF) that are more readily compatible with existing software packages. The MODIS land products distributed by the Land Processes Distributed Active Archive Center (LP DAAC) are in the Hierarchical Data Format - Earth Observing System (HDF-EOS), developed by the National Center for Supercomputing Applications at the University of Illinois at Urbana Champaign for the NASA EOS Program. Each HDF-EOS file is comprised of one or more science data sets (SDSs) corresponding to geophysical or biophysical parameters. Metadata are embedded in the HDF file as well as contained in a .met file that is associated with each HDF-EOS file. The MRT supports 8-bit, 16-bit, and 32-bit integer data (both signed and unsigned), as well as 32-bit float data. The data type of the output is the same as the data type of each corresponding input SDS.
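A sketch of one scripted route into such HDF-EOS products using the GDAL Python bindings (the MRT itself is a separate stand-alone tool; the file and subdataset names here are illustrative):

    from osgeo import gdal

    src = gdal.Open("MOD13A2.A2006001.h09v05.005.hdf")  # illustrative file name
    # Each science data set (SDS) in the HDF-EOS file appears as a subdataset
    for name, description in src.GetSubDatasets():
        print(description)

    # Convert the first SDS to GeoTIFF; reprojection could be added via gdal.Warp
    sds_name = src.GetSubDatasets()[0][0]
    gdal.Translate("first_sds.tif", sds_name, format="GTiff")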
ArrayInitiative - a tool that simplifies creating custom Affymetrix CDFs
2011-01-01
Background: Probes on a microarray represent a frozen view of a genome and are quickly outdated when new sequencing studies extend our knowledge, resulting in significant measurement error when analyzing any microarray experiment. There are several bioinformatics approaches to improve probe assignments, but without in-house programming expertise, standardizing these custom array specifications as a usable file (e.g. as Affymetrix CDFs) is difficult, owing mostly to the complexity of the specification file format. However, without correctly standardized files there is a significant barrier for testing competing analysis approaches, since this file is one of the required inputs for many commonly used algorithms. The need to test combinations of probe assignments and analysis algorithms led us to develop ArrayInitiative, a tool for creating and managing custom array specifications.

Results: ArrayInitiative is a standalone, cross-platform, rich client desktop application for creating correctly formatted, custom versions of manufacturer-provided (default) array specifications, requiring only minimal knowledge of the array specification rules and file formats. Users can import default array specifications, import probe sequences for a default array specification, design and import a custom array specification, export any array specification to multiple output formats, export the probe sequences for any array specification, and browse high-level information about the microarray, such as version and number of probes. The initial release of ArrayInitiative supports the Affymetrix 3' IVT expression arrays we currently analyze, but as an open source application, we hope that others will contribute modules for other platforms.

Conclusions: ArrayInitiative allows researchers to create new array specifications, in a standard format, based upon their own requirements. This makes it easier to test competing design and analysis strategies that depend on probe definitions. Since the custom array specifications are easily exported to the manufacturer's standard format, researchers can analyze these customized microarray experiments using established software tools, such as those available in Bioconductor. PMID:21548938
PATSTAGS: PATRAN-To-STAGSC-1 Translator
NASA Technical Reports Server (NTRS)
Otte, Neil
1993-01-01
PATSTAGS computer program translates data from PATRAN finite-element mathematical model into STAGS input records used for engineering analysis. Reads data from PATRAN neutral file and writes STAGS input records into STAGS input file and UPRESS data file. Supports translations of nodal constraints, and of nodal, element, force, and pressure data. Written in FORTRAN 77.
CABS-flex: server for fast simulation of protein structure fluctuations
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2013-01-01
The CABS-flex server (http://biocomp.chem.uw.edu.pl/CABSflex) implements CABS-model–based protocol for the fast simulations of near-native dynamics of globular proteins. In this application, the CABS model was shown to be a computationally efficient alternative to all-atom molecular dynamics—a classical simulation approach. The simulation method has been validated on a large set of molecular dynamics simulation data. Using a single input (user-provided file in PDB format), the CABS-flex server outputs an ensemble of protein models (in all-atom PDB format) reflecting the flexibility of the input structure, together with the accompanying analysis (residue mean-square-fluctuation profile and others). The ensemble of predicted models can be used in structure-based studies of protein functions and interactions. PMID:23658222
An object oriented fully 3D tomography visual toolkit.
Agostinelli, S; Paoli, G
2001-04-01
In this paper we present a modern object-oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets, and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry-standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3, and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame-grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between both methods of less than 1%.
Chapter 21: Programmatic Interfaces - STILTS
NASA Astrophysics Data System (ADS)
Fitzpatrick, M. J.
STILTS is the Starlink Tables Infrastructure Library Tool Set developed by Mark Taylor of the former Starlink Project. STILTS is a command-line tool (see the NVOSS_HOME/bin/stilts command) providing access to the same functionality driving the TOPCAT application and can be run using either the STILTS-specific jar file, or the more general TOPCAT jar file (both are available in the NVOSS_HOME/java/lib directory and are included in the default software environment classpath). The heart of both STILTS and TOPCAT is the STIL Java library. STIL is designed to efficiently handle the input, output and processing of very large tabular datasets and the STILTS task interface makes it an ideal tool for the scripting environment. Multiple formats are supported (including FITS Binary Tables, VOTable, CSV, SQL databases and ASCII, amongst others) and while some tools will generically handle all supported formats, others are specific to the VOTable format. Converting a VOTable to a more script-friendly format is the first thing most users will encounter, but there are many other useful tools as well.
iTOUGH2 Universal Optimization Using the PEST Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.A.
2010-07-01
iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
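A schematic sketch of that protocol flow (simplified; real PEST template and instruction files have their own syntax, which this does not reproduce): parameters are substituted into a text input file, the black-box model is run by a system command, and observations are extracted from its text output. All file names and markers are illustrative:

    import re
    import subprocess

    def forward_run(template_text, params, model_cmd, output_path):
        # Overwrite marked parameters, run the external model, extract observations.
        # Markers like #permeability# stand in for PEST's template delimiters.
        input_text = re.sub(r"#(\w+)#",
                            lambda m: str(params[m.group(1)]), template_text)
        with open("model.inp", "w") as f:
            f.write(input_text)
        subprocess.run(model_cmd, check=True)      # e.g. ["./mymodel", "model.inp"]
        observations = []
        with open(output_path) as f:
            for line in f:
                if line.startswith("OBS"):         # illustrative extraction rule
                    observations.append(float(line.split()[-1]))
        return observations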
CFL3D User's Manual (Version 5.0)
NASA Technical Reports Server (NTRS)
Krist, Sherrie L.; Biedron, Robert T.; Rumsey, Christopher L.
1998-01-01
This document is the User's Manual for the CFL3D computer code, a thin-layer Reynolds-averaged Navier-Stokes flow solver for structured multiple-zone grids. Descriptions of the code's input parameters, non-dimensionalizations, file formats, boundary conditions, and equations are included. Sample 2-D and 3-D test cases are also described, and many helpful hints for using the code are provided.
Life and dynamic capacity modeling for aircraft transmissions
NASA Technical Reports Server (NTRS)
Savage, Michael
1991-01-01
A computer program to simulate the dynamic capacity and life of parallel-shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.
Web-based Toolkit for Dynamic Generation of Data Processors
NASA Astrophysics Data System (ADS)
Patel, J.; Dascalu, S.; Harris, F. C.; Benedict, K. K.; Gollberg, G.; Sheneman, L.
2011-12-01
All computation-intensive scientific research uses structured datasets, including hydrology and all other types of climate-related research. When it comes to testing their hypotheses, researchers might use the same dataset differently, and modify, transform, or convert it to meet their research needs. Currently, many researchers spend a good amount of time performing data processing and building tools to speed up this process. They might routinely repeat the same process activities for new research projects, spending precious time that otherwise could be dedicated to analyzing and interpreting the data. Numerous tools are available to run tests on prepared datasets, and many of them work with datasets in different formats. However, there is still a significant need for applications that can comprehensively handle data transformation and conversion activities and help prepare the various processed datasets required by the researchers. We propose a web-based application (a software toolkit) that dynamically generates data processors capable of performing data conversions, transformations, and customizations based on user-defined mappings and selections. As a first step, the proposed solution allows the users to define various data structures and, in the next step, to select various file formats and data conversions for their datasets of interest. In a simple scenario, the core of the proposed web-based toolkit allows the users to define direct mappings between input and output data structures. The toolkit will also support defining complex mappings involving the use of pre-defined sets of mathematical, statistical, date/time, and text manipulation functions. Furthermore, the users will be allowed to define logical cases for input data filtering and sampling. At the end of the process, the toolkit is designed to generate reusable source code and executable binary files for download and use by the scientists. The application is also designed to store all data structures and mappings defined by a user (an author), and allow the original author to modify them using standard authoring techniques. The users can change or define new mappings to create new data processors for download and use. In essence, when executed, the generated data processor binary file can take an input data file in a given format and output this data, possibly transformed, in a different file format. If they so desire, the users will be able to modify the source code directly in order to define more complex mappings and transformations that are not currently supported by the toolkit. Initially aimed at supporting research in hydrology, the toolkit's functions and features can be either directly used or easily extended to other areas of climate-related research. The proposed web-based data processing toolkit will be able to generate various custom software processors for data conversion and transformation in a matter of seconds or minutes, saving a significant amount of researchers' time and allowing them to focus on core research issues.
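A minimal sketch of the mapping-driven core such a generator might emit, with a dictionary of field mappings and transform functions standing in for the user-defined mappings (all field names and the unit conversion are illustrative):

    import csv
    import json

    # Each output field maps to (input field, transform); identity if None.
    MAPPING = {
        "site_id":  ("station", None),
        "flow_cms": ("flow_cfs", lambda v: float(v) * 0.0283168),  # cfs -> m^3/s
        "date":     ("obs_date", None),
    }

    def process(in_csv, out_json):
        # Convert a CSV dataset to JSON records under the field mapping above.
        records = []
        with open(in_csv, newline="") as f:
            for row in csv.DictReader(f):
                rec = {}
                for out_field, (in_field, fn) in MAPPING.items():
                    value = row[in_field]
                    rec[out_field] = fn(value) if fn else value
                records.append(rec)
        with open(out_json, "w") as f:
            json.dump(records, f, indent=2)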
NASA Technical Reports Server (NTRS)
Ullman, Richard; Bane, Bob; Yang, Jingli
2008-01-01
A computer program partly automates the task of determining whether an HDF-EOS 5 file is valid in that it conforms to specifications for such characteristics as attribute names, dimensionality of data products, and ranges of legal data values. ["HDF-EOS" and variants thereof are defined in "Converting EOS Data From HDF-EOS to netCDF" (GSC-15007-1), which is the first of several preceding articles in this issue of NASA Tech Briefs.] Previously, validity of a file was determined in a tedious and error-prone process in which a person examined human-readable dumps of data-file-format information. The present software helps a user to encode the specifications for an HDF-EOS 5 file, and then inspects the file for conformity with the specifications: First, the user writes the specifications in Extensible Markup Language (XML) by use of a document type definition (DTD) that is part of the program. Next, the portion of the program (denoted the validator) that performs the inspection is executed, using, as inputs, the specifications in XML and the HDF-EOS 5 file to be validated. Finally, the user examines the output of the validator.
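A schematic sketch of the validation approach (an XML-encoded specification checked against the file); the spec schema and the use of h5py are illustrative, not the program's actual DTD or implementation:

    import xml.etree.ElementTree as ET
    import h5py

    def validate(spec_xml, hdf_path):
        # Check dataset presence, rank, and legal value range against an XML spec.
        spec = ET.parse(spec_xml).getroot()
        errors = []
        with h5py.File(hdf_path, "r") as f:
            for ds in spec.iter("dataset"):
                name = ds.get("name")
                if name not in f:
                    errors.append(f"missing dataset: {name}")
                    continue
                data = f[name]
                if data.ndim != int(ds.get("rank")):
                    errors.append(f"{name}: rank {data.ndim}, "
                                  f"expected {ds.get('rank')}")
                lo, hi = float(ds.get("min")), float(ds.get("max"))
                values = data[...]
                if values.min() < lo or values.max() > hi:
                    errors.append(f"{name}: values outside [{lo}, {hi}]")
        return errors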
Wake Vortex Inverse Model User's Guide
NASA Technical Reports Server (NTRS)
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input file, with preferred parameters values, is given in Appendix A. An example of the plot generated at a normal completion of the inversion is shown in Appendix B.
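A minimal sketch of the iteration logic described above: repeated forward runs, an rms comparison against the observed trajectories, and the stated stopping rule (improvement under 1 percent for two consecutive iterations). The forward model and parameter-update step are placeholders:

    import numpy as np

    def invert(observed, forward_model, update_params, params, max_iter=100):
        # Iterate forward runs until rms improvement < 1% twice in a row.
        prev_rms, small_gains = np.inf, 0
        for _ in range(max_iter):
            predicted = forward_model(params)   # lateral position and altitude
            rms = np.sqrt(np.mean((observed - predicted) ** 2))
            if prev_rms < np.inf and (prev_rms - rms) / prev_rms < 0.01:
                small_gains += 1                # counts stalls (or worsening)
                if small_gains == 2:
                    break
            else:
                small_gains = 0
            prev_rms = rms
            params = update_params(params, observed, predicted)
        return params, rms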
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, and CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav. An accompanying table graphic summarizes the inversion results and indicates that the fault gouge permeability can be estimated even if imperfect guesses are used for the matrix and damage-zone permeabilities and permeability anisotropy is not taken into account.
Briel, L.I.
1993-01-01
A computer program was written to produce six different types of water-quality diagrams (Piper, Stiff, pie, X-Y, boxplot, and Piper 3-D) from the same file of input data. The Piper 3-D diagram is a new method that projects values from the surface of a Piper plot into a triangular prism to show how variations in chemical composition can be related to variations in other water-quality variables. This program is an analytical tool to aid in the interpretation of data. The program is interactive, and the user can select from a menu the type of diagram to be produced and a large number of individual features. Alternatively, these choices can be specified in the data file, which provides a batch mode for running the program. The program does not display water-quality diagrams directly; plots are written to a file. Four different plot-file formats are available: device-independent metafiles, Adobe PostScript graphics files, and two Hewlett-Packard graphics language formats (7475 and 7586). An ASCII data-table file is also produced to document the computed values. This program is written in Fortran 77 and uses graphics subroutines from either the PRIOR AGTK or the DISSPLA graphics library. The program has been implemented on Prime series 50 and Data General Aviion computers within the USGS; portability to other computing systems depends on the availability of the graphics library.
CheckDen, a program to compute quantum molecular properties on spatial grids.
Pacios, Luis F; Fernandez, Alberto
2009-09-01
CheckDen, a program to compute quantum molecular properties on a variety of spatial grids, is presented. The program reads as its only input wavefunction files written by standard quantum packages and calculates the electron density rho(r), promolecule and density difference function, gradient of rho(r), Laplacian of rho(r), information entropy, electrostatic potential, kinetic energy densities G(r) and K(r), electron localization function (ELF), and localized orbital locator (LOL) function. These properties can be calculated on a wide range of one-, two-, and three-dimensional grids that can be processed by widely used graphics programs to render high-resolution images. CheckDen also offers other options, such as extracting separate atom contributions to the computed property, converting grid output data into CUBE and OpenDX volumetric data formats, and performing arithmetic combinations of grid files in all the recognized formats.
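As an illustration of the volumetric conversion CheckDen offers, the sketch below writes a three-dimensional grid in the Gaussian CUBE layout (two comment lines; an atom count and grid origin; three voxel-axis lines; one record per atom; then values, six per line). This is a generic, hedged example, not CheckDen's own code:

    import numpy as np

    def write_cube(path, density, origin, axes, atoms):
        """Write a 3D grid (shape nx,ny,nz) in Gaussian CUBE format.
        axes: 3x3 array of voxel step vectors (bohr); atoms: list of (Z, x, y, z)."""
        nx, ny, nz = density.shape
        with open(path, "w") as f:
            f.write("Density grid\nGenerated by a CheckDen-style exporter (example)\n")
            f.write(f"{len(atoms):5d}{origin[0]:12.6f}{origin[1]:12.6f}{origin[2]:12.6f}\n")
            for n, ax in zip((nx, ny, nz), axes):
                f.write(f"{n:5d}{ax[0]:12.6f}{ax[1]:12.6f}{ax[2]:12.6f}\n")
            for z, x, y, zc in atoms:
                # second field is the nuclear charge, conventionally equal to Z
                f.write(f"{z:5d}{float(z):12.6f}{x:12.6f}{y:12.6f}{zc:12.6f}\n")
            for ix in range(nx):
                for iy in range(ny):
                    row = density[ix, iy, :]
                    for k in range(0, nz, 6):
                        f.write("".join(f"{v:13.5E}" for v in row[k:k + 6]) + "\n")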
Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,
2006-01-01
This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December, 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.
Image processing tool for automatic feature recognition and quantification
Chen, Xing; Stoddard, Ryan J.
2017-05-02
A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
RCHILD - an R-package for flexible use of the landscape evolution model CHILD
NASA Astrophysics Data System (ADS)
Dietze, Michael
2014-05-01
Landscape evolution models provide powerful approaches to numerically assess earth surface processes, quantify rates of landscape change, infer sediment transfer rates, estimate sediment budgets, investigate the consequences of changes in external drivers on a geomorphic system, provide spatio-temporal interpolations between known landscape states, or test conceptual hypotheses. CHILD (Channel-Hillslope Integrated Landscape Development Model) is one of the most widely used models of landscape change, particularly for interacting tectonic and geomorphologic processes. Running CHILD from the command line and working with the model output can be a rather awkward task (static model control via a text input file, only numeric output in text files). The package RCHILD is a collection of functions for the free statistical software R that help to use CHILD in a flexible, dynamic, and user-friendly way. The comprised functions allow creating maps, real-time scenes, animations, and further thematic plots from model output. The model input files can be modified dynamically, and hence (feedback-related) changes in external factors can be implemented iteratively, as sketched below. Output files can be written to common formats that can be readily imported into standard GIS software. This contribution presents the basic functionality of the model CHILD as visualised and modified by the package. A rough overview of the available functions is given. Application examples help to illustrate the great potential of numeric modelling of geomorphologic processes.
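A language-neutral sketch of that wrap-and-iterate pattern (rewrite the static text input file, run the model, repeat with updated drivers) is given below in Python; RCHILD itself is written in R, and the placeholder names, template convention, and executable path are assumptions:

    import subprocess

    def run_child(template_path, params, input_path="run.in", exe="./child"):
        """Rewrite a CHILD-style text input file from a template and run the model.
        `params` maps placeholder names (hypothetical) to values."""
        with open(template_path) as f:
            text = f.read()
        for name, value in params.items():
            text = text.replace("{" + name + "}", str(value))   # e.g. {UPLIFT_RATE}
        with open(input_path, "w") as f:
            f.write(text)
        subprocess.run([exe, input_path], check=True)           # one model run

    # Feedback-related change in an external driver, applied iteratively:
    for step, uplift in enumerate([0.001, 0.002, 0.004]):
        run_child("child_template.in", {"UPLIFT_RATE": uplift, "RUN_ID": step})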
VizieR Online Data Catalog: Algorithm for correcting CoRoT raw light curves (Mislis+, 2010)
NASA Astrophysics Data System (ADS)
Mislis, D.; Schmitt, J. H. M. M.; Carone, L.; Guenther, E. W.; Patzold, M.
2010-10-01
Requirements: gfortran (or g77, ifort) compiler. Input files: the input files should be raw CoRoT txt files (http://idoc-corot.ias.u-psud.fr/index.jsp) with names CoRoT*.txt. Run the CDA by typing C>: ./cda.csh (code and data should be in the same directory). Output files: CDA creates one ASCII output file named CoRoT*.R.cor for the R filter (2 data files).
DaCHS: Data Center Helper Suite
NASA Astrophysics Data System (ADS)
Demleitner, Markus
2018-04-01
DaCHS, the Data Center Helper Suite, is an integrated package for publishing astronomical data sets to the Virtual Observatory. Network-facing, it speaks the major VO protocols (SCS, SIAP, SSAP, TAP, Datalink, etc.). Operator-facing, many input formats, including FITS/WCS, ASCII files, and VOTable, can be processed to publication-ready data. DaCHS puts particular emphasis on integrated metadata handling, which facilitates a tight integration with the VO's Registry.
MTpy: A Python toolbox for magnetotellurics
Krieger, Lars; Peacock, Jared R.
2014-01-01
In this paper, we introduce the structure and concept of MTpy. Additionally, we show some examples from an everyday workflow of MT data processing: the generation of standard EDI data files from raw electric (E-) and magnetic flux density (B-) field time series as input, the conversion into MiniSEED data format, as well as the generation of a graphical data representation in the form of a Phase Tensor pseudosection.
A Study to Determine the Correlation between Continuity of Care and Patient Medication Compliance
1984-08-01
A graduate research project to determine the correlation between continuity of care and patient medication compliance. Appendices include a patient medication compliance questionnaire and the computer-coded input format. Failure to comply with medical recommendations results in a waste of health resources and frustration to the patient.
CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients
NASA Technical Reports Server (NTRS)
Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.
2001-01-01
For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
Brady's Geothermal Field - March 2016 Vibroseis SEG-Y Files and UTM Locations
Kurt Feigl
2016-03-31
PoroTomo March 2016 (Task 6.4) Updated vibroseis source locations with UTM locations. Supersedes gdr.openei.org/submissions/824. Updated vibroseis source location data for Stages 1-4, PoroTomo March 2016. This revision includes source point locations in UTM format (meters) for all four Stages of active source acquisition. Vibroseis sweep data were collected on a Signature Recorder unit (mfr Seismic Source) mounted in the vibroseis cab during the March 2016 PoroTomo active seismic survey Stages 1 to 4. Each sweep generated a GPS-timed SEG-Y file with 4 input channels and a 20 second record length. Ch1 = pilot sweep, Ch2 = accelerometer output from the vibe's mass, Ch3 = accelerometer output from the baseplate, and Ch4 = weighted sum of the accelerometer outputs. SEG-Y files are available via the links below.
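One way to inspect such files (an assumption on our part; the submission does not prescribe tooling) is the open-source segyio library, which exposes traces as arrays; with four input channels per sweep, the pilot, mass, baseplate, and weighted-sum records are read as consecutive traces:

    import segyio

    # Hypothetical file name; each sweep file holds 4 channels with 20 s records.
    with segyio.open("sweep_0001.sgy", ignore_geometry=True) as f:
        print("traces:", f.tracecount, "samples per trace:", len(f.samples))
        pilot, mass, baseplate, wsum = (f.trace[i] for i in range(4))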
Implementation of AAPG exchange format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiser, K.; Guerrero, I.
1989-03-01
The American Association of Petroleum Geologists (AAPG) has proposed a format for exchanging geologic and other petroleum data. The AAPG Computer Applications Committee approved the proposal at the March 1988 AAPG annual meeting in Houston, Texas. By adopting this format, data input into application software and data exchange between software packages are greatly simplified. Benefits to both users and suppliers of software are substantial. The AAPG exchange format supports a flexible, generic data structure. This flexibility allows application software to use the standard format for storing internal control data. In some cases, extensions to the standard format, such as separation of header and data files and use of data delimiters, permit the use of AAPG format translator programs on data that were defined and generated before the emergence of the exchange format. Translation software, programmed in C, has been written and contributes to successful implementation of the AAPG exchange format in application software.
Parallax Player: a stereoscopic format converter
NASA Astrophysics Data System (ADS)
Feldman, Mark H.; Lipton, Lenny
2003-05-01
The Parallax Player is a software application that is, in essence, a stereoscopic format converter. Various formats may be input and output, and the player can take any one of a wide variety of formats and play it back on many different kinds of PCs and display screens. The Parallax Player also has built into it the capability to produce ersatz stereo from a planar still or movie image. The player handles two basic forms of digital content: still images and movies. It is assumed that all data are digital, either created by means of a photographic film process and later digitized, or directly captured or authored in digital form. In its current implementation, running on a number of Windows operating systems, the Parallax Player reads in a broad selection of contemporary file formats.
Parkhurst, David L.; Kipp, Kenneth L.; Engesgaard, Peter; Charlton, Scott R.
2004-01-01
The computer program PHAST simulates multi-component, reactive solute transport in three-dimensional saturated ground-water flow systems. PHAST is a versatile ground-water flow and solute-transport simulator with capabilities to model a wide range of equilibrium and kinetic geochemical reactions. The flow and transport calculations are based on a modified version of HST3D that is restricted to constant fluid density and constant temperature. The geochemical reactions are simulated with the geochemical model PHREEQC, which is embedded in PHAST. PHAST is applicable to the study of natural and contaminated ground-water systems at a variety of scales ranging from laboratory experiments to local and regional field scales. PHAST can be used in studies of migration of nutrients, inorganic and organic contaminants, and radionuclides; in projects such as aquifer storage and recovery or engineered remediation; and in investigations of the natural rock-water interactions in aquifers. PHAST is not appropriate for unsaturated-zone flow, multiphase flow, density-dependent flow, or waters with high ionic strengths. A variety of boundary conditions are available in PHAST to simulate flow and transport, including specified-head, flux, and leaky conditions, as well as the special cases of rivers and wells. Chemical reactions in PHAST include (1) homogeneous equilibria using an ion-association thermodynamic model; (2) heterogeneous equilibria between the aqueous solution and minerals, gases, surface complexation sites, ion exchange sites, and solid solutions; and (3) kinetic reactions with rates that are a function of solution composition. The aqueous model (elements, chemical reactions, and equilibrium constants), minerals, gases, exchangers, surfaces, and rate expressions may be defined or modified by the user. A number of options are available to save results of simulations to output files. The data may be saved in three formats: a format suitable for viewing with a text editor; a format suitable for exporting to spreadsheets and post-processing programs; or in Hierarchical Data Format (HDF), which is a compressed binary format. Data in the HDF file can be visualized on Windows computers with the program Model Viewer and extracted with the utility program PHASTHDF; both programs are distributed with PHAST. Operator splitting of the flow, transport, and geochemical equations is used to separate the three processes into three sequential calculations. No iterations between transport and reaction calculations are implemented. A three-dimensional Cartesian coordinate system and finite-difference techniques are used for the spatial and temporal discretization of the flow and transport equations. The non-linear chemical equilibrium equations are solved by a Newton-Raphson method, and the kinetic reaction equations are solved by a Runge-Kutta or an implicit method for integrating ordinary differential equations. The PHAST simulator may require large amounts of memory and long Central Processing Unit (CPU) times. To reduce the long CPU times, a parallel version of PHAST has been developed that runs on a multiprocessor computer or on a collection of computers that are networked. The parallel version requires Message Passing Interface, which is currently (2004) freely available. The parallel version is effective in reducing simulation times. This report documents the use of the PHAST simulator, including running the simulator, preparing the input files, selecting the output files, and visualizing the results. 
It also presents four examples that verify the numerical method and demonstrate the capabilities of the simulator. PHAST requires three input files. Only the flow and transport file is described in detail in this report. The other two files, the chemistry data file and the database file, are identical to PHREEQC files and the detailed description of these files is found in the PHREEQC documentation.
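The operator-splitting sequence described above (flow, then transport, then chemistry within each time step, with no iteration between the transport and reaction calculations) can be illustrated with a hedged one-dimensional toy; PHAST itself solves three-dimensional finite-difference equations with PHREEQC chemistry, so both functions below are simplified stand-ins:

    import numpy as np

    def transport_step(c, velocity, dx, dt):
        """Explicit upwind advection of a concentration profile (toy stand-in
        for PHAST's finite-difference transport solve)."""
        c_new = c.copy()
        c_new[1:] -= velocity * dt / dx * (c[1:] - c[:-1])
        return c_new

    def reaction_step(c, rate, dt):
        """First-order kinetic decay as a stand-in for the chemistry step."""
        return c * np.exp(-rate * dt)

    c = np.zeros(100); c[0] = 1.0   # initial slug at the inflow boundary
    for _ in range(200):            # operator splitting: transport, then reaction
        c = transport_step(c, velocity=1.0, dx=1.0, dt=0.5)
        c = reaction_step(c, rate=0.01, dt=0.5)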
NASA Astrophysics Data System (ADS)
Chang, C.; Li, M.; Yeh, G.
2010-12-01
The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed in FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions, including aqueous complexation, adsorption/desorption, ion exchange, redox, precipitation/dissolution, acid-base reactions, and microbially mediated reactions, is embodied in this unique modeling tool. Any reaction can be treated as fast/equilibrium or slow/kinetic. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate, microbially mediated enzymatic kinetics, or a user-specified rate equation. None of the existing models has encompassed this wide array of scopes. To ease the input/output learning curve for the unique features of BIOGEOCHEM, an interactive graphical user interface was developed with Microsoft Visual Studio and .Net tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically formatted to conform to the BIOGEOCHEM input file format. A post-processor for graphic visualization of simulated results was also embedded for immediate demonstrations. By following the data input windows step by step, error-free BIOGEOCHEM input files can be created even by users with little prior experience in FORTRAN. With this user-friendly interface, the time and effort needed to conduct simulations with BIOGEOCHEM can be greatly reduced.
Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valencia, Jayson F.; Dirks, James A.
2008-08-29
EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run an EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that would not be practical for creating the input files for the thousands of buildings needed to simulate national building energy performance. To streamline the creation of EnergyPlus input files, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second method carries out all of the preprocessing on a Linux cluster using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using the Linux utility "make", the idf files can then be run automatically through the Linux cluster, and the desired data from each building can be aggregated into one table for analysis. Creating a large number of EnergyPlus input files makes it possible to batch simulate building energy performance and scale the results to national energy consumption estimates.
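A hedged sketch of the front of that pipeline, reading the comma-delimited file of high-level parameters and expanding a template into one input file per building, is shown below; the column names and placeholders are hypothetical, and the real workflow populates an XML file for the NREL Preprocessor rather than writing idf text directly:

    import csv

    # Hypothetical template with {name}-style placeholders for high-level parameters.
    with open("building_template.idf.tmpl") as f:
        template = f.read()

    with open("buildings.csv", newline="") as f:
        for row in csv.DictReader(f):          # one row of parameters per building
            idf_text = template.format(**row)  # e.g. {floor_area}, {climate_zone}
            with open(f"building_{row['building_id']}.idf", "w") as out:
                out.write(idf_text)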
NASA Astrophysics Data System (ADS)
Miller, M. E.; Elliot, W.; Billmire, M.; Robichaud, P. R.; Banach, D. M.
2017-12-01
We have built a Rapid Response Erosion Database (RRED, http://rred.mtri.org/rred/) for the continental United States to allow land managers to access properly formatted spatial model inputs for the Water Erosion Prediction Project (WEPP). Spatially-explicit process-based models like WEPP require spatial inputs that include digital elevation models (DEMs), soil, climate and land cover. The online database delivers either a 10m or 30m USGS DEM, land cover derived from the Landfire project, and soil data derived from SSURGO and STATSGO datasets. The spatial layers are projected into UTM coordinates and pre-registered for modeling. WEPP soil parameter files are also created along with linkage files to match both spatial land cover and soils data with the appropriate WEPP parameter files. Our goal is to make process-based models more accessible by preparing spatial inputs ahead of time allowing modelers to focus on addressing scenarios of concern. The database provides comprehensive support for post-fire hydrological modeling by allowing users to upload spatial soil burn severity maps, and within moments returns spatial model inputs. Rapid response is critical following natural disasters. After moderate and high severity wildfires, flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies. Mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fire, runoff, and erosion risks also are highly heterogeneous in space, creating an urgent need for rapid, spatially-explicit assessment. The database has been used to help assess and plan remediation on over a dozen wildfires in the Western US. Future plans include expanding spatial coverage, improving model input data and supporting additional models. Our goal is to facilitate the use of the best possible datasets and models to support the conservation of soil and water.
Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
An interactive computer program for automatically generating traffic models for the Space Transportation System (STS) is presented. Information concerning run stream construction, input data, and output data is provided. The flow of the interactive data stream is described. Error messages are specified, along with suggestions for remedial action. In addition, formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.
Handling Input and Output for COAMPS
NASA Technical Reports Server (NTRS)
Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine
2007-01-01
Two suites of software have been developed to handle the input and output of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), which is a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed for running the COAMPS software using other global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server computer of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then changed to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites, (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data, and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.
ChromA: signal-based retention time alignment for chromatography-mass spectrometry data.
Hoffmann, Nils; Stoye, Jens
2009-08-15
We describe ChromA, a web-based alignment tool for chromatography-mass spectrometry data from the metabolomics and proteomics domains. Users can supply their data in open and standardized file formats for retention time alignment using dynamic time warping with different configurable local distance and similarity functions. Additionally, user-defined anchors can be used to constrain and speed up the alignment. A neighborhood around each anchor can be added to increase the flexibility of the constrained alignment. ChromA offers different visualizations of the alignment for easier qualitative interpretation and comparison of the data. For the multiple alignment of more than two data files, the center-star approximation is applied to select a reference among the input files to align to. ChromA is available at http://bibiserv.techfak.uni-bielefeld.de/chroma. Executables and source code under the L-GPL v3 license are provided for download at the same location.
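Dynamic time warping, the core of ChromA's alignment, computes the cheapest monotonic pairing of two signals. A minimal generic implementation with an absolute-difference local cost (omitting ChromA's configurable distance functions and anchor constraints) looks like this:

    import numpy as np

    def dtw_distance(x, y):
        """Classic O(len(x)*len(y)) dynamic time warping with absolute-difference
        local cost; returns the accumulated alignment cost."""
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]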
Tsukamoto, Takafumi; Yasunaga, Takuo
2014-11-01
Eos (Extensible object-oriented system) is a powerful application for image processing of electron micrographs. Eos ordinarily works with only a character user interface (CUI) under operating systems such as OS X or Linux, which is not user-friendly. Users of Eos therefore need to be experts in image processing of electron micrographs and to have some knowledge of computer science as well. However, not everyone who needs Eos is comfortable with a CUI, so we extended Eos to an OS-independent web system with a graphical user interface (GUI) by integrating a web browser. The advantage of using a web browser is not only that it extends Eos with a GUI, but also that it extends Eos to work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interfaces through interface definition files called "OptionControlFile", written in CSV (comma-separated value) format; each command has an OptionControlFile that holds the information needed to generate its interface and describe its usage. The developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also accesses OptionControlFile and produces a web user interface automatically, because this mechanism is mature and convenient. The basic actions of the client-side system were implemented properly and support auto-generation of web forms, with functions for execution, image preview, and file uploading to a web server. Thus the system can execute Eos commands, each with its unique options, and perform image analysis. Two problems remain, concerning the image file format for visualization and the workspace for analysis: image file format information is useful for checking whether an input/output file is correct, and a common workspace for analysis is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the OptionControlFile of Eos. Furthermore, to solve the workspace problem, we have developed two types of system. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a web browser, and manipulates local files with the GUI in the web browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), our platform under development that works in heterogeneous distributed environments. Users can put their resources, such as microscopic images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions, which define workflows of image processing. PIONE runs each image-processing step on a suitable computer, following the defined rules. PIONE has the ability of interactive manipulation, and the user is able to try a command with various setting values. In this situation, we contribute the auto-generation of a GUI for a PIONE workflow. As an advanced function, we have developed a module to log user actions. The logs include information such as setting values in image processing, the sequence of commands, and so on. If we use the logs effectively, we can gain many advantages.
For example, when an expert discovers some know-how of image processing, other users can share logs that include this know-how, and by analyzing the logs we may obtain recommended workflows of image analysis. To implement a social platform of image processing for electron microscopists, we have developed the system infrastructure as well.
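The auto-generation mechanism Zephyr relies on can be sketched as: parse a CSV interface-definition file, one option per row, and emit a form field for each. The column layout below is invented for illustration; the real OptionControlFile schema is defined by Eos:

    import csv

    def form_fields(option_control_file):
        """Build HTML form fields from a CSV interface definition.
        Assumed columns (hypothetical): flag, type, default, help_text."""
        fields = []
        with open(option_control_file, newline="") as f:
            for row in csv.DictReader(f):
                kind = "file" if row["type"] == "infile" else "text"
                fields.append(
                    f'<label title="{row["help_text"]}">{row["flag"]} '
                    f'<input type="{kind}" name="{row["flag"]}" '
                    f'value="{row["default"]}"></label>'
                )
        return "\n".join(fields)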
Integrated Geothermal-CO2 Storage Reservoirs: FY1 Final Report
Buscheck, Thomas A.
2012-01-01
The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure, reducing the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory as the Section2.1.*.tar.gz files.
VizieR Online Data Catalog: FADO code (Gomes+, 2017)
NASA Astrophysics Data System (ADS)
Gomes, J. M.; Papaderos, P.
2017-03-01
FADO comes from the Latin word "fatum", which means fate or destiny. It is also a well-known genre of Portuguese music, and by choosing this acronym for this spectral synthesis tool we would like to pay tribute to Portugal. The main goal of FADO is to explore the star-formation and chemical enrichment history (the "Fado") of galaxies based on two hitherto unique elements in spectral fitting models: a) self-consistency between the best-fitting star formation history (SFH) and the nebular characteristics of a galaxy (e.g., hydrogen Balmer-line luminosities and equivalent widths; shape of the nebular continuum, including the Balmer and Paschen discontinuity) and b) genetic optimization and artificial intelligence algorithms. This document is part of the FADO v.1 distribution package, which contains two different ascii files, ReadMe and Read_F, and one tarball archive FADOv1.tar.gz. FADOv1.tar.gz contains the binary (executable) compiled in both OpenSuSE 13.2 64bit LINUX (FADO) and MAC OS X (FADO_MACOSX). The former is compatible with most LINUX distributions, while the latter was only tested for Yosemite 10.10.3. The tarball also contains the configuration files for running FADO, FADO.config and PLOT.config, as well as the "Simple Stellar Population" (SSP) base library with the base file list Base.BC03.L, the FADO v.1 short manual Read_F and this file (in the ReadMe directory) and, for testing purposes, three characteristic de-redshifted spectra from SDSS-DR7 in ascii format, corresponding to a star-forming (spec1.txt), composite (spec2.txt), and LINER (spec3.txt) galaxy. Auxiliary files needed for execution of FADO (.HIfboundem.ascii, .HeIIfbound.ascii, .HeIfboundem.ascii, grfont.dat and grfont.txt) are also included in the tarball. By decompressing the tarball the following six directories are created: input, output, plots, ReadMe, SSPs and tables (see below for a brief explanation). (2 data files).
A user's guide for DTIZE an interactive digitizing and graphical editing computer program
NASA Technical Reports Server (NTRS)
Thomas, C. C.
1981-01-01
A guide for DTIZE, a two-dimensional digitizing program with graphical editing capability, is presented. DTIZE provides the capability to simultaneously create and display a picture on the display screen. Data descriptions may be permanently saved in three different formats. DTIZE creates the picture graphics in locator mode, inputting one coordinate each time the terminator button is pushed. Graphic input devices (GIN) are also used to select commands from the function menu. These menu commands and the program's interactive prompting sequences provide a complete capability for creating, editing, and permanently recording a graphical picture file. DTIZE is written in the FORTRAN IV language for the Tektronix 4081 graphic system, utilizing the Plot 80 Distributed Graphics Library (DGL) subroutines. The Tektronix 4953/3954 Graphic Tablet with mouse, pen, or joystick is used as the graphics input device to create picture graphics.
A computer program for obtaining airplane configuration plots from digital Datcom input data
NASA Technical Reports Server (NTRS)
Roy, M. L.; Sliwa, S. M.
1983-01-01
A computer program is described which reads the input file for the Stability and Control Digital Datcom program and generates plots from the aircraft configuration data. These plots can be used to verify the geometric input data to the Digital Datcom program. The program described interfaces with utilities available for plotting aircraft configurations by creating a file from the Digital Datcom input data.
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
NASA Technical Reports Server (NTRS)
Hildreth, Bruce L.; Jackson, E. Bruce
2009-01-01
The American Institute of Aeronautics and Astronautics (AIAA) Modeling and Simulation Technical Committee is in final preparation of a new standard for the exchange of flight dynamics models. The standard will become an ANSI standard and is under consideration for submission to ISO for acceptance by the international community. The standard has some aspects that should provide benefits to the simulation training community. Use of the new standard by the training simulation community will reduce development, maintenance, and technical refresh investment on each device. Furthermore, it will significantly lower the cost of performing model updates to improve fidelity or expand the envelope of the training device. Higher flight fidelity should result in better transfer of training, a direct benefit to the pilots under instruction. Costs of adopting the standard are minimal and should be paid back within the cost of the first use for that training device. The standard achieves these advantages by making it easier to update the aerodynamic model. It provides a standard format for the model in a custom eXtensible Markup Language (XML) grammar, the Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML). It employs an existing XML grammar, MathML, to describe the aerodynamic model in an input data file, eliminating the requirement for actual software compilation. The major components of the aero model become simply an input data file, and updates are simply new XML input files. It includes naming and axis system conventions to further simplify the exchange of information.
The Lagrangian particle dispersion model FLEXPART version 10
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Sollum, Espen; Grythe, Henrik; Kristiansen, Nina; Cassiani, Massimo; Eckhardt, Sabine; Thompson, Rona; Groot Zwaaftnik, Christine; Evangeliou, Nikolaos; Hamburger, Thomas; Sodemann, Harald; Haimberger, Leopold; Henne, Stephan; Brunner, Dominik; Burkhart, John; Fouilloux, Anne; Fang, Xuekun; Phillip, Anne; Seibert, Petra; Stohl, Andreas
2017-04-01
The Lagrangian particle dispersion model FLEXPART was originally designed, in its first release in 1998, for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. The model has since evolved into a comprehensive tool for atmospheric transport modelling and analysis. Its application fields have been extended to a range of atmospheric transport processes for both atmospheric gases and aerosols, e.g. greenhouse gases, short-lived climate forcers like black carbon, volcanic ash and gases, as well as studies of the water cycle. We present the newest release, FLEXPART version 10. Since the last publication fully describing FLEXPART (version 6.2), the model code has been parallelised to allow for faster computation. A new, more detailed gravitational settling parametrisation for aerosols was implemented, and the wet deposition scheme for aerosols has been heavily modified and updated to provide a more accurate representation of this physical process. In addition, an optional new turbulence scheme for the convective boundary layer is available that considers the skewness in the vertical velocity distribution. Also, temporal variation and temperature dependence of the OH reaction are included. Finally, user input files have been updated to a more convenient and user-friendly namelist format, and the option to produce the output files in netCDF format instead of binary format has been implemented. We present these new developments and show recent model applications. Moreover, we also introduce some tools for the preparation of the meteorological input data, as well as for the processing of FLEXPART output data.
Burns, A.W.
1988-01-01
This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water-quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water-law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are attachments to the report. (USGS)
Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide
NASA Technical Reports Server (NTRS)
Damico, S. J.
1980-01-01
Use of the Raw-to-Processed SINDA (System Improved Numerical Differencing Analyzer) program, RTOPHS, which provides a means of making the temperature prediction data on binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy-to-use format, is discussed. The program accomplishes this by reading the HISTRY unit; according to user input instructions, the desired times and temperature prediction data are extracted and written to a word-addressable drum file.
NASA Technical Reports Server (NTRS)
Shyam, Vikram
2010-01-01
A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company (PDC)) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple block per processor scenario.
NIF Ignition Target 3D Point Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O; Marinak, M; Milovich, J
2008-11-05
We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.
NASA Astrophysics Data System (ADS)
Swade, Daryl; Bushouse, Howard; Greene, Gretchen; Swam, Michael
2014-07-01
Science data products for James Webb Space Telescope (JWST) observations will be generated by the Data Management Subsystem (DMS) within the JWST Science and Operations Center (S&OC) at the Space Telescope Science Institute (STScI). Data processing pipelines within the DMS will produce uncalibrated and calibrated exposure files, as well as higher-level data products that result from combined exposures, such as mosaic images. Information to support the science observations, for example data from engineering telemetry, proposer inputs, and observation planning, will be captured and incorporated into the science data products. All files will be generated in Flexible Image Transport System (FITS) format. The data products will be made available through the Mikulski Archive for Space Telescopes (MAST) and adhere to International Virtual Observatory Alliance (IVOA) standard data protocols.
NASA Astrophysics Data System (ADS)
Heller, René
2018-03-01
The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
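A hedged sketch of the encoding step, reading an ASCII (P1) PBM bitmap and flattening its pixels into the fixed-width bit strings that make up one page of the message, is given below; the page-header calculation (the little-endian tempo-spatial yardstick) is omitted, and the helper names are ours, not the SETI Encryption code's:

    def read_plain_pbm(path):
        """Read an ASCII (P1) PBM file and return its bits as one string.
        Assumes no '#' comment lines in the header."""
        with open(path) as f:
            tokens = f.read().split()
        assert tokens[0] == "P1", "plain PBM expected"
        width, height = int(tokens[1]), int(tokens[2])
        bits = "".join(tokens[3:])
        assert len(bits) == width * height
        return bits

    def to_page(bits, lines=757, bits_per_line=359):
        """Chunk a bit stream into fixed-width strings, one page of the message
        (757 strings of 359 bits, as described above)."""
        limit = min(len(bits), lines * bits_per_line)
        return [bits[i:i + bits_per_line] for i in range(0, limit, bits_per_line)]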
Compression of next-generation sequencing quality scores using memetic algorithm
2014-01-01
Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs compression codebook using MA based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
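The substitutional coding step can be illustrated apart from the memetic-algorithm optimization: given a codebook of representative quality-score strings, each input sequence is replaced by the index of its codeword, with a literal fallback to keep the scheme lossless. The sketch below builds a trivial frequency-based codebook as a stand-in for the MA-optimized one:

    from collections import Counter

    def build_codebook(sequences, size=16):
        """Most frequent quality-score strings serve as codewords (a stand-in
        for the memetic-algorithm-optimized codebook)."""
        return [seq for seq, _ in Counter(sequences).most_common(size)]

    def encode(sequences, codebook):
        """Replace each sequence by a codebook index; fall back to a literal
        so the scheme stays lossless."""
        index = {seq: i for i, seq in enumerate(codebook)}
        return [("ref", index[s]) if s in index else ("lit", s) for s in sequences]

    quals = ["IIIIHHH", "IIIIHHH", "GGFFII!", "IIIIHHH"]
    print(encode(quals, build_codebook(quals)))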
Flow prediction for propfan engine installation effects on transport aircraft at transonic speeds
NASA Technical Reports Server (NTRS)
Samant, S. S.; Yu, N. J.
1986-01-01
An Euler-based method for aerodynamic analysis of turboprop transport aircraft at transonic speeds has been developed. In this method, inviscid Euler equations are solved over surface-fitted grids constructed about aircraft configurations. Propeller effects are simulated by specifying sources of momentum and energy on an actuator disc located in place of the propeller. A stripwise boundary layer procedure is included to account for the viscous effects. A preliminary version of an approach to embed the exhaust plume within the global Euler solution has also been developed for more accurate treatment of the exhaust flow. The resulting system of programs is capable of handling wing-body-nacelle-propeller configurations. The propeller disks may be tractors or pushers and may represent single or counterrotation propellers. Results from analyses of three test cases of interest (a wing alone, a wing-body-nacelle model, and a wing-nacelle-endplate model) are presented. A user's manual for executing the system of computer programs with formats of various input files, sample job decks, and sample input files is provided in appendices.
User Manual for the PROTEUS Mesh Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Shemon, Emily R
2016-09-19
PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. The NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable for PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS, while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as "mesh" input for any of the mesh tools discussed in this manual.
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
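A minimal sketch of the classification-based steering idea follows: observe recent file offsets, classify the access pattern, and select caching and prefetching policies accordingly. The classifier and policy names are placeholders, not the paper's framework:

    def classify(offsets):
        """Crude access-pattern classifier: strictly increasing fixed-stride
        offsets look sequential; anything else is treated as random."""
        strides = {b - a for a, b in zip(offsets, offsets[1:])}
        return "sequential" if len(strides) == 1 and strides.pop() > 0 else "random"

    def choose_policy(offsets):
        """Map the detected pattern to caching/prefetching policies (placeholders)."""
        if classify(offsets) == "sequential":
            return {"prefetch": "read-ahead", "cache": "drop-behind"}
        return {"prefetch": "none", "cache": "lru"}

    print(choose_policy([0, 4096, 8192, 12288]))   # -> read-ahead policies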
Data handling with SAM and art at the NOvA experiment
Aurisano, A.; Backhouse, C.; Davies, G. S.; ...
2015-12-23
During operations, NOvA produces between 5,000 and 7,000 raw files per day, with peaks in excess of 12,000. These files must be processed in several stages to produce fully calibrated and reconstructed analysis files. In addition, many simulated neutrino interactions must be produced and processed through the same stages as data. To accommodate the large volume of data and Monte Carlo, production must be possible both on the Fermilab grid and on off-site farms, such as the ones accessible through the Open Science Grid. To handle the challenge of cataloging these files and to facilitate their off-line processing, we have adopted the SAM system developed at Fermilab. SAM indexes files according to metadata, keeps track of each file's physical locations, provides dataset management facilities, and facilitates data transfer to off-site grids. To integrate SAM with Fermilab's art software framework and the NOvA production workflow, we have developed methods to embed metadata into our configuration files, art files, and standalone ROOT files. A module in the art framework propagates the embedded information from configuration files into art files, and from input art files to output art files, allowing us to maintain a complete processing history within our files. Embedding metadata in configuration files also allows configuration files indexed in SAM to be used as inputs to Monte Carlo production jobs. Further, SAM keeps track of the input files used to create each output file. Parentage information enables the construction of self-draining datasets, which have become the primary production paradigm used at NOvA. In this paper we present an overview of SAM at NOvA and how it has transformed the file-production framework used by the experiment.
de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D
2013-05-24
Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
Software for Preprocessing Data from Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Cheng, Chiu-Fu
2004-01-01
Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris). EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PV-WAVE based plotting software.
Digital geologic map of part of the Thompson Falls 1:100,000 quadrangle, Idaho
Lewis, Reed S.; Derkey, Pamela D.
1999-01-01
The geology of the Thompson Falls 1:100,000 quadrangle, Idaho was compiled by Reed S. Lewis in 1997 onto a 1:100,000-scale greenline mylar of the topographic base map for input into a geographic information system (GIS). The resulting digital geologic map GIS can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included: they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000). The map area is located in north Idaho. This open-file report describes the geologic map units, the methods used to convert the geologic map data into a digital format, the Arc/Info GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet.
BOREAS TGB-12 Soil Carbon and Flux Data of NSA-MSA in Raster Format
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Knapp, David E. (Editor); Rapalee, Gloria; Davidson, Eric; Harden, Jennifer W.; Trumbore, Susan E.; Veldhuis, Hugo
2000-01-01
The BOREAS TGB-12 team made measurements of soil carbon inventories, carbon concentration in soil gases, and rates of soil respiration at several sites. This data set provides: (1) estimates of soil carbon stocks by horizon based on soil survey data and analyses of data from individual soil profiles; (2) estimates of soil carbon fluxes based on stocks, fire history, drainage, and soil carbon inputs and decomposition constants based on field work using radiocarbon analyses; (3) fire history data estimating age ranges of time since last fire; and (4) a raster image and an associated soils table file from which area-weighted maps of soil carbon and fluxes and fire history may be generated. This data set was created from raster files, soil polygon data files, and detailed lab analysis of soils data that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. Also used were soils data from Susan Trumbore and Jennifer Harden (BOREAS TGB-12). The binary raster file covers a 733-km² area within the NSA-MSA.
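As a rough illustration of how such a raster-plus-table product can be used (this is not the BOREAS team's code; the map-unit codes, carbon values, and cell size below are invented), an area-weighted soil-carbon summary can be computed by mapping raster map-unit codes through the soils table:

```python
# Minimal sketch: derive an area-weighted soil-carbon figure from a raster
# of map-unit codes plus a lookup table. All values are hypothetical.
import numpy as np

# Hypothetical raster of soil map-unit codes (0 = no data).
raster = np.array([[1, 1, 2],
                   [2, 3, 3],
                   [0, 3, 3]])

# Hypothetical soils table: map-unit code -> soil carbon stock (kg C/m^2).
carbon_by_unit = {1: 9.5, 2: 14.2, 3: 21.7}

cell_area_km2 = 1.0  # assumed grid-cell area

# Build a carbon map, then an area-weighted mean over valid cells.
carbon = np.zeros(raster.shape)
for code, value in carbon_by_unit.items():
    carbon[raster == code] = value

valid = raster > 0
weighted_mean = carbon[valid].mean()   # equal-area cells -> simple mean
total_area = valid.sum() * cell_area_km2
print(f"area-weighted mean carbon: {weighted_mean:.1f} kg C/m^2 "
      f"over {total_area:.0f} km^2")
```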
Software for Preprocessing Data From Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Cheng, Chiu-Fu
2003-01-01
Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: (1) Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. (2) QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot. (3) EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PVWAVE based plotting software.
The Application of a Statistical Analysis Software Package to Explosive Testing
1993-12-01
Only fragments of this record's abstract survive. A notation key: M refers to equation 2; s refers to equation 3; G refers to section 2.1; the standard deviation is not corrected for test interval. Table-of-contents entries: Appendix I, Program Structured Diagrams; Appendix II, Bruceton Reference Graphs; Appendix III, Input and Output Data File Format; Appendix IV. A further fragment notes that a value is read directly from Graph II, which has been digitised and incorporated into the program, and that if M falls below 0.3, the curve closest to diff (eq. 3a) is used.
2011-01-01
Only fragments of this record's abstract survive: "…all panels of a test were recorded, it was reduced into text format and then input into the code. … Current Development: The capabilities … due to fragmentation. Any or all of these models can be activated for a particular lethality assessment. Incapacitation criteria of different times … defined for all fragments represented in the file. Only the fragment material density needs to be set by the user. 3DPIMMS accounts for some statistical …"
ESCHER: An interactive mesh-generating editor for preparing finite-element input
NASA Technical Reports Server (NTRS)
Oakes, W. R., Jr.
1984-01-01
ESCHER is an interactive mesh generation and editing program designed to help the user create a finite-element mesh, create additional input for finite-element analysis, including initial conditions, boundary conditions, and slidelines, and generate a NEUTRAL FILE that can be postprocessed for input into several finite-element codes, including ADINA, ADINAT, DYNA, NIKE, TSAAS, and ABAQUS. Two important ESCHER capabilities, interactive geometry creation and mesh archival storage, are described in detail. Also described is the interactive command language and the use of interactive graphics. The archival storage and restart file is a modular, entity-based mesh data file. Modules of this file correspond to separate editing modes in the mesh editor, with data definition syntax preserved between the interactive commands and the archival storage file. Because ESCHER was expected to be highly interactive, extensive user documentation was provided in the form of an interactive HELP package.
An installed nacelle design code using a multiblock Euler solver. Volume 2: User guide
NASA Technical Reports Server (NTRS)
Chen, H. C.
1992-01-01
This is a user manual for the general multiblock Euler design (GMBEDS) code. The code is for the design of a nacelle installed on a geometrically complex configuration, such as a complete airplane with wing/body/nacelle/pylon. It consists of two major building blocks: a design module developed by LaRC using directive iterative surface curvature (DISC), and a general multiblock Euler (GMBE) flow solver. The flow field surrounding a complex configuration is divided into a number of topologically simple blocks to facilitate surface-fitted grid generation and improve flow solution efficiency. This user guide provides input data formats along with examples of input files and a Unix script for program execution in the UNICOS environment.
NASA Technical Reports Server (NTRS)
Cross, P. L.
1994-01-01
Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
User guide for MODPATH version 6 - A particle-tracking model for MODFLOW
Pollock, David W.
2012-01-01
MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
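The semianalytical scheme is the distinctive element here: with face-normal velocities interpolated linearly across each cell, a particle's exit time through a cell face has a closed-form (logarithmic) expression. The Python sketch below is a simplified single-cell illustration of that idea, not MODPATH code; the cell geometry and face velocities are invented.

```python
# Single-cell semianalytical tracking sketch: linear velocity interpolation
# per axis gives v(x) = v_lo + A*(x - lo), so exit times follow from a log.
import math

def axis_velocity(p, lo, hi, v_lo, v_hi):
    A = (v_hi - v_lo) / (hi - lo)          # velocity gradient on this axis
    return A, v_lo + A * (p - lo)          # gradient and particle velocity

def axis_exit_time(p, lo, hi, v_lo, v_hi, eps=1e-12):
    """Smallest positive time to reach either face along one axis."""
    A, vp = axis_velocity(p, lo, hi, v_lo, v_hi)
    if abs(vp) < eps:
        return math.inf                    # stagnant on this axis
    if abs(A) < eps:                       # uniform velocity: linear travel
        face = hi if vp > 0 else lo
        return (face - p) / vp
    times = [math.log(vf / vp) / A for vf in (v_lo, v_hi) if vf / vp > 0]
    return min((t for t in times if t > eps), default=math.inf)

def axis_position(p, lo, hi, v_lo, v_hi, t):
    A, vp = axis_velocity(p, lo, hi, v_lo, v_hi)
    if abs(A) < 1e-12:
        return p + vp * t
    return lo + (vp * math.exp(A * t) - v_lo) / A

# One tracking step in a hypothetical 10 m x 10 m cell.
x, y = 2.0, 3.0
tx = axis_exit_time(x, 0.0, 10.0, 1.0, 2.0)    # x faces: v = 1.0 -> 2.0 m/s
ty = axis_exit_time(y, 0.0, 10.0, 0.5, 0.1)    # y faces: v = 0.5 -> 0.1 m/s
te = min(tx, ty)                               # particle exits at this time
exit_pt = (axis_position(x, 0.0, 10.0, 1.0, 2.0, te),
           axis_position(y, 0.0, 10.0, 0.5, 0.1, te))
print(f"exit after {te:.3f} s at {exit_pt}")
```

Repeating this cell-to-cell, terminating at boundaries or sinks, is the tracking loop the report describes.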
User’s guide for MapMark4GUI—A graphical user interface for the MapMark4 R package
Shapiro, Jason
2018-05-29
MapMark4GUI is an R graphical user interface (GUI) developed by the U.S. Geological Survey to support user implementation of the MapMark4 R statistical software package. MapMark4 was developed by the U.S. Geological Survey to implement probability calculations for simulating undiscovered mineral resources in quantitative mineral resource assessments. The GUI provides an easy-to-use tool to input data, run simulations, and format output results for the MapMark4 package. The GUI is written and accessed in the R statistical programming language. This user’s guide includes instructions on installing and running MapMark4GUI and descriptions of the statistical output processes, output files, and test data files.
The ZPIC educational code suite
NASA Astrophysics Data System (ADS)
Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.
2017-10-01
Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems that will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
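For orientation, the core loop of a 1D electrostatic PIC code of the kind included in such a suite fits in a few dozen lines. The NumPy sketch below (deposit charge, solve Poisson's equation spectrally, push particles with leapfrog) is a generic textbook illustration in normalized units, not ZPIC itself, and omits ZDF output entirely:

```python
# Textbook 1D electrostatic PIC loop: CIC deposit, spectral field solve,
# leapfrog push. Normalized units (plasma frequency = 1, electron q/m = -1).
import numpy as np

ng, n_part, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(0)

x = rng.uniform(0, L, n_part)          # electron positions
v = 0.01 * np.sin(x)                   # small velocity perturbation

for step in range(200):
    # 1) deposit electron density with linear (cloud-in-cell) weighting
    g = x / dx
    i = g.astype(int) % ng
    w = g - np.floor(g)
    ne = np.zeros(ng)
    np.add.at(ne, i, 1 - w)
    np.add.at(ne, (i + 1) % ng, w)
    ne *= ng / n_part                  # normalize mean density to 1

    # 2) field solve in Fourier space: ik E_k = rho_k, rho = ions - electrons
    rho = 1.0 - ne
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    E = np.fft.ifft(E_k).real

    # 3) gather E at particles and leapfrog push
    Ep = E[i] * (1 - w) + E[(i + 1) % ng] * w
    v -= Ep * dt                       # dv/dt = (q/m) E = -E for electrons
    x = (x + v * dt) % L

print("final kinetic energy:", 0.5 * np.mean(v**2))
```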
Aircraft signal definition for flight safety system monitoring system
NASA Technical Reports Server (NTRS)
Gibbs, Michael (Inventor); Omen, Debi Van (Inventor)
2003-01-01
A system and method compares combinations of vehicle variable values against known combinations of potentially dangerous vehicle input signal values. Alarms and error messages are selectively generated based on such comparisons. An aircraft signal definition is provided to enable definition and monitoring of sets of aircraft input signals to customize such signals for different aircraft. The input signals are compared against known combinations of potentially dangerous values by operational software and hardware of a monitoring function. The aircraft signal definition is created using a text editor or custom application. A compiler receives the aircraft signal definition to generate a binary file that comprises the definition of all the input signals used by the monitoring function. The binary file also contains logic that specifies how the inputs are to be interpreted. The file is then loaded into the monitor function, where it is validated and used to continuously monitor the condition of the aircraft.
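Schematically, and purely as a hypothetical sketch rather than the patented implementation, the compiled definition-plus-logic idea can be pictured as a table of typed signals and a list of dangerous-combination rules that a monitor evaluates against live values:

```python
# Hypothetical sketch: a signal definition maps named inputs to types, and
# rules pair predicates over input combinations with alarm/error messages.
SIGNAL_DEFINITION = {
    "gear_down":    {"type": bool},
    "radar_alt_ft": {"type": float},
    "airspeed_kts": {"type": float},
}

DANGEROUS_COMBINATIONS = [
    (lambda s: not s["gear_down"] and s["radar_alt_ft"] < 500.0,
     "ALARM: below 500 ft with gear up"),
    (lambda s: s["gear_down"] and s["airspeed_kts"] > 250.0,
     "ERROR: gear extended above placard speed"),
]

def monitor(signals):
    """Validate inputs against the definition, then evaluate each rule."""
    for name, spec in SIGNAL_DEFINITION.items():
        if not isinstance(signals[name], spec["type"]):
            raise TypeError(f"bad type for input signal {name!r}")
    return [msg for rule, msg in DANGEROUS_COMBINATIONS if rule(signals)]

print(monitor({"gear_down": False, "radar_alt_ft": 320.0,
               "airspeed_kts": 180.0}))
```

Per the abstract, the real system compiles such a definition to a binary file per aircraft type so the monitoring function itself stays generic.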
xLPR Sim Editor 1.0 User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mariner, Paul E.
2017-03-01
The United States Nuclear Regulatory Commission, in cooperation with the Electric Power Research Institute, contracted Sandia National Laboratories to develop the framework of a probabilistic fracture mechanics assessment code called xLPR (Extremely Low Probability of Rupture) Version 2.0. The purpose of xLPR is to evaluate degradation mechanisms in piping systems at nuclear power plants and to predict the probability of rupture. This report is a user's guide for xLPR Sim Editor 1.0, a graphical user interface for creating and editing the xLPR Version 2.0 input file and for creating, editing, and using the xLPR Version 2.0 database files. The xLPR Sim Editor provides a user-friendly way for users to change simulation options and input values, select input datasets from xLPR databases, identify inputs needed for a simulation, and create and modify an input file for xLPR.
Extra dimensions: 3d and time in pdf documentation
NASA Astrophysics Data System (ADS)
Graf, N. A.
2008-07-01
High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.
Extra Dimensions: 3D and Time in PDF Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Norman A.; /SLAC
2011-11-10
High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.
Park, Sang-Jun; Lee, Jumin; Patel, Dhilon S; Ma, Hongjing; Lee, Hui Sun; Jo, Sunhwan; Im, Wonpil
2017-10-01
Glycans play a central role in many essential biological processes. Glycan Reader was originally developed to simplify the reading of Protein Data Bank (PDB) files containing glycans through the automatic detection and annotation of sugars and glycosidic linkages between sugar units and to proteins, all based on atomic coordinates and connectivity information. Carbohydrates can have various chemical modifications at different positions, making their chemical space much more diverse. Unfortunately, current PDB files do not provide exact annotations for most carbohydrate derivatives, and more than 50% of PDB glycan chains have at least one carbohydrate derivative that could not be correctly recognized by the original Glycan Reader. Glycan Reader has been improved and now identifies most sugar types and chemical modifications (including various glycolipids) in the PDB, and both PDB and PDBx/mmCIF formats are supported. CHARMM-GUI Glycan Reader is updated to generate the simulation system and input of various glycoconjugates with most sugar types and chemical modifications. It also offers a new functionality to edit the glycan structures through addition/deletion/modification of glycosylation types, sugar types, chemical modifications, glycosidic linkages, and anomeric states. The simulation system and input files can be used for CHARMM, NAMD, GROMACS, AMBER, GENESIS, LAMMPS, Desmond, OpenMM, and CHARMM/OpenMM. Glycan Fragment Database in GlycanStructure.Org is also updated to provide an intuitive glycan sequence search tool for complex glycan structures with various chemical modifications in the PDB. http://www.charmm-gui.org/input/glycan and http://www.glycanstructure.org. wonpil@lehigh.edu. Supplementary data are available at Bioinformatics online.
Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahler, Albert Comstock
We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.
ChromA: signal-based retention time alignment for chromatography–mass spectrometry data
Hoffmann, Nils; Stoye, Jens
2009-01-01
Summary: We describe ChromA, a web-based alignment tool for chromatography–mass spectrometry data from the metabolomics and proteomics domains. Users can supply their data in open and standardized file formats for retention time alignment using dynamic time warping with different configurable local distance and similarity functions. Additionally, user-defined anchors can be used to constrain and speed up the alignment. A neighborhood around each anchor can be added to increase the flexibility of the constrained alignment. ChromA offers different visualizations of the alignment for easier qualitative interpretation and comparison of the data. For the multiple alignment of more than two data files, the center-star approximation is applied to select a reference among input files to align to. Availability: ChromA is available at http://bibiserv.techfak.uni-bielefeld.de/chroma. Executables and source code under the L-GPL v3 license are provided for download at the same location. Contact: stoye@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19505941
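The heart of the method is ordinary dynamic time warping. The Python sketch below shows the basic cost-matrix recurrence, with an absolute-difference local distance standing in for ChromA's configurable distance and similarity functions; the two signals are invented:

```python
# Minimal dynamic time warping (DTW): fill a cumulative cost matrix where
# each cell extends the cheapest of match, insertion, or deletion.
import numpy as np

def dtw(a, b):
    """Return the DTW cumulative cost matrix for two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local distance
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[1:, 1:]

# Two toy chromatogram intensity traces with a retention-time shift.
tic_run1 = np.array([0.0, 1.0, 3.0, 4.0, 2.0, 0.5])
tic_run2 = np.array([0.0, 0.5, 1.2, 3.1, 3.9, 2.1, 0.4])
D = dtw(tic_run1, tic_run2)
print("alignment cost:", D[-1, -1])
```

The user-defined anchors the abstract mentions correspond to constraining which (i, j) cells the recurrence is allowed to visit.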
Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe370mcfarlane
Riihimaki, Laura; Shippert, Timothy
2014-11-05
The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.
Broadband Heating Rate Profile Project (BBHRP) - SGP 1bbhrpripbe1mcfarlane
Riihimaki, Laura; Shippert, Timothy
2014-11-05
The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.
Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe1mcfarlane
Riihimaki, Laura; Shippert, Timothy
2014-11-05
The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
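A minimal sketch of the sampling step follows, assuming hypothetical parameter names, distributions, and keyword layout; a real batch file must follow the RADTRAN 6.0 input format, which is not reproduced here:

```python
# Conceptual sketch: draw each distributed parameter, then write one
# batch entry per sampled case. Everything below is illustrative only.
import random

random.seed(42)
PARAMETERS = {
    "SHIPMENT_SPEED": ("uniform", 88.0, 105.0),      # km/h, assumed bounds
    "STOP_TIME":      ("triangular", 0.2, 0.5, 1.0), # h: low, mode, high
}

def sample(spec):
    kind, *args = spec
    if kind == "uniform":
        return random.uniform(*args)
    if kind == "triangular":
        low, mode, high = args
        return random.triangular(low, high, mode)
    raise ValueError(f"unknown distribution {kind!r}")

with open("radtran_batch.inp", "w") as f:
    for run in range(1, 101):                        # 100 sampled cases
        f.write(f"& CASE {run}\n")
        for name, spec in PARAMETERS.items():
            f.write(f"  {name} = {sample(spec):.4f}\n")
```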
Development of climate data input files for the Mechanistic-Empirical Pavement Design Guide (MEPDG).
DOT National Transportation Integrated Search
2011-06-30
Prior to this effort, Mississippi's MEPDG climate files were limited to 12 weather stations in only 10 counties, and only seven weather stations had over 8 years (100 months) of data. Hence, building MEPDG climate input datasets improves modeling accuracy...
Turbomachinery Forced Response Prediction System (FREPS): User's Manual
NASA Technical Reports Server (NTRS)
Morel, M. R.; Murthy, D. V.
1994-01-01
The turbomachinery forced response prediction system (FREPS), version 1.2, is capable of predicting the aeroelastic behavior of axial-flow turbomachinery blades. This document is meant to serve as a guide in the use of the FREPS code with specific emphasis on its use at NASA Lewis Research Center (LeRC). A detailed explanation of the aeroelastic analysis and its development is beyond the scope of this document, and may be found in the references. FREPS has been developed by the NASA LeRC Structural Dynamics Branch. The manual is divided into three major parts: an introduction, the preparation of input, and the procedure to execute FREPS. Part 1 includes a brief background on the necessity of FREPS, a description of the FREPS system, the steps needed to be taken before FREPS is executed, an example input file with instructions, presentation of the geometric conventions used, and the input/output files employed and produced by FREPS. Part 2 contains a detailed description of the command names needed to create the primary input file that is required to execute the FREPS code. Also, Part 2 has an example data file to aid the user in creating their own input files. Part 3 explains the procedures required to execute the FREPS code on the Cray Y-MP, a computer system available at the NASA LeRC.
FlaME: Flash Molecular Editor - a 2D structure input tool for the web.
Dallakian, Pavel; Haider, Norbert
2011-02-01
So far, there have been no Flash-based web tools available for chemical structure input. The authors herein present a feasibility study, aiming at the development of a compact and easy-to-use 2D structure editor, using Adobe's Flash technology and its programming language, ActionScript. As a reference model application from the Java world, we selected the Java Molecular Editor (JME). In this feasibility study, we made an attempt to realize a subset of JME's functionality in the Flash Molecular Editor (FlaME) utility. These basic capabilities are: structure input, editing and depiction of single molecules, data import and export in molfile format. The result of molecular diagram sketching in FlaME is accessible in V2000 molfile format. By integrating the molecular editor into a web page, its communication with the HTML elements on this page is established using the two JavaScript functions, getMol() and setMol(). In addition, structures can be copied to the system clipboard. A first attempt was made to create a compact single-file application for 2D molecular structure input/editing on the web, based on Flash technology. With the application examples presented in this article, it could be demonstrated that the Flash methods are principally well-suited to provide the requisite communication between the Flash object (application) and the HTML elements on a web page, using JavaScript functions.
76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-21
DEPARTMENT OF ENERGY, Federal Energy Regulatory Commission [Docket No. RM07-16-000]. Filing via the Internet; Notice of Additional File Formats for efiling. Take notice that the Commission has added to its list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010...
An Efficient Method for Verifying Gyrokinetic Microstability Codes
NASA Astrophysics Data System (ADS)
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful ``apples-to-apples'' comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
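The orchestration layer might look like the skeleton below; every function named here is a hypothetical placeholder for the python routines the abstract describes, not part of GYRO, GS2, TRANSP, or ONETWO:

```python
# Skeleton of the automated comparison workflow: generate inputs per
# (radius, time), translate between codes, submit, and collect results.
# The four functions are stubs standing in for the routines described.
from pathlib import Path

RADII = [0.3, 0.5, 0.7]     # normalized radii to compare
TIMES = [1.2, 1.6]          # analysis times (s)

def make_gyro_input(analysis, r, t, workdir): ...   # from TRANSP/ONETWO
def gyro_to_gs2(gyro_input, workdir): ...           # input translation
def submit(code, workdir): ...                      # queue a job
def collect(workdir): ...                           # growth rate, frequency

results = []
for r in RADII:
    for t in TIMES:
        wd = Path(f"run_r{r}_t{t}")
        wd.mkdir(exist_ok=True)
        gyro_in = make_gyro_input("transp_analysis.cdf", r, t, wd)
        gs2_in = gyro_to_gs2(gyro_in, wd)
        submit("gyro", wd)
        submit("gs2", wd)
        results.append((r, t, collect(wd)))
# ...then tabulate `results` for code-to-code comparison plots.
```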
Program Description: EDIT Program and Vendor Master Update, SWRL Financial System.
ERIC Educational Resources Information Center
Ikeda, Masumi
Computer routines to edit input data for the Southwest Regional Laboratory's (SWRL) Financial System are described. The program is responsible for validating input records, generating records for further system processing, and updating the Vendor Master File--a file containing the information necessary to support the accounts payable and…
NASA Technical Reports Server (NTRS)
Denn, F. M.
1978-01-01
Geometric input plotting to the VORLAX computer program by means of an interactive remote terminal is reported. The software consists of a procedure file and two programs. The programs and procedure file are described and a sample execution is presented.
Finite difference time domain grid generation from AMC helicopter models
NASA Technical Reports Server (NTRS)
Cravey, Robin L.
1992-01-01
A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.
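The essential operation is deciding, for each grid cell, whether its center falls inside the polygonal cross section at that fuselage station. A minimal Python sketch of that test and the resulting per-section voxelization follows; the polygon is invented and AMC file parsing is omitted:

```python
# Sketch of the cross-section-to-cubic-grid idea: cells whose centers lie
# inside the station polygon are marked as conductor. Standard ray casting.
def inside(px, py, poly):
    """Ray-casting point-in-polygon test (poly = list of (x, y) vertices)."""
    hit = False
    n = len(poly)
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        if (y1 > py) != (y2 > py):                    # edge spans the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                hit = not hit
    return hit

def voxelize_section(poly, nx, ny, cell):
    """Mark cells of an nx-by-ny grid whose centers lie inside poly."""
    return [[inside((i + 0.5) * cell, (j + 0.5) * cell, poly)
             for i in range(nx)] for j in range(ny)]

# A hypothetical diamond-shaped fuselage cross section, 1 m grid cells.
section = [(5.0, 0.0), (10.0, 4.0), (5.0, 8.0), (0.0, 4.0)]
grid = voxelize_section(section, nx=10, ny=8, cell=1.0)
print(sum(map(sum, grid)), "cells marked as conductor")
```

Stacking one such layer per cross section along the fuselage axis yields the cubic grid handed to the FDTD solver.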
Java Image I/O for VICAR, PDS, and ISIS
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Levoe, Steven R.
2011-01-01
This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises the low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) to provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
NASA Technical Reports Server (NTRS)
Kotler, R. S.
1983-01-01
The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
Data reduction software for LORAN-C flight test evaluation
NASA Technical Reports Server (NTRS)
Fischer, J. P.
1979-01-01
A set of programs designed to be run on an IBM 370/158 computer to read the recorded time differences from the tape produced by the LORAN data collection system, convert them to latitude/longitude, and produce various plotting input files is described. The programs were written so they may be tailored easily to meet the demands of a particular data reduction job. The tape reader program is written in 370 assembler language and the remaining programs are written in standard IBM FORTRAN-IV language. The tape reader program is dependent upon the recording format used by the data collection system and on the I/O macros used at the computing facility. The other programs are generally device-independent, although the plotting routines are dependent upon the plotting method used. The data reduction programs convert the recorded data to a more readily usable form: they convert the time difference (TD) numbers to latitude/longitude (lat/long), format a printed listing of the TDs, lat/long, reference times, and other information derived from the data, and produce data files which may be used for subsequent plotting.
Gene Graphics: a genomic neighborhood data visualization web application.
Harrison, Katherine J; Crécy-Lagard, Valérie de; Zallot, Rémi
2018-04-15
The examination of gene neighborhood is an integral part of comparative genomics, but no tools to produce publication-quality graphics of gene clusters are available. Gene Graphics is a straightforward web application for creating such visuals. Supported inputs include National Center for Biotechnology Information gene and protein identifiers with automatic fetching of neighboring information, GenBank files, and data extracted from the SEED database. Gene representations can be customized for many parameters, including gene and genome names, colors, and sizes. Gene attributes can be copied and pasted for rapid and user-friendly customization of homologous genes between species. In addition to Portable Network Graphics and Scalable Vector Graphics, produced representations can be exported as Tagged Image File Format or Encapsulated PostScript, formats that are standard for publication. Hands-on tutorials with real-life examples inspired by publications are available for training. Gene Graphics is freely available at https://katlabs.cc/genegraphics/ and source code is hosted at https://github.com/katlabs/genegraphics. katherinejh@ufl.edu or remizallot@ufl.edu. Supplementary data are available at Bioinformatics online.
TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data.
Clark, Lindsay V; Sacks, Erik J
2016-01-01
In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously-mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output is in CSV format so that it can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume hard drive space with intermediate files, and does not require programming skill to use.
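As a stripped-down illustration of what barcode_splitter.py is described as doing (this is not TagDigger's actual code; the file names and barcodes are invented), splitting a FASTQ file by inline barcode and checksumming the outputs can be sketched as:

```python
# Sketch: route FASTQ records to per-sample files by inline barcode, trim
# the barcode from sequence and quality lines, then emit an MD5 per output.
import hashlib

BARCODES = {"ACGT": "sampleA.fastq", "TGCA": "sampleB.fastq"}

# Write a tiny two-read FASTQ so the example is self-contained.
with open("run1.fastq", "w") as f:
    f.write("@read1\nACGTTTTTGCA\n+\nIIIIIIIIIII\n"
            "@read2\nTGCAGGGGACG\n+\nIIIIIIIIIII\n")

def split_fastq(path):
    outs = {bc: open(fn, "w") for bc, fn in BARCODES.items()}
    with open(path) as f:
        while True:
            record = [f.readline() for _ in range(4)]  # FASTQ = 4 lines/read
            if not record[0]:
                break
            for bc, out in outs.items():
                if record[1].startswith(bc):
                    record[1] = record[1][len(bc):]    # trim sequence
                    record[3] = record[3][len(bc):]    # trim quality
                    out.writelines(record)
                    break
    for bc, out in outs.items():
        out.close()
        digest = hashlib.md5(open(BARCODES[bc], "rb").read()).hexdigest()
        print(BARCODES[bc], digest)

split_fastq("run1.fastq")
```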
Andiamo, a Graphical User Interface for Ohio University's Hauser-Feshbach Implementation
NASA Astrophysics Data System (ADS)
Brooks, Matthew
2017-09-01
First and foremost, I am not a physicist. I am an undergraduate computer science major/Japanese minor at Ohio University. However, I am working for Zach Meisel in Ohio University's physics department. This is the first software development project I've ever done. My charge is/was to create a graphical program that can be used to more easily set up Hauser-Feshbach equation input files. The input files are of the format expected by the Hauser-Feshbach 2002 code developed by a handful of people at the university. I regularly attend group meetings with Zach and his other subordinates, but these are mostly used as a way for us to discuss our progress and any troubles or roadblocks we may have encountered. I was encouraged to try to come with his group to this event because it could help expose me to the scientific culture of astrophysics research. While I know very little about particles and epic space events, my poster would be an informative and (hopefully) inspiring one that could help get other undergraduates interested in doing object-oriented programming. This could be more exposure for them, as I believe a lot of physics majors only learn scripting languages.
AgMIP Training in Multiple Crop Models and Tools
NASA Technical Reports Server (NTRS)
Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn
2015-01-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. There are several major limitations that must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used for the various models. Two activities were followed to address these shortcomings among AgMIP RRTs to enable them to use multiple models to evaluate climate impacts on crop production and food security. We designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the model of least experience. In a second activity, the AgMIP IT group created templates for inputting data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and developing entry and translation tools are reviewed in this chapter.
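The harmonize-then-translate idea can be illustrated with a toy example: one harmonized weather table re-emitted in two different model-specific layouts. The column names and output formats below are assumptions for illustration, not the actual AgMIP templates:

```python
# Toy sketch: write a harmonized weather CSV, then "translate" it into two
# invented model-specific fixed layouts standing in for different crop models.
import csv

rows = [
    {"date": "2015-01-01", "tmax_c": 31.2, "tmin_c": 22.4, "rain_mm": 0.0},
    {"date": "2015-01-02", "tmax_c": 30.8, "tmin_c": 21.9, "rain_mm": 5.1},
]

with open("weather_harmonized.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=rows[0].keys())
    w.writeheader()
    w.writerows(rows)

def translate(csv_path, out_path, line_format):
    """Re-emit harmonized rows in a model-specific layout."""
    with open(csv_path) as f, open(out_path, "w") as out:
        for row in csv.DictReader(f):
            out.write(line_format.format(**row) + "\n")

translate("weather_harmonized.csv", "model_a.wth",
          "{date} {tmax_c:>6} {tmin_c:>6} {rain_mm:>6}")
translate("weather_harmonized.csv", "model_b.met",
          "{date},{rain_mm},{tmax_c},{tmin_c}")
```

Writing one translator per target model, all reading the same harmonized table, is exactly what lets a single regional dataset feed multiple crop models.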
An expert system shell for inferring vegetation characteristics: The learning system (tasks C and D)
NASA Technical Reports Server (NTRS)
Harrison, P. Ann; Harrison, Patrick R.
1992-01-01
This report describes the implementation of a learning system that uses a data base of historical cover type reflectance data taken at different solar zenith angles and wavelengths to learn class descriptions of classes of cover types. It has been integrated with the VEG system and requires that the VEG system be loaded to operate. VEG is the NASA VEGetation workbench - an expert system for inferring vegetation characteristics from reflectance data. The learning system provides three basic options. Using option one, the system learns class descriptions of one or more classes. Using option two, the system learns class descriptions of one or more classes and then uses the learned classes to classify an unknown sample. Using option three, the user can test the system's classification performance. The learning system can also be run in an automatic mode. In this mode, options two and three are executed on each sample from an input file. The system was developed using KEE. It is menu driven and contains a sophisticated window and mouse driven interface which guides the user through various computations. Input and output file management and data formatting facilities are also provided.
NASA Astrophysics Data System (ADS)
Al-Mishwat, Ali T.
2016-05-01
PHASS99 is a FORTRAN program designed to retrieve and decode radiometric and other physical age information of igneous rocks contained in the international database IGBADAT (Igneous Base Data File). In the database, ages are stored in a proprietary format using mnemonic representations. The program can handle up to 99 ages in an igneous rock specimen and caters to forty radiometric age systems. The radiometric age alphanumeric strings assigned to each specimen description in the database consist of four components: the numeric age and its exponential modifier, a four-character mnemonic method identification, a two-character mnemonic name of the analysed material, and the reference number in the rock group bibliography vector. For each specimen, the program searches for radiometric age strings, extracts them, parses them, decodes the different age components, and converts them to high-level English equivalents. IGBADAT and similarly-structured files are used for input. The output includes three files: a flat raw ASCII text file containing the retrieved radiometric age information, a generic spreadsheet-compatible file for data import to spreadsheets, and an error file. PHASS99 builds on the old TSTPHA (Test Physical Age) decoder program and greatly expands its capabilities. PHASS99 is simple, user friendly, fast, and efficient, and does not require users to have knowledge of programming.
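Since the IGBADAT string layout is proprietary, the following Python sketch only illustrates the kind of four-component parsing the abstract describes; the example string, mnemonic tables, and field widths are all invented:

```python
# Hypothetical decoder for a four-part age string: numeric age with an
# exponential modifier, a 4-char method mnemonic, a 2-char material
# mnemonic, and a bibliography reference number.
METHODS = {"KART": "K-Ar, total rock"}     # invented mnemonic table
MATERIALS = {"WR": "whole rock"}

def parse_age_string(s):
    """Parse an invented layout like '2.75E0KARTWR03'."""
    num, rest = s[:5], s[5:]               # '2.75E' + '0KARTWR03'
    mantissa = float(num[:4])
    exponent = int(rest[0])
    method = rest[1:5]
    material = rest[5:7]
    ref = int(rest[7:])
    return {
        "age_ma": mantissa * 10 ** exponent,
        "method": METHODS.get(method, method),
        "material": MATERIALS.get(material, material),
        "reference": ref,
    }

print(parse_age_string("2.75E0KARTWR03"))
# {'age_ma': 2.75, 'method': 'K-Ar, total rock',
#  'material': 'whole rock', 'reference': 3}
```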
LANDSAT: Non-US standard catalog no. N-36. [LANDSAT imagery for August, 1975
NASA Technical Reports Server (NTRS)
1975-01-01
Information regarding the availability of LANDSAT imagery processed and input to the data files by the NASA Data Processing Facility is published on a monthly basis. The U.S. Standard Catalog includes imagery covering the continental United States, Alaska, and Hawaii. The Non-U.S. Standard Catalog identifies all the remaining coverage. Sections 1 and 2 describe the contents and format for the catalogs and the associated microfilm. Section 3 provides a cross reference defining the beginning and ending dates for LANDSAT cycles.
LANDSAT 2 world standard catalog, 1 May - 31 July 1978. [LANDSAT imagery for May through July 1978
NASA Technical Reports Server (NTRS)
1978-01-01
Information regarding the availability of LANDSAT imagery processed and input to the data files by the NASA Data Processing Facility is published on a monthly basis. The U.S. Standard Catalog includes imagery covering the continental United States, Alaska and Hawaii. The Non-U.S. Standard Catalog identifies all the remaining coverage. Sections 1 and 2 describe the contents and format for the catalogs and the associated microfilm. Section 3 provides a cross-reference defining the beginning and ending dates for LANDSAT cycles.
LANDSAT: Non-US standard catalog no. N-30. [LANDSAT imagery for February, 1975
NASA Technical Reports Server (NTRS)
1975-01-01
Information regarding the availability of LANDSAT imagery processed and input to the data files by the NASA Data Processing Facility is published on a monthly basis. The U.S. Standard Catalog includes imagery covering the continental United States, Alaska, and Hawaii. The Non-U.S. Standard Catalog identifies all the remaining coverage. Sections 1 and 2 describe the contents and format for the catalogs and the associated microfilm. Section 3 provides a cross-reference defining the beginning and ending dates for LANDSAT cycles.
NASA Technical Reports Server (NTRS)
Smith, Peter M.; Kempler, Steven; Leptoukh, Gregory; Savtchenko, Andrey; Kummerer, Robert; Gopolan, Arun
2008-01-01
ATDD is a web-based tool which provides collocated data and display products for a number of A-Train instruments (CloudSat, CALIPSO, OMI, AIRS, MODIS, MLS, and POLDER-3) plus ECMWF model data. Products provided include clouds, aerosols, water vapor, temperatures, and trace gases. All input data is online and in HDF4 or HDF5 format. Display products include curtain images, horizontal strips, line plot overlays, and GE kmz files. Sample products are shown for two types of events: a hurricane event (Norbert, Oct 8, 2008) and a dust storm event over the Arabian Sea (Nov 13-14, 2008).
Interdisciplinary Research Scenario Testing of EOSDIS
NASA Technical Reports Server (NTRS)
Emmitt, G. D.
1999-01-01
During the reporting period, the Principal Investigator (PI) has continued to serve on numerous review panels, task forces and committees with the goal of providing input and guidance for the Earth Observing System Data and Information System (EOSDIS) program at NASA Headquarters and NASA GSFC. In addition, the PI has worked together with personnel at the University of Virginia and the subcontractor (Simpson Weather Associates (SWA)) to continue to evaluate the latest releases of various versions of the user interfaces to the EOSDIS. Finally, as part of the subcontract, SWA has created an on-line Hierarchical Data Format (HDF) tutorial for non-HDF experts, particularly those that will be using EOSDIS and future EOS data products. A summary of these three activities is provided. The topics include: 1) Participation on EOSDIS Panels and Committees; 2) Evaluation and Tire Kicking of EOSDIS User Interfaces; and 3) An On-line HDF Tutorial. The report also includes attachments A, B, and C. Attachment A: Report From the May 1999 Science Data Panel. The topics include: 1) Summary of Data Panel Meeting; and 2) Panel's Comments/Recommendations. Attachment B: Survey Requesting Integrated Design Systems (IDS) Teams Input on the Descoping and Rescoping of the EOSDIS; and Attachment C: An HDF Tutorial for Beginners: EOSDIS Users and Small Data Providers (HTML Version). The topics include: 1) Tutorial Overview; 2) An Introduction to HDF; 3) The HDF Library: Software and Hardware; 4) Methods of Working with HDF Files; 5) Scientific Data API; 6) Attributes and Metadata; 7) Writing a SDS to an HDF File; 8) Obtaining Information on Existing HDF Files; 9) Reading a Scientific Data Set from an HDF File; 10) Example Programs; 11) Browsing and Visualizing HDF Data; and 12) Laboratory (Question and Answer).
Software for Preprocessing Data From Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Cheng, Chiu-Fu
2002-01-01
Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC "E" test-stand complex and utilize the SSC file format. The programs are the following: 1) Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel; 2) QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris); and 3) EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PVWAVE based plotting software.
Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
2012-01-05
Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table for either later submission or for sending via email to a collaborator who has access to the desired database.
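The automatic-capture half of this design is easy to picture with the Python standard library alone. In the sketch below, sqlite3 stands in for the local or remote database server, and the table layout is assumed from the fields the abstract lists:

```python
# Sketch: capture user, working directory, hostname, and timestamp, then
# insert them (plus a comment) into a table. sqlite3 is a stand-in for the
# real database server; the schema is assumed for illustration.
import getpass, os, socket, sqlite3
from datetime import datetime

conn = sqlite3.connect("runs.db")
conn.execute("""CREATE TABLE IF NOT EXISTS runs
                (user TEXT, cwd TEXT, host TEXT, stamp TEXT, comment TEXT)""")
record = (getpass.getuser(),           # user
          os.getcwd(),                 # current directory
          socket.gethostname(),        # local hostname
          datetime.now().isoformat(),  # date and time of send
          "baseline build, -O2")       # user-supplied comment
conn.execute("INSERT INTO runs VALUES (?, ?, ?, ?, ?)", record)
conn.commit()
conn.close()
```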
The dairy_wa.zip file is a zip file containing an Arc/Info export file and a text document. Note the DISCLAIM.TXT file as these data are not verified. Map extent: statewide. Input Source: Address database obtained from Wa Dept of Agriculture. Data was originally developed und...
User input verification and test driven development in the NJOY21 nuclear data processing code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trainer, Amelia Jo; Conlin, Jeremy Lloyd; McCartney, Austin Paul
Before physically-meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely-trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides input validation to check user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove to be more intuitive and easy to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.
Samadian, Soroush; Bruce, Jeff P; Pugh, Trevor J
2018-03-01
Somatic copy number variations (CNVs) play a crucial role in the development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data, currently the most prevalent types of cancer genomics data. However, systematic evaluation and comparison of these tools remains challenging due to a lack of ground-truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (BED file), and an optional file defining known haplotypes (VCF format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof of principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. We expect Bamgineer to be of use for the development and systematic benchmarking of CNV calling algorithms by users with locally generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer.
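As a sketch of the kind of input handling described (not Bamgineer's actual code; the file names are placeholders and pysam is assumed as the reader), the BAM and BED inputs can be walked as follows:

    # Sketch: iterate reads overlapping user-specified CNV regions.
    # Not Bamgineer source; "sample.bam" and "gains.bed" are placeholders.
    # Requires pysam and a BAM index (.bai) for fetch().
    import pysam

    bam = pysam.AlignmentFile("sample.bam", "rb")
    with open("gains.bed") as bed:               # columns: chrom  start  end
        for line in bed:
            chrom, start, end = line.split()[:3]
            for read in bam.fetch(chrom, int(start), int(end)):
                pass  # a CNV simulator would duplicate or thin these reads
    bam.close()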
VizieR Online Data Catalog: Habitable zones around main-sequence stars (Kopparapu+, 2014)
NASA Astrophysics Data System (ADS)
Kopparapu, R. K.; Ramirez, R. M.; Schottelkotte, J.; Kasting, J. F.; Domagal-Goldman, S.; Eymet, V.
2017-08-01
Language: Fortran 90 Code tested under the following compilers/operating systems: ifort/CentOS Linux Description of input data: No input necessary. Description of output data: Output files: HZs.dat, HZ_coefficients.dat System requirements: No major system requirement. Fortran compiler necessary. Calls to external routines: None. Additional comments: None (1 data file).
DOE-2 sample run book: Version 2.1E
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkelmann, F.C.; Birdsall, B.E.; Buhl, W.F.
1993-11-01
The DOE-2 Sample Run Book shows inputs and outputs for a variety of building and system types. The samples start with a simple structure and continue to a high-rise office building, a medical building, three small office buildings, a bar/lounge, a single-family residence, a small office building with daylighting, a single-family residence with an attached sunspace, a "parameterized" building using input macros, and a metric input/output example. All of the samples use Chicago TRY weather. The main purpose of the Sample Run Book is instructional. It shows the relationship of LOADS-SYSTEMS-PLANT-ECONOMICS inputs, displays various input styles, and illustrates many of the basic and advanced features of the program. Many of the sample runs are preceded by a sketch of the building showing its general appearance and the zoning used in the input. In some cases we also show a 3-D rendering of the building as produced by the program DrawBDL. Descriptive material has been added as comments in the input itself. We find that a number of users have loaded these samples onto their editing systems and use them as "templates" for creating new inputs. Another way of using them would be to store various portions as files that can be read into the input using the ##include command, which is part of the Input Macro feature introduced in version DOE-2.1D. Note that the energy rate structures here are the same as in the DOE-2.1D samples, but have been rewritten using the new DOE-2.1E commands and keywords for ECONOMICS. The samples contained in this report are the same as those found on the DOE-2 release files. However, the output numbers that appear here may differ slightly from those obtained from the release files. The output on the release files can be used as a check set to compare results on your computer.
WORM - WINDOWED OBSERVATION OF RELATIVE MOTION
NASA Technical Reports Server (NTRS)
Bauer, F.
1994-01-01
The Windowed Observation of Relative Motion, WORM, program is primarily intended for the generation of simple X-Y plots from data created by other programs. It allows the user to label, zoom, and change the scale of various plots. Three-dimensional contour and line plots are provided, although with more limited capabilities. The input data can be in binary or ASCII format, although all data must be in the same format. A great deal of control over the details of the plot is provided, such as gridding, size of tick marks, colors, log/semilog capability, time tagging, and multiple and phase-plane plots. Many color and monochrome graphics terminals and hard-copy printer/plotters are supported. The WORM executive commands, menu selections, and macro files can be used to develop plots and tabular data, query the WORM Help library, retrieve data from input files, and invoke VAX DCL commands. WORM-generated plots are displayed on local graphics terminals and can be copied using standard hard-copy capabilities. Some of the graphics features of WORM include: zooming and dezooming various portions of the plot; plot documentation, including curve labeling and function listing; multiple curves on the same plot; windowing of multiple plots and insets of the same plot; displaying a specific point on a curve; and spinning the curve left, right, up, and down. WORM is written in PASCAL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.7 with a virtual memory requirement of approximately 392K 8-bit bytes. It uses the QPLOT device-independent graphics library included with WORM. It was developed in 1988.
NASA Technical Reports Server (NTRS)
Muss, J. A.; Nguyen, T. V.; Johnson, C. W.
1991-01-01
Appendices A through K of the user's manual for the rocket combustor interactive design (ROCCID) computer program are presented. These include installation instructions, flow charts, subroutine model documentation, and sample output files. The ROCCID program, written in Fortran 77, provides a standardized methodology using state-of-the-art codes and procedures for the analysis of a liquid rocket engine combustor's steady-state combustion performance and combustion stability. ROCCID is currently capable of analyzing mixed-element injector patterns containing impinging like doublet or unlike triplet, showerhead, shear coaxial, and swirl coaxial elements, as long as only one element type exists in each injector core, baffle, or barrier zone. Real propellant properties of oxygen, hydrogen, methane, propane, and RP-1 are included in ROCCID; the properties of other propellants can be easily added. The analysis models in ROCCID can account for the influences of acoustic cavities, Helmholtz resonators, and radial thrust chamber baffles on combustion stability. ROCCID also contains the logic to interactively create a combustor design which meets input performance and stability goals. A preliminary design results from the application of historical correlations to the input design requirements. The steady-state performance and combustion stability of this design are evaluated using the analysis models, and ROCCID guides the user as to the design changes required to satisfy the user's performance and stability goals, including the design of stability aids. Output from ROCCID includes a formatted input file for the standardized JANNAF engine performance prediction procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temple, Brian Allen; Armstrong, Jerawan Chudoung
This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. At present, PyRAT can read in only uncompressed TIFF files. It is planned to expand the file formats that can be read by PyRAT, making it easier to use in more situations. The file formats added include JPEG, PNG, and formatted ASCII files.
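A minimal sketch of dispatching on file type for these formats, assuming Pillow for the image formats and NumPy for formatted ASCII (the actual PyRAT reader is not shown in this report):

    # Illustrative multi-format reader (not PyRAT source).
    # Assumes Pillow handles TIFF/JPEG/PNG and ASCII files are numeric grids.
    import numpy as np
    from PIL import Image

    def read_radiograph(path):
        if path.lower().endswith((".tif", ".tiff", ".jpg", ".jpeg", ".png")):
            return np.asarray(Image.open(path))   # decoded pixel array
        return np.loadtxt(path)                   # formatted ASCII grid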
FlaME: Flash Molecular Editor - a 2D structure input tool for the web
2011-01-01
Background So far, there have been no Flash-based web tools available for chemical structure input. The authors herein present a feasibility study, aiming at the development of a compact and easy-to-use 2D structure editor, using Adobe's Flash technology and its programming language, ActionScript. As a reference model application from the Java world, we selected the Java Molecular Editor (JME). In this feasibility study, we made an attempt to realize a subset of JME's functionality in the Flash Molecular Editor (FlaME) utility. These basic capabilities are: structure input, editing and depiction of single molecules, data import and export in molfile format. Implementation The result of molecular diagram sketching in FlaME is accessible in V2000 molfile format. By integrating the molecular editor into a web page, its communication with the HTML elements on this page is established using the two JavaScript functions, getMol() and setMol(). In addition, structures can be copied to the system clipboard. Conclusion A first attempt was made to create a compact single-file application for 2D molecular structure input/editing on the web, based on Flash technology. With the application examples presented in this article, it could be demonstrated that the Flash methods are principally well-suited to provide the requisite communication between the Flash object (application) and the HTML elements on a web page, using JavaScript functions. PMID:21284863
A two-dimensional graphing program for the Tektronix 4050-series graphics computers
Kipp, K.L.
1983-01-01
A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn symbol-point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer easily can be transferred to data-cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)
GWM-VI: groundwater management with parallel processing for multiple MODFLOW versions
Banta, Edward R.; Ahlfeld, David P.
2013-01-01
Groundwater Management–Version Independent (GWM–VI) is a new version of the Groundwater Management Process of MODFLOW. The Groundwater Management Process couples groundwater-flow simulation with a capability to optimize stresses on the simulated aquifer based on an objective function and constraints imposed on stresses and aquifer state. GWM–VI extends prior versions of Groundwater Management in two significant ways—(1) it can be used with any version of MODFLOW that meets certain requirements on input and output, and (2) it is structured to allow parallel processing of the repeated runs of the MODFLOW model that are required to solve the optimization problem. GWM–VI uses the same input structure for files that describe the management problem as that used by prior versions of Groundwater Management. GWM–VI requires only minor changes to the input files used by the MODFLOW model. GWM–VI uses the Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER-API) to implement both version independence and parallel processing. GWM–VI communicates with the MODFLOW model by manipulating certain input files and interpreting results from the MODFLOW listing file and binary output files. Nearly all capabilities of prior versions of Groundwater Management are available in GWM–VI. GWM–VI has been tested with MODFLOW-2005, MODFLOW-NWT (a Newton formulation for MODFLOW-2005), MF2005-FMP2 (the Farm Process for MODFLOW-2005), SEAWAT, and CFP (Conduit Flow Process for MODFLOW-2005). This report provides sample problems that demonstrate a range of applications of GWM–VI and the directory structure and input information required to use the parallel-processing capability.
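The parallel-processing idea, many independent MODFLOW forward runs evaluated concurrently while solving the optimization problem, can be sketched with Python's multiprocessing module. This is a conceptual illustration only: GWM-VI itself manages runs through the JUPITER-API, and the executable and name-file names below are placeholders.

    # Conceptual sketch of concurrent MODFLOW forward runs (placeholders;
    # GWM-VI actually manages runs through the JUPITER-API).
    import subprocess
    from multiprocessing import Pool

    def run_modflow(name_file):
        subprocess.run(["mf2005", name_file], check=True)  # placeholder exe
        return name_file

    if __name__ == "__main__":
        trials = ["run01.nam", "run02.nam", "run03.nam"]  # perturbed stresses
        with Pool(processes=3) as pool:
            finished = pool.map(run_modflow, trials)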
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thoreson, Gregory G
PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. PCF is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. A file can contain multiple spectra and information about each spectrum, such as its energy calibration. This document outlines the file format in enough detail to allow one to write a computer program that parses and writes such files.
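Parsing a binary container of this kind generally reduces to reading fixed-size records with Python's struct module. The field layout below is purely hypothetical and is not the real PCF layout, which is defined in the document itself:

    # Hypothetical binary-record read (NOT the actual PCF layout; the
    # real field definitions are given in the format document).
    import struct

    with open("spectra.pcf", "rb") as f:
        n_channels, live_time = struct.unpack("<if", f.read(8))  # assumed header
        counts = struct.unpack(f"<{n_channels}i", f.read(4 * n_channels))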
CBrowse: a SAM/BAM-based contig browser for transcriptome assembly visualization and analysis.
Li, Pei; Ji, Guoli; Dong, Min; Schmidt, Emily; Lenox, Douglas; Chen, Liangliang; Liu, Qi; Liu, Lin; Zhang, Jie; Liang, Chun
2012-09-15
To address the impending need for exploring rapidly increasing transcriptomics data generated for non-model organisms, we developed CBrowse, an AJAX-based web browser for visualizing and analyzing transcriptome assemblies and contigs. Designed in a standard three-tier architecture with a data pre-processing pipeline, CBrowse is essentially a Rich Internet Application that offers many seamlessly integrated web interfaces and allows users to navigate, sort, filter, search and visualize data smoothly. The pre-processing pipeline takes the contig sequence file in FASTA format and its relevant SAM/BAM file as the input; detects putative polymorphisms, simple sequence repeats and sequencing errors in contigs; and generates image, JSON and database-compatible CSV text files that are directly utilized by different web interfaces. CBrowse is a generic visualization and analysis tool that facilitates close examination of assembly quality, genetic polymorphisms, sequence repeats and/or sequencing errors in transcriptome sequencing projects. CBrowse is distributed under the GNU General Public License and available at http://bioinfolab.muohio.edu/CBrowse/. Contact: liangc@muohio.edu or liangc.mu@gmail.com; glji@xmu.edu.cn. Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plimpton, Steve; Jones, Matt; Crozier, Paul
2006-01-01
Pizza.py is a loosely integrated collection of tools, many of which provide support for the LAMMPS molecular dynamics and ChemCell cell modeling packages. There are tools to create input files, convert between file formats, process log and dump files, create plots, and visualize and animate simulation snapshots. Software packages that are wrapped by Pizza.py, so they can be invoked from within Python, include GnuPlot, MatLab, Raster3d, and RasMol. Pizza.py is written in Python and runs on any platform that supports Python. Pizza.py enhances the standard Python interpreter in a few simple ways. Its tools are Python modules which can be invoked interactively, from scripts, or from GUIs when appropriate. Some of the tools require additional Python packages to be installed as part of the user's Python. Others are wrappers on software packages (as listed above) which must be available on the user's system. It is easy to modify or extend Pizza.py with new functionality or new tools, which need not have anything to do with LAMMPS or ChemCell.
2010-08-07
[Report fragment: table-of-contents and text excerpts referencing an Abaqus VDLOAD user subroutine (Section 5.3.2), a Python script to convert an Abaqus input file to an LS-DYNA input file (Appendix C), and pressures applied to the model by the Abaqus/Explicit VDLOAD subroutine.]
BOREAS HYD-8 DEM Data Over the NSA-MSA and SSA-MSA in the UTM Projection
NASA Technical Reports Server (NTRS)
Wang, Xue-Wen; Hall, Forrest G. (Editor); Knapp, David E. (Editor); Band, L. E.; Smith, David E. (Technical Monitor)
2000-01-01
The BOREAS HYD-8 team focused on describing the scaling behavior of water and carbon flux processes at local and regional scales. These DEMs were produced from digitized contours at a cell resolution of 100 meters. Vector contours of the area were used as input to a software package that interpolates between contours to create a DEM representing the terrain surface. The vector contours had a contour interval of 25 feet. The data cover the BOREAS MSAs of the SSA and NSA and are given in a UTM map projection. Most of the elevation data from which the DEM was produced were collected in the 1970s or 1980s. The data are stored in binary, image format files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
NASA Technical Reports Server (NTRS)
Hinton, David A.
2001-01-01
A ground-based system has been developed to demonstrate the feasibility of automating the process of collecting relevant weather data, predicting wake vortex behavior from a data base of aircraft, prescribing safe wake vortex spacing criteria, estimating system benefit, and comparing predicted and observed wake vortex behavior. This report describes many of the system algorithms, features, limitations, and lessons learned, as well as suggested system improvements. The system has demonstrated concept feasibility and the potential for airport benefit. Significant opportunities exist however for improved system robustness and optimization. A condensed version of the development lab book is provided along with samples of key input and output file types. This report is intended to document the technical development process and system architecture, and to augment archived internal documents that provide detailed descriptions of software and file formats.
AVE-SESAME program for the REEDA System
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1981-01-01
The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers, and was interfaced with the existing graphics hardware/software available on the REEDA System. Software documentation was provided for the existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets were processed into random access format to allow the developed software to access the entire AVE/SESAME data base. The existing software was modified to allow for processing of different AVE/SESAME data set types, including satellite, surface, and radar data.
XenoSite server: a web-available site of metabolism prediction tool.
Matlock, Matthew K; Hughes, Tyler B; Swamidass, S Joshua
2015-04-01
Cytochrome P450 enzymes (P450s) are metabolic enzymes that process the majority of FDA-approved, small-molecule drugs. Understanding how these enzymes modify molecular structure is key to the development of safe, effective drugs. XenoSite server is an online implementation of XenoSite, a recently published computational model for P450 metabolism. XenoSite predicts which atomic sites of a molecule, its sites of metabolism (SOMs), are modified by P450s. XenoSite server accepts input in common chemical file formats, including SDF and SMILES, and provides tools for visualizing the likelihood that each atomic site is a site of metabolism for a variety of important P450s, as well as a flat-file download of SOM predictions. XenoSite server is available at http://swami.wustl.edu/xenosite.
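Both input formats are easy to produce and check with a cheminformatics toolkit; for instance, with RDKit (used here purely for illustration; it is not part of XenoSite, and the SDF file name is a placeholder):

    # Parsing the two input formats XenoSite accepts (RDKit shown for
    # illustration; XenoSite performs its own server-side parsing).
    from rdkit import Chem

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # SMILES (aspirin)
    for m in Chem.SDMolSupplier("ligands.sdf"):        # SDF records
        if m is not None:
            print(m.GetNumAtoms())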
XIMPOL: a new x-ray polarimetry observation-simulation and analysis framework
NASA Astrophysics Data System (ADS)
Omodei, Nicola; Baldini, Luca; Pesce-Rollins, Melissa; di Lalla, Niccolò
2017-08-01
We present a new simulation framework, XIMPOL, based on the Python programming language and the SciPy stack, specifically developed for X-ray polarimetric applications. XIMPOL is not tied to any specific mission or instrument design and is meant to produce fast yet realistic observation-simulations, given as basic inputs: (i) an arbitrary source model including morphological, temporal, spectral and polarimetric information, and (ii) the response functions of the detector under study, i.e., the effective area, the energy dispersion, the point-spread function and the modulation factor. The format of the response files is OGIP compliant, and the framework has the capability of producing output files that can be directly fed into the standard visualization and analysis tools used by the X-ray community, including XSPEC, which makes it a useful tool not only for simulating physical systems, but also for developing and testing end-to-end analysis chains.
Data files from the Grays Harbor Sediment Transport Experiment Spring 2001
Landerman, Laura A.; Sherwood, Christopher R.; Gelfenbaum, Guy; Lacy, Jessica; Ruggiero, Peter; Wilson, Douglas; Chisholm, Tom; Kurrus, Keith
2005-01-01
This publication consists of two DVD-ROMs, both of which are presented here. This report describes data collected during the Spring 2001 Grays Harbor Sediment Transport Experiment and provides additional information needed to interpret the data. Two DVDs accompany this report; both contain documentation in HTML format that assists the user in navigating through the data. DVD-ROM-1 contains a digital version of this report in .pdf format, raw Aquatec acoustic backscatter (ABS) data in .zip format, sonar data files in .avi format, and coastal processes and morphology data in ASCII format. ASCII data files are provided in .zip format; bundled coastal processes ASCII files are separated by deployment and instrument; bundled morphology ASCII files are separated into monthly data collection efforts containing the beach profiles collected (or extracted from the surface map) at that time; weekly surface maps are also bundled together. DVD-ROM-2 contains a digital version of this report in .pdf format, the binary data files collected by the SonTek instrumentation, calibration files for the pressure sensors, and Matlab m-files for loading the ABS data into Matlab and cleaning up the optical backscatter (OBS) burst time-series data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, Terrence D.
2017-04-01
This report specifies the electronic file format that was agreed upon for the normalized radiological data produced by the software tool developed under this TI project. The NA-84 Technology Integration (TI) Program project (SNL17-CM-635, Normalizing Radiological Data for Analysis and Integration into Models) investigators held a teleconference on December 7, 2017 to discuss the tasks to be completed under the TI program project. During this teleconference, the TI project investigators determined that the comma-separated values (CSV) file format is the most suitable file format for the normalized radiological data that will be output from the normalizing tool developed under this TI project. The CSV file format was selected because it provides the requisite flexibility to manage different types of radiological data (i.e., activity concentration, exposure rate, dose rate) from various sources [e.g., Radiological Assessment and Monitoring System (RAMS), Aerial Measuring System (AMS), Monitoring and Sampling]. The CSV file format also is suitable because the normalized radiological data can then be ingested by other software [e.g., RAMS, Visual Sampling Plan (VSP)] used by NA-84's Consequence Management Program.
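Python's standard csv module is enough to emit such a file; the column names and values below are illustrative assumptions, not the schema agreed under the project:

    # Illustrative writer for normalized radiological data in CSV.
    # Column names and values are assumptions, not the project schema.
    import csv

    rows = [("2017-04-01T12:00:00Z", 35.05, -106.62, "dose_rate", 0.12, "mSv/h")]
    with open("normalized.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "lat", "lon", "quantity", "value", "units"])
        writer.writerows(rows)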
77 FR 59692 - 2014 Diversity Immigrant Visa Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...
Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2
NASA Technical Reports Server (NTRS)
Daley, P. L.; Owens, S. F.
1988-01-01
Copies of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers are given in the appendices; the listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program, respectively.
1989-12-04
[Report fragment: text describing a heme group, derived from a porphyrin ring system with iron at its center, embedded in the cytochrome-c protein structure, followed by BASIC code fragments that prompt for input files of amplitude/frequency data and compute point envelopes.]
User's guide for a large signal computer model of the helical traveling wave tube
NASA Technical Reports Server (NTRS)
Palmer, Raymond W.
1992-01-01
The use of a successful large-signal, two-dimensional (axisymmetric), deformable-disk computer model of the helical traveling wave tube amplifier is described; this is an extensively revised and operationally simplified version. We also discuss program input and output and the auxiliary files necessary for operation. Included is a sample problem with its input data and output results. Interested parties may now obtain from the author the FORTRAN source code, auxiliary files, and sample input data on a standard floppy diskette, the contents of which are described herein.
NASA Astrophysics Data System (ADS)
Laune, Jordan; Tzeferacos, Petros; Feister, Scott; Fatenejad, Milad; Yurchak, Roman; Flocke, Norbert; Weide, Klaus; Lamb, Donald
2017-10-01
Thermodynamic and opacity properties of materials are necessary to accurately simulate laser-driven laboratory experiments. Such data are compiled in tabular format, since the thermodynamic range that needs to be covered cannot be described with one single theoretical model. Moreover, tabulated data can be made available prior to runtime, reducing both compute cost and code complexity. This approach is employed by the FLASH code. Equation of state (EoS) and opacity data come in various formats, matrix layouts, and file structures. We discuss recent developments in opacplot2, an open-source Python module that manipulates tabulated EoS and opacity data. We present software that builds upon opacplot2 and enables easy-to-use conversion of different table formats into the IONMIX format, the native tabular input used by FLASH. Our work enables FLASH users to take advantage of a wider range of accurate EoS and opacity tables in simulating HEDP experiments at the National Laser User Facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Ford, W.E. III; Petrie, L.M.
AMPX-77 is a modular system of computer programs that pertain to nuclear analyses, with a primary emphasis on tasks associated with the production and use of multigroup cross sections. All basic cross-section data are to be input in the formats used by the Evaluated Nuclear Data Files (ENDF/B), and output can be obtained in a variety of formats, including its own internal and very general formats, along with a variety of other useful formats used by major transport, diffusion theory, and Monte Carlo codes. Processing is provided for both neutron and gamma-ray data. The present release contains codes all written in the FORTRAN-77 dialect of FORTRAN and will process ENDF/B-V and earlier evaluations, though major modules are being upgraded in order to process ENDF/B-VI and will be released when a complete collection of usable routines is available.
MARVIN: a medical research application framework based on open source software.
Rudolph, Tobias; Puls, Marc; Anderegg, Christoph; Ebert, Lars; Broehan, Martina; Rudin, Adrian; Kowal, Jens
2008-08-01
This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.
NASA Astrophysics Data System (ADS)
Bird, Adam; Murphy, Christophe; Dobson, Geoff
2017-09-01
RANKERN 16 is the latest version of the point-kernel gamma radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS Software Service. RANKERN is well established in the UK shielding community for radiation shielding and dosimetry assessments. Many important developments have been made available to users in this latest release of RANKERN. The existing general 3D geometry capability has been extended to include import of CAD files in the IGES format providing efficient full CAD modelling capability without geometric approximation. Import of tetrahedral mesh and polygon surface formats has also been provided. An efficient voxel geometry type has been added suitable for representing CT data. There have been numerous input syntax enhancements and an extended actinide gamma source library. This paper describes some of the new features and compares the performance of the new geometry capabilities.
LANDSAT non-US standard catalog, 1 May 1977 - 31 May 1977
NASA Technical Reports Server (NTRS)
1977-01-01
Information regarding the availability of LANDSAT imagery processed and input to the data files by the NASA Data Processing Facility is published on a monthly basis. The U.S. Standard Catalog includes imagery covering the continental United States, Alaska, and Hawaii; the Non-U.S. Standard Catalog identifies all the remaining coverage. Sections 1 and 2 describe the contents and format of the catalogs and associated microfilm. Section 3 provides a cross-reference defining the beginning and ending dates for LANDSAT cycles. Sections 4 and 5 cover LANDSAT-1 and LANDSAT-2 coverage, respectively.
NASA AVOSS Fast-Time Models for Aircraft Wake Prediction: User's Guide (APA3.8 and TDP2.1)
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew J.; Limon Duparcmeur, Fanny M.
2016-01-01
NASA's current distribution of fast-time wake vortex decay and transport models includes APA (Version 3.8) and TDP (Version 2.1). This User's Guide provides detailed information on the model inputs, file formats, and model outputs. A brief description of the Memphis 1995, Dallas/Fort Worth 1997, and the Denver 2003 wake vortex datasets is given along with the evaluation of models. A detailed bibliography is provided which includes publications on model development, wake field experiment descriptions, and applications of the fast-time wake vortex models.
A framework for visualization of battlefield network behavior
NASA Astrophysics Data System (ADS)
Perzov, Yury; Yurcik, William
2006-05-01
An extensible network simulation application was developed to study wireless battlefield communications. The application monitors node mobility and depicts broadcast and unicast traffic as expanding rings and directed links. The network simulation was specially designed to support fault injection to show the impact of air strikes on disabling nodes. The application takes standard ns-2 trace files as input and provides performance data output in different graphical forms (histograms and x/y plots). Network visualization via animation of simulation output can be saved in AVI format, which may serve as a basis for a real-time battlefield awareness system.
Implementing an Automated Antenna Measurement System
NASA Technical Reports Server (NTRS)
Valerio, Matthew D.; Romanofsky, Robert R.; VanKeuls, Fred W.
2003-01-01
We developed an automated measurement system using a PC running a LabView application, a Velmex BiSlide X-Y positioner, and an HP8510C network analyzer. The system provides high positioning accuracy and requires no user supervision. After the user inputs the necessary parameters into the LabView application, LabView controls the motor positioning and performs the data acquisition. Current parameters and measured data are shown on the PC display in two 3-D graphs and updated after every data point is collected. The final output is a formatted data file for later processing.
Simultaneous real-time data collection methods
NASA Technical Reports Server (NTRS)
Klincsek, Thomas
1992-01-01
This paper describes the development of electronic test equipment which executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time database are described. Pass/fail procedures and data replay are discussed. The OS/2 operating system and Presentation Manager user interface were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files which may be used as input for other PC programs.
DefEX: Hands-On Cyber Defense Exercise for Undergraduate Students
2011-07-01
Injection, and 4) File Upload. Next, the students patched the associated flawed Perl and PHP Hypertext Preprocessor (PHP) code. Finally, students... underlying script. The Zora XSS vulnerability existed in a PHP file that echoed unfiltered user input back to the screen. To eliminate the... vulnerability, students filtered the input using the PHP htmlentities function and retested the code. The htmlentities function translates certain ambiguous
ISPE: A knowledge-based system for fluidization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine whether all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant performing a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information), and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
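The three-step loop described above (prepare input, run, analyze and revise) is the core of what such an assistant automates. A minimal, self-contained sketch with stub functions standing in for the ASPEN interface (all names and values here are invented for illustration):

    # Conceptual sketch of the simulate-analyze-revise loop IPSE automates
    # (stub functions stand in for ASPEN input prep, execution, and analysis).
    def prepare_input(x):      return {"reflux_ratio": x}
    def run_simulation(inp):   return 0.80 + 0.05 * inp["reflux_ratio"]  # fake purity
    def goal_met(purity):      return purity >= 0.95

    x = 1.0
    while not goal_met(run_simulation(prepare_input(x))):
        x += 0.5               # revise the input data file and repeat
    print("converged at reflux ratio", x)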
xiSPEC: web-based visualization, analysis and sharing of proteomics data.
Kolbowski, Lars; Combe, Colin; Rappsilber, Juri
2018-05-08
We present xiSPEC, a standard-compliant, next-generation web-based spectrum viewer for visualizing, analyzing and sharing mass spectrometry data. Peptide-spectrum matches from standard proteomics and cross-linking experiments are supported. xiSPEC is to date the only browser-based tool supporting the standardized file formats mzML and mzIdentML defined by the Proteomics Standards Initiative. Users can either upload data directly or select files from the PRIDE data repository as input. xiSPEC allows users to save and share their datasets, publicly or password-protected, to provide access to collaborators or to readers and reviewers of manuscripts. The identification table features advanced interaction controls, and spectra are presented in three interconnected views: (i) annotated mass spectrum, (ii) peptide sequence fragmentation key and (iii) quality-control error plots of matched fragments. Highlighting or selecting data points in any view is reflected in all other views. Views are interactive scalable vector graphic elements, which can be exported, e.g. for use in publications. xiSPEC allows for re-annotation of spectra for easy hypothesis testing by modifying input data. xiSPEC is freely accessible at http://spectrumviewer.org and the source code is openly available at https://github.com/Rappsilber-Laboratory/xiSPEC.
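The same standardized files can also be read offline; a minimal sketch using the pyteomics library (an assumption for illustration; xiSPEC does its own parsing in the browser, and the file name is a placeholder):

    # Offline read of an mzML file (pyteomics used for illustration only).
    from pyteomics import mzml

    with mzml.read("experiment.mzML") as spectra:
        for spectrum in spectra:
            mz = spectrum["m/z array"]
            intensity = spectrum["intensity array"]
            break  # inspect only the first spectrum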
OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis
NASA Astrophysics Data System (ADS)
Grohmann, C. H.; Campanha, G. A.
2010-12-01
Free and open source software (FOSS) is increasingly seen as a synonym of innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems.

Several software packages for structural geology analysis are available, either under commercial licenses or as free downloads from the Internet. Some provide basic stereographic-projection tools such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation, etc., while others perform more specific tasks, such as paleostress or geotechnical/rock-stability analysis. This variety also means a wide range of data formats for input, Graphical User Interface (GUI) designs, and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based Mac OS, there are no native packages for *nix (UNIX, Linux, BSD, etc.) operating systems (OS), forcing users to run these programs under emulators or virtual machines.

Those limitations led us to develop OpenStereo, an open source, cross-platform software package for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the NumPy module, and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic export to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of each dataset (poles, great circles, etc.). The GUI shows the opened files in a tree structure, similar to the "layers" of many illustration programs, where the vertical order of the files in the tree reflects the drawing order of the selected elements.

At this stage, the software plots poles to planes, lineations, great circles, density contours and rose diagrams. A set of statistics is calculated for each file, and its eigenvalues and eigenvectors are used to suggest whether the data are clustered about a mean value or distributed along a girdle. Modified Flinn and triangular plots and histograms are also available. The next steps of development will focus on tools such as merging and rotation of datasets, the ability to save 'projects', and paleostress analysis.

In its current state, OpenStereo requires Python, wxPython, NumPy and Matplotlib installed on the system. We recommend installing PythonXY or the Enthought Python Distribution on MS-Windows and Mac OS machines, since all dependencies are provided. Most Linux distributions provide an easy way to install all dependencies through software repositories. OpenStereo is released under the GNU General Public License. Programmers willing to contribute are encouraged to contact the authors directly. FAPESP Grant #09/17675-5
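The input convention is simple enough to parse in a couple of lines; a sketch of loading such a file with NumPy (illustrative only, not OpenStereo source; the file name is a placeholder):

    # Load dip-direction/dip pairs from an ASCII file (not OpenStereo source).
    # Whitespace- or tab-separated by default; pass delimiter="," for CSV.
    import numpy as np

    def load_planes(path, delimiter=None):
        """Return an (n, 2) array with columns: dip direction, dip (degrees)."""
        return np.loadtxt(path, delimiter=delimiter)

    planes = load_planes("bedding.txt")
    dip_direction, dip = planes[:, 0], planes[:, 1]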
NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4
2012-09-26
defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType>... interleaved by line (BIL); band interleaved by pixel (BIP). Element RasterFileFormatType/DataType: type restriction of xsd:string; facets
Sharing electronic structure and crystallographic data with ETSF_IO
NASA Astrophysics Data System (ADS)
Caliste, D.; Pouillon, Y.; Verstraete, M. J.; Olevano, V.; Gonze, X.
2008-11-01
We present a library of routines whose main goal is to read and write exchangeable files (NetCDF file format) storing electronic structure and crystallographic information. It is based on the specification agreed inside the European Theoretical Spectroscopy Facility (ETSF). Accordingly, this library is nicknamed ETSF_IO. The purpose of this article is to give both an overview of the ETSF_IO library and a closer look at its usage. ETSF_IO is designed to be robust and easy to use, close to Fortran read and write routines. To facilitate its adoption, a complete documentation of the input and output arguments of the routines is available in the package, as well as six tutorials explaining in detail various possible uses of the library routines. Catalogue identifier: AEBG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License No. of lines in distributed program, including test data, etc.: 63 156 No. of bytes in distributed program, including test data, etc.: 363 390 Distribution format: tar.gz Programming language: Fortran 95 Computer: All systems with a Fortran95 compiler Operating system: All systems with a Fortran95 compiler Classification: 7.3, 8 External routines: NetCDF, http://www.unidata.ucar.edu/software/netcdf Nature of problem: Store and exchange electronic structure data and crystallographic data independently of the computational platform, language and generating software Solution method: Implement a library based both on the NetCDF file format and an open specification (http://etsf.eu/index.php?page=standardization)
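Because the container is plain NetCDF, files written with ETSF_IO can be inspected from other languages too; for example, with the netCDF4 Python module (shown for illustration; ETSF_IO itself is the Fortran 95 interface, and the file name is a placeholder):

    # Inspect an ETSF NetCDF file from Python (illustration only).
    from netCDF4 import Dataset

    with Dataset("crystal_data.nc") as nc:
        print(list(nc.dimensions))   # dimensions defined by the specification
        print(list(nc.variables))    # stored electronic-structure arrays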
75 FR 27335 - Combined Notice of Filings # 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... Electric Company submits updated market power study. Filed Date: 04/23/2010. Accession Number: 20100427...: ER10-1179-000. Applicants: American Electric Power Service Corporation. Description: Request of American Electric Power Service Corporation to Update Depreciation Expense Inputs in Formula Rate. Filed...
An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files
Chan, Anthony; Gropp, William; Lusk, Ewing
2008-01-01
A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time that is independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time, and we describe experiments demonstrating the performance of this file format.
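The central idea, bucketing events into a hierarchy of time intervals so that a window lookup touches only the nodes overlapping that window, can be sketched compactly. This toy illustrates the access pattern only; the paper's actual file format and annotation scheme are richer:

    # Toy hierarchical time-interval binning (illustrates the idea only).
    def build_tree(events, t0, t1, max_leaf=64):
        """events: list of (time, payload) with t0 <= time < t1."""
        node = {"span": (t0, t1), "events": events, "children": []}
        if len(events) > max_leaf and (t1 - t0) > 1e-9:
            mid = (t0 + t1) / 2.0
            node["events"] = []
            node["children"] = [
                build_tree([e for e in events if e[0] < mid], t0, mid, max_leaf),
                build_tree([e for e in events if e[0] >= mid], mid, t1, max_leaf),
            ]
        return node

    def query(node, a, b, out=None):
        """Collect events in [a, b]; skips subtrees disjoint from the window."""
        out = [] if out is None else out
        s0, s1 = node["span"]
        if s1 < a or s0 > b:
            return out
        out.extend(e for e in node["events"] if a <= e[0] <= b)
        for child in node["children"]:
            query(child, a, b, out)
        return out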
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjaardema, Gregory
2010-08-06
Conjoin is a code for joining sequentially in time multiple exodusII database files. It is used to create a single results or restart file from multiple results or restart files, which typically arise as the result of multiple restarted analyses. The resulting output file will be the union of the input files, with a status variable indicating the status of each element at the various time planes. Applications include combining multiple exodusII files arising from a restarted analysis, or from a finite element analysis with dynamic topology changes.
Interfacing 1990 US Census TIGER map files with New S graphics software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzardi, M.; Mohr, M.S.; Merrill, D.W.
1992-07-01
In 1990, the United States Bureau of the Census released detailed geographic base files known as TIGER/Line (Topologically Integrated Geographic Encoding and Referencing) which contain detail on the physical features and census tract boundaries of every county in the United States. The TIGER database is attractive for two reasons. First, it is publicly available through the Bureau of the Census on tape or cd-rom for a minimal fee. Second, it contains 24 billion characters of data which describe geographic features of interest to the Census Bureau such as coastlines, hydrography, transportation networks, political boundaries, etc. Unfortunately, the large TIGER database only provides raw alphanumeric data; no utility software, graphical or otherwise, is included. On the other hand New S, a popular statistical software package by AT&T, has easily operated functions that permit advanced graphics in conjunction with data analysis. New S has the ability to plot contours, lines, segments, and points. However, of special interest is the New S function map and its options. Using the map function, which requires polygons as input, census tracts can be quickly selected, plotted, shaded, etc. New S graphics combined with the TIGER database has obvious potential. This paper reports on our efforts to use the TIGER map files with New S, especially to construct census tract maps of counties. While census tract boundaries are inherently polygonal, they are not organized as such in the TIGER database. This conversion of the TIGER "line" format into New S "polygon/polyline" format is one facet of the work reported here. Also we discuss the selection and extraction of auxiliary geographic information from TIGER files for graphical display using New S.
The Open Spectral Database: an open platform for sharing and searching spectral data.
Chalk, Stuart J
2016-01-01
A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. Even so, searching and retrieving such spectral data can be time-consuming, and the data can be difficult to reuse if they are compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, and searching based on multiple chemical identifiers, and that is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB), http://osdb.info/, has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PhpStorm), a PHP framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities needed to easily develop REST-based websites for the ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
Neo: an object model for handling electrophysiology data in multiple formats
Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.
2014-01-01
Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. PMID:24600386
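A taste of the object model in code, as a minimal sketch using the public neo and quantities packages (the sample values are invented; real data would normally come through one of the IO modules):

    # Build an analog signal in memory with Neo's object model.
    # Values are invented; real data would come from a neo.io reader
    # for one of the supported vendor or generic file formats.
    import quantities as pq
    import neo

    signal = neo.AnalogSignal([[1.0], [1.2], [0.9]] * pq.mV,
                              sampling_rate=10 * pq.kHz)
    print(signal.times)  # time base derived from the sampling rate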
Carle, S.F.; Glen, J.M.; Langenheim, V.E.; Smith, R.B.; Oliver, H.W.
1990-01-01
The report presents the principal facts for gravity stations compiled for Yellowstone National Park and vicinity. The gravity data were compiled from three sources: Defense Mapping Agency, University of Utah, and U.S. Geological Survey. Part A of the report is a paper copy describing how the compilation was done and presenting the data in tabular format as well as a map; part B is a 5-1/4 inch floppy diskette containing only the data files in ASCII format. Requirements for part B: IBM PC or compatible, DOS v. 2.0 or higher. Files contained on this diskette: DOD.ISO -- File containing the principal facts of the 514 gravity stations obtained from the Defense Mapping Agency. The data are in Plouff format (see file PFTAB.TXT). UTAH.ISO -- File containing the principal facts of 153 gravity stations obtained from the University of Utah. Data are in Plouff format. USGS.ISO -- File containing the principal facts of 27 gravity stations collected by the U.S. Geological Survey in July 1987. Data are in Plouff format. PFTAB.TXT -- File containing explanation of principal fact format. ACC.TXT -- File containing explanation of accuracy codes.
HAL/S-FC and HAL/S-360 compiler system program description
NASA Technical Reports Server (NTRS)
1976-01-01
The compiler is a large, multi-phase design that can be broken into four phases: Phase 1 inputs the source language and performs syntactic and semantic analysis, generating the source listing, a file of instructions in an internal format (HALMAT), and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine-independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer, plus a large collection of routines for implementing various facilities of the language.
Pressure Ratio to Thermal Environments
NASA Technical Reports Server (NTRS)
Lopez, Pedro; Wang, Winston
2012-01-01
The pressure ratio to thermal environments program (PRatTlE.pl) is a Perl code that estimates heating at requested body-point locations by scaling the heating at a reference location by a pressure-ratio factor. The pressure-ratio factor is the ratio of the local pressures at the requested and reference points, taken from CFD (computational fluid dynamics) solutions. This innovation provides pressure-ratio-based thermal environments in an automated and traceable manner. Previously, the pressure-ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE can calculate heating environments for 150 body points in less than two minutes. PRatTlE is written in the Perl programming language, is command-line-driven, and has been successfully executed on both HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input-file format verification, which allows clear visibility into the input data structure and intermediate calculations.
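The scaling rule at the heart of this approach is compact enough to illustrate directly. The following Python fragment is a minimal sketch of pressure-ratio scaling under the assumption that heating scales linearly with the local pressure ratio; the function and variable names are illustrative and are not taken from the Perl code.

    # Hedged sketch of pressure-ratio scaling of thermal environments.
    # q_ref: heating at the reference body point; p_bp/p_ref: local pressure
    # ratio between the requested and reference points from a CFD solution.
    def scale_heating(q_ref, p_ref, p_bp):
        return q_ref * (p_bp / p_ref)

    q_ref = 12.0                                  # W/cm^2, illustrative value
    ratios = {"bp101": 0.82, "bp102": 1.15}       # assumed p_bp/p_ref values
    for name, r in ratios.items():
        print(name, scale_heating(q_ref, 1.0, r))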
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Silcox, Richard (Technical Monitor)
2001-01-01
A location and positioning system was developed and implemented in the anechoic chamber of the Structural Acoustics Loads and Transmission (SALT) facility to accurately determine the coordinates of points in three-dimensional space. Transfer functions were measured between a shaker source at two different panel locations and the vibrational response distributed over the panel surface using a scanning laser vibrometer. The binaural simulation test matrix included test runs for several locations of the measuring microphones, various attitudes of the mannequin, two locations of the shaker excitation and three different shaker inputs including pulse, broadband random, and pseudo-random. Transfer functions, auto spectra, and coherence functions were acquired for the pseudo-random excitation. Time histories were acquired for the pulse and broadband random input to the shaker. The tests were repeated with a reflective surface installed. Binary data files were converted to universal format and archived on compact disk.
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
XML-Based Generator of C++ Code for Integration With GUIs
NASA Technical Reports Server (NTRS)
Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard
2003-01-01
An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increases in susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that input data are stored in a structured manner. More importantly, XML allows not just the storage of data but also a description of what each data item is. The XML file thus contains information useful for rendering the data in other applications. The program then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: 1. As an executable, it generates the corresponding C++ classes, and 2. As a library, it automatically fills the objects with the input data values.
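The generation step described above can be illustrated with a small sketch. The XML element names below are invented for illustration (the actual XML-to-C schema is not reproduced here), and the emitted C++ is deliberately minimal; the point is only the pattern of specifying inputs once in XML and deriving the C++ data structure from it.

    import xml.etree.ElementTree as ET

    # Hypothetical input description; real XML-to-C documents differ.
    XML = """<inputs>
      <param name="n_atoms" type="int"    default="0"/>
      <param name="bias_v"  type="double" default="0.0"/>
    </inputs>"""

    root = ET.fromstring(XML)
    lines = ["struct SimulationInputs {"]
    for p in root.findall("param"):
        lines.append(f"    {p.get('type')} {p.get('name')} = {p.get('default')};")
    lines.append("};")
    print("\n".join(lines))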
Avogadro: an advanced semantic chemical editor, visualization, and analysis platform
2012-01-01
Background The Avogadro project has developed an advanced molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas. It offers flexible, high quality rendering, and a powerful plugin architecture. Typical uses include building molecular structures, formatting input files, and analyzing output of a wide variety of computational chemistry packages. By using the CML file format as its native document type, Avogadro seeks to enhance the semantic accessibility of chemical data types. Results The work presented here details the Avogadro library, which is a framework providing a code library and application programming interface (API) with three-dimensional visualization capabilities; and has direct applications to research and education in the fields of chemistry, physics, materials science, and biology. The Avogadro application provides a rich graphical interface using dynamically loaded plugins through the library itself. The application and library can each be extended by implementing a plugin module in C++ or Python to explore different visualization techniques, build/manipulate molecular structures, and interact with other programs. We describe some example extensions, one which uses a genetic algorithm to find stable crystal structures, and one which interfaces with the PackMol program to create packed, solvated structures for molecular dynamics simulations. The 1.0 release series of Avogadro is the main focus of the results discussed here. Conclusions Avogadro offers a semantic chemical builder and platform for visualization and analysis. For users, it offers an easy-to-use builder, integrated support for downloading from common databases such as PubChem and the Protein Data Bank, extracting chemical data from a wide variety of formats, including computational chemistry output, and native, semantic support for the CML file format. For developers, it can be easily extended via a powerful plugin mechanism to support new features in organic chemistry, inorganic complexes, drug design, materials, biomolecules, and simulations. Avogadro is freely available under an open-source license from http://avogadro.openmolecules.net. PMID:22889332
Avogadro: an advanced semantic chemical editor, visualization, and analysis platform.
Hanwell, Marcus D; Curtis, Donald E; Lonie, David C; Vandermeersch, Tim; Zurek, Eva; Hutchison, Geoffrey R
2012-08-13
Introducing ADES: A New IAU Astrometry Data Exchange Standard
NASA Astrophysics Data System (ADS)
Chesley, Steven R.; Hockney, George M.; Holman, Matthew J.
2017-10-01
For several decades, small body astrometry has been exchanged, distributed and archived in the form of 80-column ASCII records. As a replacement for this obsolescent format, we have worked with a number of members of the community to develop the Astrometric Data Exchange Standard (ADES), which was formally adopted by IAU Commission 20 in August 2015 at the XXIX General Assembly in Honolulu, Hawaii. The purpose of ADES is to ensure that useful and available observational information is submitted, archived, and disseminated as needed. Availability of more complete information will allow orbit computers to process the data more correctly, leading to improved accuracy and reliability of orbital fits. In this way, it will be possible to fully exploit the improving accuracy and increasing number of both optical and radar observations. ADES overcomes several limitations of the previous format by allowing characterization of astrometric and photometric errors, adequate precision in time and angle fields, and flexibility and extensibility. To accommodate a diverse base of users, from automated surveys to hands-on follow-up observers, the ADES protocol allows for two file formats, eXtensible Markup Language (XML) and Pipe-Separated Values (PSV). Each format carries the same information, and simple tools allow users to losslessly transform back and forth between XML and PSV. We have further developed and refined ADES since it was first announced in July 2015 [1]. The proposal at that time [2] has undergone several modest revisions to aid validation and avoid overloaded fields. We now have validation schema and file transformation utilities. Suitable example files, test suites, and input/output libraries in a number of modern programming languages are now available. Acknowledgements: Useful feedback during the development of ADES has been received from numerous colleagues in the community of observers and orbit specialists working on asteroids, comets, and planetary satellites. References: [1] Chesley, S.R. (2015) M.P.E.C. 2015-O06. [2] http://minorplanetcenter.net/iau/info/IAU2015_ADES.pdf
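The lossless XML/PSV duality is straightforward to demonstrate. The sketch below converts a PSV block into equivalent XML with Python's standard library; the field and element names are illustrative stand-ins, since the authoritative names are fixed by the ADES schema itself.

    import xml.etree.ElementTree as ET

    # Illustrative PSV block; ADES defines the authoritative field names.
    psv = "trkSub|mode|ra|dec\nK15X01A|CCD|215.10254|-12.04567"
    header, *rows = psv.splitlines()
    fields = header.split("|")

    block = ET.Element("obsBlock")
    for row in rows:
        rec = ET.SubElement(block, "optical")
        for name, value in zip(fields, row.split("|")):
            ET.SubElement(rec, name).text = value
    print(ET.tostring(block, encoding="unicode"))

Because every PSV cell maps to exactly one XML element, the reverse transformation recovers the original PSV without loss.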
Mapping DICOM to OpenDocument format
NASA Astrophysics Data System (ADS)
Yu, Cong; Yao, Zhihong
2009-02-01
In order to enhance the readability, extensibility, and sharing of DICOM files, we previously introduced XML into the DICOM file system (SPIE Volume 5748) [1] and a multilayer tree structure into DICOM (SPIE Volume 6145) [2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), for it is also based on XML. As a result, the new format realizes the separation of content (including text and images) from display style. Meanwhile, since OpenDocument files are ZIP-compressed archives, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
The expected results method for data verification
NASA Astrophysics Data System (ADS)
Monday, Paul
2016-05-01
The credibility of United States Army analytical experiments using distributed simulation depends on the quality of the simulation, the pedigree of the input data, and the appropriateness of the simulation system to the problem. The second of these factors is best met by using classified performance data from the Army Materiel Systems Analysis Activity (AMSAA) for essential battlefield behaviors, like sensors, weapon fire, and damage assessment. Until recently, using classified data has been a time-consuming and expensive endeavor: it requires significant technical expertise to load, and it is difficult to verify that it works correctly. Fortunately, new capabilities, tools, and processes are available that greatly reduce these costs. This paper will discuss these developments, a new method to verify that all of the components are configured and operate properly, and the application to recent Army Capabilities Integration Center (ARCIC) experiments. Recent developments have focused on improving the process of loading the data. OneSAF has redesigned its input data file formats and structures so that they correspond exactly with the Standard File Format (SFF) defined by AMSAA, ARCIC developed a library of supporting configurations that correlate directly to the AMSAA nomenclature, and the Entity Validation Tool was designed to quickly execute the essential models with a test-jig approach to identify problems with the loaded data. The missing part of the process is provided by the new Expected Results Method. Instead of the usual subjective assessment of quality, e.g., "It looks about right to me", this new approach compares the performance of a combat model with authoritative expectations to quickly verify that the model, data, and simulation are all working correctly. Integrated together, these developments now make it possible to use AMSAA classified performance data with minimal time and maximum assurance that the experiment's analytical results will be of the highest quality possible.
18 CFR 50.3 - Applications/pre-filing; rules and format.
Code of Federal Regulations, 2010 CFR
2010-04-01
... filings must be signed in compliance with § 385.2005 of this chapter. (e) The Commission will conduct a... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Applications/pre-filing... INTERSTATE ELECTRIC TRANSMISSION FACILITIES § 50.3 Applications/pre-filing; rules and format. (a) Filings are...
Arkansas and Louisiana Aeromagnetic and Gravity Maps and Data - A Website for Distribution of Data
Bankey, Viki; Daniels, David L.
2008-01-01
This report contains digital data, image files, and text files describing data formats for aeromagnetic and gravity data used to compile the State aeromagnetic and gravity maps of Arkansas and Louisiana. The digital files include grids, images, ArcInfo, and Geosoft compatible files. In some of the data folders, ASCII files with the extension 'txt' describe the format and contents of the data files. Read the 'txt' files before using the data files.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
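The kind of statistic EFASC computes can be sketched independently of its Excel/VBA implementation. The fragment below assumes a hypothetical two-column input file of "YYYY-MM-DD flow" lines and computes a few representative daily-flow statistics; EFASC's own input conventions and statistic definitions should be checked against its documentation.

    import statistics

    def flow_stats(path):
        flows = []
        with open(path) as f:
            for line in f:
                date, value = line.split()     # assumed two-column layout
                flows.append(float(value))
        seven_day = [sum(flows[i:i + 7]) / 7 for i in range(len(flows) - 6)]
        return {
            "mean": statistics.mean(flows),
            "median": statistics.median(flows),
            "min": min(flows),
            "max": max(flows),
            "min 7-day mean": min(seven_day),  # a common low-flow statistic
        }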
Sanders, Michael J.; Markstrom, Steven L.; Regan, R. Steven; Atkinson, R. Dwight
2017-09-15
A module for simulation of daily mean water temperature in a network of stream segments has been developed as an enhancement to the U.S. Geological Survey Precipitation Runoff Modeling System (PRMS). This new module is based on the U.S. Fish and Wildlife Service Stream Network Temperature model, a mechanistic, one-dimensional heat transport model, and is integrated into PRMS. Stream-water temperature simulation is activated by selection of the appropriate input flags in the PRMS Control File and by providing the necessary additional inputs in standard PRMS input files. This report includes a comprehensive discussion of the methods relevant to the stream temperature calculations and detailed instructions for model input preparation.
Engineering description of the ascent/descent bet product
NASA Technical Reports Server (NTRS)
Seacord, A. W., II
1986-01-01
The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and the Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.
A mass spectrometry proteomics data management platform.
Sharma, Vagisha; Eng, Jimmy K; MacCoss, Michael J; Riffle, Michael
2012-09-01
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
Mass spectrometer output file format mzML.
Deutsch, Eric W
2010-01-01
Mass spectrometry is an important technique for analyzing proteins and other biomolecular compounds in biological samples. Each of the vendors of these mass spectrometers uses a different proprietary binary output file format, which has hindered data sharing and the development of open source software for downstream analysis. The solution has been to develop, with the full participation of academic researchers as well as software and hardware vendors, an open XML-based format for encoding mass spectrometer output files, and then to write software to use this format for archiving, sharing, and processing. This chapter presents the various components and information available for this format, mzML. In addition to the XML schema that defines the file structure, a controlled vocabulary provides clear terms and definitions for the spectral metadata, and a semantic validation rules mapping file allows the mzML semantic validator to ensure that an mzML document complies with one of several levels of requirements. Complete documentation and example files ensure that the format may be uniformly implemented. At the time of release, there already existed several implementations of the format, and vendors had committed to supporting the format in their products.
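At the file level, mzML is ordinary XML, so its spectral metadata can be walked with standard tools. The sketch below collects the MS level of each spectrum using incremental parsing; the element and attribute names follow the published mzML conventions as best understood here, and should be verified against the official schema before serious use.

    import xml.etree.ElementTree as ET

    def ms_levels(path):
        levels = []
        for _, elem in ET.iterparse(path):
            if elem.tag.endswith("}spectrum"):
                for cv in elem.iter():
                    if cv.tag.endswith("}cvParam") and cv.get("name") == "ms level":
                        levels.append(cv.get("value"))
                elem.clear()        # keep memory bounded on large files
        return levels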
Intelligent Patching of Conceptual Geometry for CFD Analysis
NASA Technical Reports Server (NTRS)
Li, Wu
2010-01-01
The iPatch computer code for intelligently patching surface grids was developed to convert conceptual geometry to computational fluid dynamics (CFD) geometry. It automatically uses bicubic B-splines to extrapolate (if necessary) each surface in a conceptual geometry so that all the independently defined geometric components (such as wing and fuselage) can be intersected to form a watertight CFD geometry. The software also computes the intersection curves of surface patches at any resolution (up to 10^-4 accuracy) specified by the user, and it writes the B-spline surface patches, and the corresponding boundary points, for the watertight CFD geometry in a format that can be directly used by the grid generation tool VGRID. iPatch requires that input geometry be in PLOT3D format, where each component surface is defined by a rectangular grid {(x(i,j), y(i,j), z(i,j)) : 1 ≤ i ≤ m, 1 ≤ j ≤ n} that represents a smooth B-spline surface. All surfaces in the PLOT3D file conceptually represent a watertight geometry of components of an aircraft on the half-space y ≥ 0. Overlapping surfaces are not allowed, but can be fixed by the utility code "fixp3d". The fixp3d utility code first finds the two grid lines on the two surface grids that are closest to each other in Hausdorff distance (a metric measuring the discrepancy between two sets); it then uses one of the grid lines as the transition line, extending grid lines on one grid to the other grid to form a merged grid. Any two connecting surfaces shall have a "visually" common boundary curve, or can be described by an intersection relationship defined in a geometry specification file. The intersection of two surfaces can be at a conceptual level. However, the intersection is directional (along either the i or j index direction), and each intersecting grid line (or its spline extrapolation) on the first surface should intersect the second surface. No two intersection relationships will result in a common intersection point of three surfaces. The output files of iPatch are IGES, d3m, and mapbc files that define the CFD geometry in VGRID format. The IGES file gives the NURBS definition of the outer mold line in the geometry. The d3m file defines how the outer mold line is broken into surface patches whose boundary curves are defined by points. The mapbc file specifies what the boundary condition is on each patch and the corresponding NURBS surface definition of each non-planar patch in the IGES file.
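The Hausdorff distance used by the fixp3d utility has a compact definition: the largest of all nearest-neighbor distances between two point sets. A pure-Python sketch (illustrative, not the fixp3d code) for two polylines given as point lists:

    import math

    def directed_hausdorff(a, b):
        return max(min(math.dist(p, q) for q in b) for p in a)

    def hausdorff(a, b):
        return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

    line1 = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
    line2 = [(0, 0.1, 0), (1, 0.1, 0), (2, 0.2, 0)]
    print(hausdorff(line1, line2))   # 0.2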
Kernodle, J.M.
1996-01-01
This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.
Software system for data management and distributed processing of multichannel biomedical signals.
Franaszczuk, P J; Jouny, C C
2004-01-01
The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about location in a storage system, and 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computing and output data on cluster nodes. The system allows for processing multichannel time-series data in multiple binary formats. Descriptions of the data format, channels, and time of recording are included in separate text files. Definition and selection of multiple channel montages is possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal-processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of nodes in the cluster used for computations and the amount of storage can be changed with no major modification to the software. Current implementations include time-frequency analysis of multiday, multichannel recordings of intracranial EEG of epileptic patients as well as evoked-response analyses of repeated cognitive tasks.
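The input/output library's job, reading multichannel binary time series according to a separate text description, can be sketched as follows. The frame layout here (interleaved little-endian 16-bit samples) is invented for illustration; the actual system supports multiple binary formats described in its own side-car text files.

    import struct

    def read_channels(data_path, n_channels, picks, max_frames=1000):
        """Return [frame][picked channel] samples from an interleaved file."""
        frames = []
        frame_size = 2 * n_channels              # int16 samples per frame
        with open(data_path, "rb") as f:
            for _ in range(max_frames):
                raw = f.read(frame_size)
                if len(raw) < frame_size:
                    break
                sample = struct.unpack(f"<{n_channels}h", raw)
                frames.append([sample[i] for i in picks])
        return frames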
DOE Office of Scientific and Technical Information (OSTI.GOV)
Originally developed in 1999, EnergyPlus™ received an updated version 8.8.0 with bug fixes on September 30th, 2017. EnergyPlus is a whole-building energy simulation program that engineers, architects, and researchers use to model both energy consumption—for heating, cooling, ventilation, lighting and plug and process loads—and water use in buildings. EnergyPlus is a console-based program that reads input and writes output to text files. It ships with a number of utilities including IDF-Editor for creating input files using a simple spreadsheet-like interface, EP-Launch for managing input and output files and performing batch simulations, and EP-Compare for graphically comparing the results of two or more simulations. Several comprehensive graphical interfaces for EnergyPlus are also available. DOE does most of its work with EnergyPlus using the OpenStudio® software development kit and suite of applications. DOE releases major updates to EnergyPlus twice annually.
Input Files and Procedures for Analysis of SMA Hybrid Composite Beams in MSC.Nastran and ABAQUS
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Patel, Hemant D.
2005-01-01
A thermoelastic constitutive model for shape memory alloys (SMAs) and SMA hybrid composites (SMAHCs) was recently implemented in the commercial codes MSC.Nastran and ABAQUS. The model is implemented and supported within the core of the commercial codes, so no user subroutines or external calculations are necessary. The model and resulting structural analysis has been previously demonstrated and experimentally verified for thermoelastic, vibration and acoustic, and structural shape control applications. The commercial implementations are described in related documents cited in the references, where various results are also shown that validate the commercial implementations relative to a research code. This paper is a companion to those documents in that it provides additional detail on the actual input files and solution procedures and serves as a repository for ASCII text versions of the input files necessary for duplication of the available results.
LTCP 2D Graphical User Interface. Application Description and User's Guide
NASA Technical Reports Server (NTRS)
Ball, Robert; Navaz, Homayun K.
1996-01-01
A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) 2-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.
Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data
NASA Technical Reports Server (NTRS)
Schairer, Edward T.
2001-01-01
'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
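Radiometric PSP conversion of the kind Legato performs is commonly based on a Stern-Volmer-type calibration relating the intensity ratio to the pressure ratio. The sketch below uses that generic form with illustrative coefficients; Legato's actual calibration equations and coefficient handling are not reproduced here.

    # Stern-Volmer form: I_ref / I = A + B * (P / P_ref)
    def pressure_from_intensity(i_ref, i, p_ref, A=0.2, B=0.8):
        return p_ref * ((i_ref / i) - A) / B

    # Illustrative: a 10% intensity drop at p_ref = 101.3 kPa
    print(pressure_from_intensity(i_ref=1.0, i=0.9, p_ref=101.3))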
A program to generate a Fortran interface for a C++ library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Lee
Shroud is a utility to create a Fortran and C interface for a C++ library. An existing C++ library API is described in an input file. Shroud reads the file and creates source files which can be compiled to provide a Fortran API for the library.
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
ZOOM Lite: next-generation sequencing data mapping and visualization software
Zhang, Zefeng; Lin, Hao; Ma, Bin
2010-01-01
High-throughput next-generation sequencing technologies pose increasing demands on the efficiency, accuracy and usability of data analysis software. In this article, we present ZOOM Lite, software for efficient reads mapping and result visualization. With a kernel capable of mapping tens of millions of Illumina or AB SOLiD sequencing reads efficiently and accurately, and an intuitive graphical user interface, ZOOM Lite integrates reads mapping and result visualization into an easy-to-use pipeline on a desktop PC. The software handles both single-end and paired-end reads, and can output either the unique mapping result or the top N mapping results for each read. Additionally, the software takes a variety of input file formats and outputs to several commonly used result formats. The software is freely available at http://bioinfor.com/zoom/lite/. PMID:20530531
The program complex for vocal recognition
NASA Astrophysics Data System (ADS)
Konev, Anton; Kostyuchenko, Evgeny; Yakimuk, Alexey
2017-01-01
This article discusses the possibility of applying an algorithm for determining the pitch frequency to note recognition problems. A preliminary study of analogous programs offering a "music recognition" function was carried out. A software package based on the algorithm for pitch-frequency calculation was implemented and tested. It was shown that the algorithm allows recognition of the notes in the user's vocal performance. A single musical instrument, a set of musical instruments, or a human voice humming a tune can be the sound source. The input file is initially presented in the .wav format or is recorded in this format from a microphone. Processing is performed by sequentially determining the pitch frequency and converting its values to notes. Based on the test results, modification of the algorithms used in the complex was planned.
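Mapping a measured pitch frequency to a note is a standard equal-temperament calculation, sketched below (a generic method, not necessarily the algorithm used in the paper): the frequency is converted to a MIDI note number relative to A4 = 440 Hz and rounded to the nearest semitone.

    import math

    NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def note_name(freq_hz):
        midi = round(69 + 12 * math.log2(freq_hz / 440.0))
        return f"{NAMES[midi % 12]}{midi // 12 - 1}"

    print(note_name(261.63))   # C4
    print(note_name(440.0))    # A4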
Qian, Li Jun; Zhou, Mi; Xu, Jian Rong
2008-07-01
The objective of this article is to explain an easy and effective approach for managing radiologic files in portable document format (PDF) using iTunes. PDF files are widely used as a standard file format for electronic publications as well as for medical online documents. Unfortunately, there is a lack of powerful software to manage numerous PDF documents. In this article, we explain how to use the hidden function of iTunes (Apple Computer) to manage PDF documents as easily as managing music files.
Structural/aerodynamic Blade Analyzer (SAB) User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Morel, M. R.
1994-01-01
The structural/aerodynamic blade (SAB) analyzer provides an automated tool for the static-deflection analysis of turbomachinery blades with aerodynamic and rotational loads. A structural code calculates a deflected blade shape using aerodynamic loads input. An aerodynamic solver computes aerodynamic loads using deflected blade shape input. The two programs are iterated automatically until deflections converge. Currently, SAB version 1.0 is interfaced with MSC/NASTRAN to perform the structural analysis and PROP3D to perform the aerodynamic analysis. This document serves as a guide for the operation of the SAB system with specific emphasis on its use at NASA Lewis Research Center (LeRC). This guide consists of six chapters: an introduction which gives a summary of SAB; SAB's methodology, component files, links, and interfaces; input/output file structure; setup and execution of the SAB files on the Cray computers; hints and tips to advise the user; and an example problem demonstrating the SAB process. In addition, four appendices are presented to define the different computer programs used within the SAB analyzer and describe the required input decks.
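The automated iteration SAB performs is a fixed-point loop on blade shape. A minimal sketch follows, with structural_solve and aero_solve standing in for the external MSC/NASTRAN and PROP3D runs (the convergence test and tolerance are illustrative):

    def coupled_solve(structural_solve, aero_solve, shape0, tol=1e-4, max_iter=50):
        shape = shape0
        loads = aero_solve(shape)
        for _ in range(max_iter):
            new_shape = structural_solve(loads)          # deflection from loads
            change = max(abs(a - b) for a, b in zip(new_shape, shape))
            shape, loads = new_shape, aero_solve(new_shape)
            if change < tol:                             # deflections converged
                return shape
        raise RuntimeError("blade deflection did not converge")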
Automated system for generation of soil moisture products for agricultural drought assessment
NASA Astrophysics Data System (ADS)
Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices, and models are being used globally to forecast or give early warning of drought and to monitor its prevalence, persistence, and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources such as space-based data, ground data, and collateral data at short intervals of time, where there may be limitations in processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. A soil-water-balance bucket model is in vogue for arriving at soil moisture products and is widely popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open-source libraries for the best possible automation, to fulfill the need for a standard procedure for preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for generation of soil moisture products, allowing users to concentrate on further enhancements and on applying these parameters in related areas of research without re-discovering the established models. The architecture relies mainly on available open-source libraries for GIS and raster I/O operations on different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. The system is further automated, to the extent of user-free operation if required, with inbuilt chain processing for everyday generation of products at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters such as rainfall and potential evapotranspiration (PET) from the respective servers. It can import file formats such as .grd, .hdf, .img, and generic binary; perform geometric correction; and re-project the files to the native projection system. The software takes into account weather, crop, and soil parameters to run the designed soil-water-balance model. The software also has additional features such as time-compositing of outputs to generate weekly and fortnightly profiles for further analysis. A tool to generate "Area Favorable for Crop Sowing" from the daily soil moisture, with a highly customizable parameter interface, has also been provided. A whole-of-India analysis now takes a mere 20 seconds to generate soil moisture products, a task that would normally take one hour per day using commercial software.
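The bucket model at the core of such systems reduces to a simple daily water balance. A one-bucket sketch, with illustrative parameters rather than the operational configuration:

    def bucket_model(rain_mm, pet_mm, s_max=150.0, s0=75.0):
        """Daily soil-water balance; returns (storage, runoff) per day."""
        s, out = s0, []
        for p, pet in zip(rain_mm, pet_mm):
            et = pet * (s / s_max)            # moisture-limited actual ET
            s = s + p - et
            runoff = max(0.0, s - s_max)      # excess drains as runoff
            s = min(max(s, 0.0), s_max)
            out.append((s, runoff))
        return out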
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-23
... recommends not more than 32 characters). DO NOT convert Word files or Excel files into PDF format. Converting... not allow HUD to enter data from the Excel files into a database. DO NOT save your logic model in .xlsm format. If necessary save as an Excel 97-2003 .xls format. Using the .xlsm format can result in a...
BOREAS TE-19 Ecosystem Carbon Balance Model
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Papagno, Andrea (Editor); Frolking, Steve
2000-01-01
The BOREAS TE-19 team developed a model called the Spruce and Moss Model (SPAM) designed to simulate the daily carbon balance of a black spruce/moss boreal forest ecosystem. It is driven by daily weather conditions, and consists of four components: (1) soil climate, (2) tree photosynthesis and respiration, (3) moss photosynthesis and respiration, and (4) litter decomposition and associated heterotrophic respiration. The model simulates tree gross and net photosynthesis, wood respiration, live root respiration, moss gross and net photosynthesis, and heterotrophic respiration (decomposition of root litter, young needle and moss litter, and humus). These values can be combined to generate predictions of total site net ecosystem exchange of carbon (NEE), total soil dark respiration (live roots + heterotrophs + live moss), spruce and moss net productivity, and net carbon accumulation in the soil. To date, simulations have been of the BOREAS NSA-OBS and SSA-OBS tower sites, from 1968-95 (except 1990-93). The files include source code and sample input and output files in ASCII format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
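The way such component fluxes combine into site-level quantities can be made explicit with a short bookkeeping sketch; the sign convention and grouping below are a generic choice, and SPAM's exact definitions may differ.

    def site_carbon(gpp_tree, gpp_moss, r_wood, r_root, r_moss, r_hetero):
        npp = (gpp_tree + gpp_moss) - (r_wood + r_root + r_moss)
        nee = npp - r_hetero       # here positive = net carbon uptake by site
        soil_dark_resp = r_root + r_hetero + r_moss
        return npp, nee, soil_dark_resp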
Web-based document and content management with off-the-shelf software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuster, J
1999-03-18
This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to somehow build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe PhotoShop and Illustrator files in their native formats.
NASA Technical Reports Server (NTRS)
Hiltner, Dale W.
2000-01-01
The TAILSIM program uses a 4th-order Runge-Kutta method to integrate the standard aircraft equations-of-motion (EOM). The EOM determine three translational and three rotational accelerations about the aircraft's body-axis reference system. The forces and moments that drive the EOM are determined from aerodynamic coefficients, dynamic derivatives, and control inputs. Values for these terms are determined from linear interpolation of tables that are a function of parameters such as angle-of-attack and surface deflections. Buildup equations combine these terms and dimensionalize them to generate the driving total forces and moments. Features that make TAILSIM applicable to studies of tailplane stall include modeling of the reversible control system, modeling of the pilot performing a load factor and/or airspeed command task, and modeling of vertical gusts. The reversible control system dynamics can be described as two hinged masses connected by a spring, resulting in a fifth-order system. The pilot model is a standard form of lead-lag with a time delay applied to an integrated pitch rate and/or airspeed error feedback. The time delay is implemented by a Pade approximation, while the commanded pitch rate is determined by a commanded load factor. Vertical gust inputs include a single 1-cosine gust and a continuous NASA Dryden gust model. These dynamic models, coupled with the use of a nonlinear database, allow the tailplane stall characteristics, elevator response, and resulting aircraft response to be modeled. A useful output capability of the TAILSIM program is the ability to display multiple post-run plot pages to allow a quick assessment of the time history response. There are 16 plot pages currently available to the user. Each plot page displays 9 parameters. Each parameter can also be displayed individually, in a one-plot-per-page format. For a more refined display of the results, the program can also create files of tabulated data, which can then be used by other plotting programs. The TAILSIM program was written straightforwardly, assuming the user would want to change the database tables, the buildup equations, the output parameters, and the pilot model parameters. A separate database file and input file are automatically read in by the program. The use of an include file to set up all common blocks facilitates easy changing of parameter names and array sizes.
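The integration scheme named above is the classical 4th-order Runge-Kutta method. A generic single-step sketch for a state vector x with derivative function f(t, x) (not TAILSIM's own code):

    def rk4_step(f, t, x, h):
        k1 = f(t, x)
        k2 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
        k3 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
        k4 = f(t + h,     [xi + h * ki for xi, ki in zip(x, k3)])
        return [xi + h / 6 * (a + 2 * b + 2 * c + d)
                for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]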
The conical scanner evaluation system design
NASA Technical Reports Server (NTRS)
Cumella, K. E.; Bilanow, S.; Kulikov, I. B.
1982-01-01
The software design for the conical scanner evaluation system is presented. The purpose of this system is to support the performance analysis of the LANDSAT-D conical scanners, which are infrared horizon detection attitude sensors designed for improved accuracy. The system consists of six functionally independent subsystems and five interface data bases. The system structure and interfaces of each of the subsystems are described, and the content, format, and file structure of each of the data bases are specified. For each subsystem, the functional logic, the control parameters, the baseline structure, and each of the subroutines are described. The subroutine descriptions include a procedure definition and the input and output parameters.
BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John
2000-01-01
BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.
Interfacing WIPL-D with Mechanical CAD Software
NASA Technical Reports Server (NTRS)
Bliznyuk, Nataliya; Janic, Bojan
2007-01-01
of almost any popular CAD format, e.g. IGES, Parasolid, DXF, ACIS, etc. The solid models are processed (simplified) and meshed in GiD, and then converted into a WIPL-D Pro input file by simple Fortran or Matlab code. This algorithm allows the user to control the mesh of imported geometry and to assign electric properties to metallic and dielectric surfaces. Implementation of the algorithm is demonstrated by examples obtained from the NASA Discovery mission, Phoenix Lander 2008. Results for the radiation pattern of the Phoenix Lander UHF relay antenna with the effect of the Martian surface, both simulated in WIPL-D Pro and measured, are shown for comparison.
A seamless, high-resolution digital elevation model (DEM) of the north-central California coast
Foxgrover, Amy C.; Barnard, Patrick L.
2012-01-01
A seamless, 2-meter resolution digital elevation model (DEM) of the north-central California coast has been created from the most recent high-resolution bathymetric and topographic datasets available. The DEM extends approximately 150 kilometers along the California coastline, from Half Moon Bay north to Bodega Head. Coverage extends inland to an elevation of +20 meters and offshore to at least the 3 nautical mile limit of state waters. This report describes the procedures of DEM construction, details the input data sources, and provides the DEM for download in both ESRI Arc ASCII and GeoTIFF file formats with accompanying metadata.
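The Arc ASCII grid format mentioned above is simple enough to read directly: six header lines (ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value) followed by rows of space-separated elevations. A minimal reader sketch:

    def read_arc_ascii(path):
        header, rows = {}, []
        with open(path) as f:
            for _ in range(6):
                key, value = f.readline().split()
                header[key.lower()] = float(value)
            nodata = header.get("nodata_value")
            for line in f:
                rows.append([None if float(v) == nodata else float(v)
                             for v in line.split()])
        return header, rows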
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
NASA Technical Reports Server (NTRS)
Hill, S. A.
1994-01-01
BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element against each threat case. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability of penetration values per surface area for each element in the model. The SHIELD module writes this data file in either SUPERTAB Universal File Format or PATRAN Neutral File Format so threat contour plots can be generated as a post-processing feature of the FEM programs SUPERTAB and PATRAN. The CONTOUR module combines the functions of the RESPONSE module with most of those of the SHIELD module, allowing determination of ranges of PNPs by looping over ranges of shield and/or wall thicknesses. A data file containing the PNPs for the corresponding shield and vessel wall thicknesses is produced. Users may perform sensitivity studies of two kinds. The effects of simple variations in orbital time, surface area, and flux may be analyzed by making changes to the terms in the equation representing the average number of penetrating particles per unit time in the PNP solution equation. The package analyzes other changes, including model environment, surface area, and configuration, by re-running the solution sequence with new GEOMETRY and RESPONSE data. BUMPERII can be run with no interactive output to the screen during execution. This can be particularly useful during batch runs. BUMPERII is written in FORTRAN 77 for DEC VAX series computers running under VMS, and was written for use with the finite-element model code SUPERTAB or PATRAN as both a pre-processor and a post-processor. Use of an alternate FEM code will require either development of a translator to change data format or modification of the GEOMETRY subroutine in BUMPERII.
This program is available in DEC VAX BACKUP format on a 9-track 1600 BPI magnetic tape (standard distribution media) or on TK50 tape cartridge. The original BUMPER code was developed in 1988 with the BUMPERII revisions following in 1991 and 1992. SUPERTAB is a former name for I-DEAS. I-DEAS Finite Element Modeling is a trademark of Structural Dynamics Research Corporation. DEC, VAX, VMS and TK50 are trademarks of Digital Equipment Corporation.
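In its simplest element-wise form, the Poisson model referred to above gives the probability of no penetration as exp(-N), where N is the expected number of penetrating impacts accumulated over the exposure time. The sketch below is a toy illustration of that summation over flat plate elements, not BUMPERII's actual solution equation; all numbers are invented.

```python
import math

# Toy Poisson no-penetration estimate: N = sum over elements of
# (penetrating flux) * (exposed area) * (exposure time); PNP = exp(-N).
elements = [
    # (penetrating flux per m^2 per year, exposed area in m^2) -- invented
    (2.0e-6, 15.0),
    (5.0e-7, 40.0),
]
years = 10.0
N = sum(flux * area * years for flux, area in elements)
pnp = math.exp(-N)
print(f"expected penetrations N = {N:.3e}, PNP = {pnp:.6f}")
```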
Description of SHARC-2, the Strategic High-Altitude Atmospheric Radiance Code.
1991-03-22
... rules for Reaction Cards ... rules for Auxiliary Information Cards ... SHARC CO Molecular States Input File ... those used in AARC. The ion pair production rate is then obtained from the energy deposition rate by assuming that 35 eV are required to produce an ion pair ... contain three numbers to identify the particular vibrational state (using the standard AFGL ...). Table 8: SHARC CO Molecular States Input File.
Damage Tolerance Predictions for Spar Web Cracking in a Diminishing Stress Field
2011-12-01
... specimen crack ... NASGRO material file inputs for 7075-T6 aluminum ... AFGROW ... 2024-T3511 aluminum end caps riveted to stiffened 7075-T6 sheet-metal aluminum webs. The cap-to-web attachment consisted of a double row of MS20470D8 ... section stress constant as the cracks grow. In this case, cracks are assumed to ...
1987-01-16
... menus, controls user and device access to the system, manages the security features associated with menus, devices, and users, provides ... in the files, or the number of files in the system. 3.0 Module Input Processes; 3.1 Summary of Input Processes: The EE module contains many menu ... Output Processes: The EE module contains many menu options which enable the user to obtain needed information from the module. These options can be ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-04
... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-63969; File No. SR-BATS-2011-007] Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing and Immediate Effectiveness of Proposed Rule Change by BATS Exchange, Inc. to Adopt BATS Rule 11.21, entitled ``Input of Accurate Information...
DOE Office of Scientific and Technical Information (OSTI.GOV)
HENNIGAN, GARY; SHADID, JOHN; SJAARDEMA, GREGORY
2009-06-08
Nem_spread reads its input command file (default name nem_spread.inp), takes the named ExodusII geometry definition, and spreads the geometry (and optionally results) contained in that file out to a parallel disk system. The decomposition is taken from a scalar Nemesis load-balance file generated by the companion utility nem_slice.
A Java-based tool for creating KML files from GPS waypoints
NASA Astrophysics Data System (ADS)
Kinnicutt, P. G.; Rivard, C.; Rimer, S.
2008-12-01
Google Earth provides a free tool with powerful capabilities for visualizing geoscience images and data. Commercial software tools exist for doing sophisticated digitizing and spatial modeling, but for the purposes of presentation, visualization, and overlaying aerial images with data, Google Earth provides much of the functionality. Likewise, with current technologies in GPS (Global Positioning System) systems and with Google Earth Plus, it is possible to upload GPS waypoints, tracks, and routes directly into Google Earth for visualization. However, older technology GPS units and even low-cost GPS units found today may lack the necessary communications interface to a computer (e.g. no Bluetooth, no WiFi, no USB, no Serial, etc.) or may have an incompatible interface, such as a Serial port but no USB adapter available. In such cases, any waypoints, tracks, and routes saved in the GPS unit or recorded in a field notebook must be manually transferred to a computer for use in a GIS system or other program. This presentation describes a Java-based tool developed by the author that enables users to enter GPS coordinates in a user-friendly manner, then save these coordinates in a Keyhole Markup Language (KML) file format for visualization in Google Earth. This tool either accepts user-interactive input or accepts input from a CSV (Comma Separated Value) file, which can be generated from any spreadsheet program. This tool accepts input in the form of lat/long or UTM (Universal Transverse Mercator) coordinates. This presentation describes this system's applicability through several small case studies. This free and lightweight tool simplifies the task of manually inputting GPS data into Google Earth for people working in the field without an automated mechanism for uploading the data; for instance, the user may not have internet connectivity or may not have the proper hardware or software. Since it is a Java application and not a web-based tool, it can be installed on one's field laptop and the GPS data can be manually entered without the need for internet connectivity. This tool provides a table view of the GPS data, but lacks a KML viewer to view the data overlain on top of an aerial view, as this viewer functionality is provided in Google Earth. The tool's primary contribution lies in its more convenient method for entering the GPS data manually when automated technologies are not available.
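A minimal sketch of the CSV-to-KML conversion described above might look as follows in Python; the column names ("name", "lat", "lon") and file names are assumptions, and the real tool also handles UTM input and interactive entry.

```python
import csv

# Hedged sketch of CSV waypoints -> KML; column and file names are assumed.
def csv_to_kml(csv_path, kml_path):
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    placemarks = "\n".join(
        '    <Placemark><name>{}</name>'
        '<Point><coordinates>{},{},0</coordinates></Point></Placemark>'.format(
            r["name"], float(r["lon"]), float(r["lat"]))  # KML wants lon,lat
        for r in rows)
    with open(kml_path, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2">\n  <Document>\n'
                + placemarks + '\n  </Document>\n</kml>\n')

csv_to_kml("waypoints.csv", "waypoints.kml")
```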
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.
2016-04-01
The format of the TSUNAMI-A sensitivity data file produced by SAMS for cases with deterministic transport solutions is given in Table 6.3.A.1. The occurrence of each entry in the data file is followed by an identification of the data contained on each line of the file and the FORTRAN edit descriptor denoting the format of each line. A brief description of each line is also presented. A sample of the TSUNAMI-A data file for the Flattop-25 sample problem is provided in Figure 6.3.A.1. Here, only two profiles out of the 130 computed are shown.
NASA Technical Reports Server (NTRS)
Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.
1993-01-01
Transferable Output ASCII Data (TOAD) computer program (LAR-13755) implements format designed to facilitate transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to TOAD format standard called TOAD file. TOAD Editor is interactive software tool for manipulating contents of TOAD files. Commonly used to extract filtered subsets of data for visualization of results of computation. Also offers such user-oriented features as on-line help, clear English error messages, startup file, macroinstructions defined by user, command history, user variables, UNDO features, and full complement of mathematical, statistical, and conversion functions. Companion program, TOAD Gateway (LAR-14484), converts data files from variety of other file formats to that of TOAD. TOAD Editor written in FORTRAN 77.
ISPE: A knowledge-based system for fluidization studies. 1990 Annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparing an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information), and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER, and the C language is shown in Figure 1.
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
This FORTRAN-77 computer program helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
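The response-matrix formulation the abstract describes can be illustrated with a toy linear program: drawdown at each control location is the superposition of unit-stress responses scaled by the pumping rates, and the optimizer chooses rates that meet a demand target at minimum cost. The sketch below uses invented numbers and scipy, not the USGS code or its MPS input format.

```python
import numpy as np
from scipy.optimize import linprog

# R[i, j] = drawdown at control point i per unit pumping at well j (invented)
R = np.array([[0.8, 0.3],
              [0.2, 0.9]])
s_max = np.array([2.0, 2.5])   # allowable drawdown at each control point
c = np.array([1.0, 1.5])       # variable pumping cost per unit rate per well

# Minimize pumping cost subject to drawdown limits and a withdrawal target.
res = linprog(c,
              A_ub=R, b_ub=s_max,                # superposed drawdown limits
              A_eq=np.ones((1, 2)), b_eq=[3.0],  # total demand target
              bounds=[(0, None)] * 2)
print(res.x)   # optimal pumping rate at each well, here (2.2, 0.8)
```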
78 FR 17233 - Notice of Opportunity To File Amicus Briefs
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
.... Any commonly-used word processing format or PDF format is acceptable; text formats are preferable to image formats. Briefs may also be filed with the Office of the Clerk of the Board, Merit Systems...
Vanquelef, Enguerran; Simon, Sabrina; Marquant, Gaelle; Garcia, Elodie; Klimerak, Geoffroy; Delepine, Jean Charles; Cieplak, Piotr; Dupradeau, François-Yves
2011-07-01
R.E.D. Server is a unique, open web service designed to derive non-polarizable RESP and ESP charges and to build force field libraries for new molecules/molecular fragments. It provides computational biologists with the means to rigorously derive molecular electrostatic potential-based charges embedded in force field libraries that are ready to be used in force field development, charge validation, and molecular dynamics simulations. R.E.D. Server interfaces quantum mechanics programs, the RESP program, and the latest version of the R.E.D. tools. A two-step approach has been developed. The first step consists of preparing P2N file(s) to rigorously define key elements such as atom names, topology, and chemical equivalencing needed when building a force field library. Then, P2N files are used to derive RESP or ESP charges embedded in force field libraries in the Tripos mol2 format. In complex cases, an entire set of force field libraries or a force field topology database is generated. Other features developed in R.E.D. Server include help services, a demonstration, tutorials, frequently asked questions, Jmol-based tools useful to construct PDB input files and parse R.E.D. Server outputs, as well as a graphical queuing system allowing any user to check the status of R.E.D. Server jobs.
Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.
2008-01-01
In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. The NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common Data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
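As a rough illustration of the NetCDF side of this workflow, the Python sketch below writes a small sounding-like file with the netCDF4 library; the variable names, single-profile layout, and site identifier are assumptions rather than the actual AWIPS sounding template. Its CDL text equivalent can then be produced with ncdump, mirroring the ncgen/ncdump round trip described above.

```python
from netCDF4 import Dataset
import numpy as np

# Toy sounding file; layout and names are assumptions, not the AWIPS template.
nc = Dataset("composite_sounding.nc", "w")
nc.createDimension("level", 5)
p = nc.createVariable("pressure", "f4", ("level",))
t = nc.createVariable("temperature", "f4", ("level",))
p.units, t.units = "hPa", "K"
p[:] = np.array([1000, 850, 700, 500, 300], dtype="f4")
t[:] = np.array([298, 290, 283, 265, 240], dtype="f4")
nc.staid = "XMRF"   # hypothetical four-character site identifier
nc.close()
# The CDL text form of this file can be inspected with:  ncdump composite_sounding.nc
```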
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
SEGY to ASCII Conversion and Plotting Program 2.0
Goldman, Mark R.
2005-01-01
SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a postscript file of the seismic data using a general plotting package (GMT; Wessel and Smith, 1995). The resulting postscript file may be plotted by any standard postscript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces per gather, and each trace must have the same sample interval and number of samples per trace. SAC will read several common standards of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are provided to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters, including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a postscript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large postscript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator can manipulate the image more efficiently.
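For orientation, the sketch below pulls three fields from the 400-byte binary reel header that follows the 3200-byte EBCDIC header in a conventional SEGY file. The byte offsets follow the common SEG-Y rev 1 layout and should be verified against the target data; this is not SAC itself, and the file name is hypothetical.

```python
import struct

# Read three conventional binary-header fields from a SEG-Y file.
with open("line1.sgy", "rb") as f:
    f.seek(3200)                   # skip the EBCDIC card-image header
    binhdr = f.read(400)

# Big-endian 16-bit integers at the conventional byte positions:
sample_interval_us = struct.unpack(">h", binhdr[16:18])[0]  # bytes 3217-3218
samples_per_trace  = struct.unpack(">h", binhdr[20:22])[0]  # bytes 3221-3222
format_code        = struct.unpack(">h", binhdr[24:26])[0]  # 1=IBM, 5=IEEE float

print(sample_interval_us, samples_per_trace, format_code)
```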
Kubios HRV--heart rate variability analysis software.
Tarvainen, Mika P; Niskanen, Juha-Pekka; Lipponen, Jukka A; Ranta-Aho, Perttu O; Karjalainen, Pasi A
2014-01-01
Kubios HRV is an advanced, easy-to-use software package for heart rate variability (HRV) analysis. The software supports several input data formats for electrocardiogram (ECG) data and beat-to-beat RR interval data. It includes an adaptive QRS detection algorithm and tools for artifact correction, trend removal, and analysis sample selection. The software computes all the commonly used time-domain and frequency-domain HRV parameters and several nonlinear parameters. There are several adjustable analysis settings through which the analysis methods can be optimized for different data. The ECG-derived respiratory frequency is also computed, which is important for reliable interpretation of the analysis results. The analysis results can be saved as an ASCII text file (easy to import into MS Excel or SPSS), a Matlab MAT-file, or a PDF report. The software is easy to use through its compact graphical user interface. The software is available free of charge for Windows and Linux operating systems at http://kubios.uef.fi. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
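Two of the commonly used time-domain parameters mentioned above, SDNN and RMSSD, reduce to a few lines of array arithmetic. The sketch below uses an invented RR-interval series and the standard definitions; it is not Kubios code.

```python
import numpy as np

# Time-domain HRV from a beat-to-beat RR interval series (milliseconds).
rr = np.array([812, 798, 805, 821, 790, 802, 815], dtype=float)  # invented

sdnn  = rr.std(ddof=1)                        # SDNN: std of the RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))    # RMSSD: rms of successive diffs
mean_hr = 60000.0 / rr.mean()                 # mean heart rate, beats/min

print(f"SDNN={sdnn:.1f} ms  RMSSD={rmssd:.1f} ms  HR={mean_hr:.1f} bpm")
```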
The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV
NASA Astrophysics Data System (ADS)
Ho, Y.; Weber, J.
2017-12-01
WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude, and stream order values for each forecast point. However, the data are not CF-compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. The one problem we have is that the data I/O can be a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF-compliant netCDF point data format for the community.
A sophisticated cad tool for the creation of complex models for electromagnetic interaction analysis
NASA Astrophysics Data System (ADS)
Dion, Marc; Kashyap, Satish; Louie, Aloisius
1991-06-01
This report describes the essential features of the MS-DOS version of DIDEC-DREO, an interactive program for creating wire grid, surface patch, and cell models of complex structures for electromagnetic interaction analysis. It uses the device-independent graphics library DIGRAF and the graphics kernel system HALO, and can be executed on systems with various graphics devices. Complicated structures can be created by direct alphanumeric keyboard entry, digitization of blueprints, conversion from existing geometric structure files, and merging of simple geometric shapes. A completed DIDEC geometric file may then be converted to the format required for input to a variety of time domain and frequency domain electromagnetic interaction codes. This report gives a detailed description of the program DIDEC-DREO, its installation, and its theoretical background. Each available interactive command is described. The associated program HEDRON, which generates simple geometric shapes, and other programs that extract the current amplitude data from electromagnetic interaction code outputs, are also discussed.
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any of the failures in the set were removed, the occurrence of the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node, and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node is visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in the C language to be machine independent.
It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computer, Inc. Microsoft Word is a trademark of Microsoft Corporation.
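The upstream walk with path tracking that the abstract describes can be sketched in a few lines. The toy below turns each digraph node into a gate of the same type plus a basic event, and simply breaks cycles at a revisited node (where the real tool substitutes minimal cut sets); the graph encoding is an assumption for illustration.

```python
# node -> (gate type, list of input nodes); an invented example digraph
digraph = {
    "pump_fail":       ("OR",  ["power_loss", "controller_fail"]),
    "power_loss":      ("OR",  []),
    "controller_fail": ("AND", ["power_loss", "sensor_fail"]),
    "sensor_fail":     ("OR",  []),
}

def to_fault_tree(node, path=()):
    if node in path:
        return ("NOP", node, [])          # break the cycle at a revisit
    gate, inputs = digraph[node]
    children = [("BASIC", node, [])]      # basic event: this node's own failure
    children += [to_fault_tree(i, path + (node,)) for i in inputs]
    return (gate, node, children)

# "pump_fail" plays the role of the user-specified terminal node.
tree = to_fault_tree("pump_fail")
print(tree)
```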
iSpy: a powerful and lightweight event display
NASA Astrophysics Data System (ADS)
Alverson, G.; Eulisse, G.; McCauley, T.; Taylor, L.
2012-12-01
iSpy is a general-purpose event data and detector visualization program that was developed as an event display for the CMS experiment at the LHC and has seen use by the general public and by teachers and students in the context of education and outreach. Central to the iSpy design philosophy is ease of installation, use, and extensibility. The application itself uses the open-access packages Qt4 and Open Inventor and is distributed either as a fully-bound executable or a standard installer package: one can simply download and double-click to begin. Mac OS X, Linux, and Windows are supported. iSpy renders the standard 2D, 3D, and tabular views, and the architecture allows for a generic approach to the production of new views and projections. iSpy reads and displays data in the ig format: event information is written in compressed JSON format files designed for distribution over a network. This format is easily extensible and makes the iSpy client indifferent to the original input data source. The ig format is the one used for release of approved CMS data to the public.
Tools for Requirements Management: A Comparison of Telelogic DOORS and the HiVe
2006-07-01
... the file types DOORS deals with are text files, spreadsheets, FrameMaker, rich text, Microsoft Word, and Microsoft Project. 2.5.1 Predefined file formats: DOORS ... during the export. DOORS exports FrameMaker files in an incomplete format, meaning DOORS-exported files will have to be opened in FrameMaker and saved ...
76 FR 10405 - Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... file in either the Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a ..., comments may be delivered in hard copy. If hand delivered by a private party, an original ...
Fung, I.
1993-01-01
This directory contains the input files used in simulations of atmospheric CO2 using the GISS 3-D global tracer transport model. The directory contains 16 files including a help file (CO2FUNG.HLP), 12 files containing monthly exchanges with vegetation and soils (CO2VEG.JAN - DEC), 1 file containing releases of CO2 from fossil fuel burning (CO2FOS.MRL), 1 file containing releases of CO2 from land transformations (CO2DEF.HOU), and 1 file containing the patterns of CO2 exchange with the oceans (CO2OCN.TAK).
2012-11-01
... interactions in Construct: an empirical validation using calibrated grounding. In 2007 BRIMS Conference Proceedings, Norfolk, VA. Simon, H. A. ... by the path name. Users should ensure that if they have opened any output files (e.g., in Excel to view the files), they should either close the file ... stringvars to delimit string variables. Common gotchas: if Construct is unable to open an input file, it will exit and close. There are times when an ...
Metsalu, Tauno; Vilo, Jaak
2015-01-01
The Principal Component Analysis (PCA) is a widely used method of reducing the dimensionality of high-dimensional data, often followed by visualizing two of the components on the scatterplot. Although widely used, the method lacks an easy-to-use web interface that scientists with little programming experience could use to make plots of their own data. The same applies to creating heatmaps: it is possible to add conditional formatting for Excel cells to show colored heatmaps, but for more advanced features such as clustering and experimental annotations, more sophisticated analysis tools have to be used. We present a web tool called ClustVis that aims to have an intuitive user interface. Users can upload data from a simple delimited text file that can be created in a spreadsheet program. It is possible to modify data processing methods and the final appearance of the PCA and heatmap plots by using drop-down menus, text boxes, sliders, etc. Appropriate defaults are given to reduce the time needed by the user to specify input parameters. As an output, users can download the PCA plot and heatmap in one of the preferred file formats. This web server is freely available at http://biit.cs.ut.ee/clustvis/. PMID:25969447
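The first step ClustVis automates, PCA of a delimited text matrix, can be reproduced in a few lines. The sketch below mean-centers the data and projects it onto the first two principal components via SVD; the file name and samples-by-variables layout are assumptions.

```python
import numpy as np

# PCA of a delimited text matrix via SVD; file name and layout are assumed.
X = np.loadtxt("matrix.txt", delimiter="\t")   # rows = samples, cols = variables
Xc = X - X.mean(axis=0)                        # center each variable

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]                      # 2D coordinates for the scatterplot
var_explained = S**2 / np.sum(S**2)            # share of variance per component

print(scores[:5], var_explained[:2])
```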
Use of the Hadoop structured storage tools for the ATLAS EventIndex event catalogue
NASA Astrophysics Data System (ADS)
Favareto, A.
2016-09-01
The ATLAS experiment at the LHC collects billions of events each data-taking year, and processes them to make them available for physics analysis in several different formats. An even larger number of events is in addition simulated according to physics and detector models and then reconstructed and analysed to be compared to real events. The EventIndex is a catalogue of all events in each production stage; it includes for each event a few identification parameters, some basic non-mutable information coming from the online system, and the references to the files that contain the event in each format (plus the internal pointers to the event within each file for quick retrieval). Each EventIndex record is logically simple but the system has to hold many tens of billions of records, all equally important. The Hadoop technology was selected at the start of the EventIndex project development in 2012 and proved to be robust and flexible enough to accommodate this kind of information; both the insertion and query response times are acceptable for the continuous and automatic operation that started in Spring 2015. This paper describes the EventIndex data input and organisation in Hadoop and explains the operational challenges that were overcome in order to achieve the expected performance.
Visualizing NetCDF Files by Using the EverVIEW Data Viewer
Conzelmann, Craig; Romañach, Stephanie S.
2010-01-01
Over the past few years, modelers in South Florida have started using Network Common Data Form (NetCDF) as the standard data container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite attributes which make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages which can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical displays of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium compliant, Web-mapping service client and charting interface allows the user to view Web-available spatial data as additional map overlays and provides simple charting visualizations of NetCDF grid values.
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies, and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
The prevalence of encoded digital trace evidence in the nonfile space of computer media.
Garfinkel, Simson L
2014-09-01
Forensically significant digital trace evidence is frequently present in sectors of digital media not associated with allocated or deleted files. Modern digital forensic tools generally do not decompress such data unless a specific file with a recognized file type is first identified, potentially resulting in missed evidence. Email addresses are encoded differently for different file formats. As a result, trace evidence can be categorized as Plain in File (PF), Encoded in File (EF), Plain Not in File (PNF), or Encoded Not in File (ENF). The tool bulk_extractor finds all of these formats, but other forensic tools do not. A study of 961 storage devices purchased on the secondary market shows that 474 contained encoded email addresses that were not in files (ENF). Different encoding formats are the result of different application programs that processed different kinds of digital trace evidence. Specific encoding formats explored include BASE64, GZIP, PDF, HIBER, and ZIP. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. Journal of Forensic Sciences published by Wiley Periodicals, Inc. on behalf of American Academy of Forensic Sciences.
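The plain-versus-encoded distinction can be made concrete with a toy scanner: search raw bytes for email addresses, then retry after inflating anything that looks like a gzip member. This is a simplified illustration, not bulk_extractor's scanner, and the image file name is an assumption.

```python
import re, zlib

# Toy scanner: plain hits from the raw bytes, plus hits recovered from any
# gzip streams found in the buffer (reads the whole image into RAM; toy only).
EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def scan(buf):
    hits = set(EMAIL.findall(buf))                 # "plain" hits
    for m in re.finditer(rb"\x1f\x8b\x08", buf):   # gzip magic + deflate flag
        try:
            data = zlib.decompressobj(wbits=31).decompress(buf[m.start():])
            hits |= set(EMAIL.findall(data))       # "encoded" (gzip) hits
        except zlib.error:
            pass
    return hits

with open("disk.img", "rb") as f:   # hypothetical disk image
    print(scan(f.read()))
```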
Building accurate historic and future climate MEPDG input files for Louisiana DOTD : tech summary.
DOT National Transportation Integrated Search
2017-02-01
The new pavement design process (originally MEPDG, then DARWin-ME, and now Pavement ME Design) requires two types of inputs to influence the prediction of pavement distress for a selected set of pavement materials and structure. One input is tra...
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than to many subprograms, a reduced number of restrictions, and an increased execution speed. The Ada Namelist reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the non-generic opening portion. The opening portion declares a variety of user-accessible constants, variables, and subprograms. The subprograms include procedures for initializing namelists for reading and for reading and writing strings, as well as functions for analyzing the content of the current dataset and diagnosing errors. Two nested generic packages follow the opening portion. The first generic package contains procedures that read and write objects of scalar type. The second contains subprograms that read and write one- and two-dimensional arrays whose components are of scalar type and whose indices are of either of the two discrete types (integer or enumeration). Subprograms in the second package also read and write vector and matrix slices. The Ada Namelist ASCII text files are available on a 360k 5.25" floppy disk written on an IBM PC/AT running under the PC DOS operating system. The largest subprogram in the package requires 150k of memory. The package was developed using VAX Ada v. 1.5 under DEC VMS v. 4.5. It should be portable to any validated Ada compiler. The software was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
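A rough sketch of the namelist-reading behavior described above, including tolerance of assignments that the program never declared, might look as follows; the syntax coverage, file name, and declared-variable list are assumptions, and the real package of course does this in Ada.

```python
import re

# Toy reader for FORTRAN-style "name = value" assignments in arbitrary order.
declared = {"dt": float, "nsteps": int, "title": str}  # the program's own list
values, ignored = {}, []

text = open("run.nml").read()                          # hypothetical input file
for name, raw in re.findall(r"(\w+)\s*=\s*([^,\n]+)", text):
    if name in declared:
        values[name] = declared[name](raw.strip().strip("'"))
    else:
        ignored.append(name)   # mismatch: present in the file, not in the program

print(values, ignored)
```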
DOT National Transportation Integrated Search
2001-02-01
The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...
PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.
Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza
2014-12-01
The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are found in association with other molecules or ions, such as nucleic acids, water, ions, and drug molecules, which therefore can also be described in the PDB format and have been deposited in the PDB database. A PDB file is machine-generated and is not in a human-readable format; a computational tool is needed to interpret it. The objective of our present study is to develop free online software for retrieval, visualization, and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in a human-readable format, i.e., the information in the PDB file is converted into readable sentences. It displays all possible information from a PDB file, including the 3D structure of that file. Programming and scripting languages, including Perl, CSS, JavaScript, Ajax, and HTML, have been used for the development of PDB Explorer. The PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home, with no requirement of log-in.
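The column-oriented parsing that PDB Explorer performs on coordinate records can be sketched as below; the column positions follow the published PDB format for ATOM/HETATM records, while the function name and input file are invented.

```python
# Parse ATOM/HETATM records by fixed column position (1-based columns in
# the comments, converted to 0-based Python slices).
def parse_atoms(path):
    atoms = []
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                atoms.append({
                    "serial":  int(line[6:11]),      # cols  7-11
                    "name":    line[12:16].strip(),  # cols 13-16
                    "resname": line[17:20].strip(),  # cols 18-20
                    "chain":   line[21],             # col  22
                    "x": float(line[30:38]),         # cols 31-38
                    "y": float(line[38:46]),         # cols 39-46
                    "z": float(line[46:54]),         # cols 47-54
                })
    return atoms

atoms = parse_atoms("1abc.pdb")   # hypothetical input file
```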
NoSQL: collection document and cloud by using a dynamic web query form
NASA Astrophysics Data System (ADS)
Abdalla, Hemn B.; Lin, Jinzhao; Li, Guoquan
2015-07-01
MongoDB (from "humongous") is an open-source document database and the leading NoSQL database. NoSQL ("Not Only SQL") refers to next-generation databases that are non-relational, open-source, and horizontally scalable, and that provide a mechanism for the storage and retrieval of documents. Previously, we stored and retrieved data using SQL queries; here we use MongoDB, so MySQL and SQL queries are not involved. Documents are imported directly into our drives and retrieved from those drives without applying SQL queries, using the IO BufferReader and BufferWriter: BufferReader for importing document files into a folder (drive), and BufferWriter for retrieving document files from a particular folder (or drive). Security is provided for the stored files, since otherwise any user could view or modify them. The original document files are converted to another format; in this paper, a binary format is used. Our documents are converted to the binary format and then stored directly in one of our folders, at which time the storage system provides a private key for accessing each file. If any user tries to discover the document files, the file data appear only in the binary format; the document file's owner alone can view the original format, using a personal key received as a secret key from the cloud.
Banta, Edward R.; Provost, Alden M.
2008-01-01
This report documents HUFPrint, a computer program that extracts and displays information about model structure and hydraulic properties from the input data for a model built using the Hydrogeologic-Unit Flow (HUF) Package of the U.S. Geological Survey's MODFLOW program for modeling ground-water flow. HUFPrint reads the HUF Package and other MODFLOW input files, processes the data by hydrogeologic unit and by model layer, and generates text and graphics files useful for visualizing the data or for further processing. For hydrogeologic units, HUFPrint outputs such hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, vertical hydraulic conductivity or anisotropy, specific storage, specific yield, and hydraulic-conductivity depth-dependence coefficient. For model layers, HUFPrint outputs such effective hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, specific storage, primary direction of anisotropy, and vertical conductance. Text files tabulating hydraulic properties by hydrogeologic unit, by model layer, or in a specified vertical section may be generated. Graphics showing two-dimensional cross sections and one-dimensional vertical sections at specified locations also may be generated. HUFPrint reads input files designed for MODFLOW-2000 or MODFLOW-2005.
Glnemo2: Interactive Visualization 3D Program
NASA Astrophysics Data System (ADS)
Lambert, Jean-Charles
2011-10-01
Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.x API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density area, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles, and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, realtime gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so that it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (glsl), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the minGW compiler), and Mac OS X, thanks to the Qt 4 API.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sublet, J.-Ch.; Koning, A.J.; Forrest, R.A.
The reasons for the conversion of the European Activation File, EAF, into ENDF-6 format are threefold. First, it significantly enhances the JEFF-3.0 release by the addition of an activation file. Second, it considerably increases its usage by using a recognized, official file format, allowing existing plug-in processes to be effective. Third, it moves towards a universal nuclear data file, in contrast to the current separate general- and special-purpose files. The format chosen for the JEFF-3.0/A file uses reaction cross sections (MF-3), cross sections (MF-10), and multiplicities (MF-9). Having the data in ENDF-6 format allows the ENDF suite of utilities and checker codes to be used alongside many other utility, visualizing, and processing codes. It is based on the EAF activation file used for many applications from fission to fusion, including dosimetry, inventories, depletion-transmutation, and geophysics. JEFF-3.0/A takes advantage of four generations of EAF files. Extensive benchmarking activities on these files provide feedback and validation with integral measurements. These, in parallel with a detailed graphical analysis based on EXFOR, have been applied, stimulating new measurements and significantly increasing the quality of this activation file. The next step is to include the EAF uncertainty data for all channels into JEFF-3.0/A.
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.
2011-12-01
Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDRs) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDRs with full traceability. The capabilities and services provided include: - Discovery of the collections by keyword search, exposed using the OpenSearch protocol; - Space/time query across the CDRs' granules and all of the input datasets via OpenSearch; - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets; - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files); - Self-documenting CDRs published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance; - Recording of processing provenance and data lineage into a query-able provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine; - Open publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine; - Advertising of the metadata (e.g. physical variables provided, space/time bounding box, etc.) for our prepared datasets as "datacasts" using the Atom feed format; - Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops; - Rich "web browse" of the CDRs with full metadata and the provenance trail one click away; - Advertising of all services as Google-discoverable "service casts" using the Atom format. The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUIs.
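The on-the-fly OPeNDAP variable slicing mentioned above works by appending a constraint expression to the dataset URL, so only the requested index ranges cross the network. The sketch below shows the idea; the host, dataset path, and variable name are hypothetical.

```python
import requests

# Fetch one slice of one variable via an OPeNDAP constraint expression.
# Host, path, and variable name are hypothetical placeholders.
url = ("http://example.org/opendap/airs/granule_001.nc.ascii"
       "?H2O_MMR[0:1:0][10:1:20][30:1:40]")   # [time][lat][lon] index ranges
resp = requests.get(url, timeout=60)
resp.raise_for_status()
print(resp.text[:400])   # ASCII rendering of just the requested slice
```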
WT - WIND TUNNEL PERFORMANCE ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
WT was developed to calculate fan rotor power requirements and output thrust for a closed-loop wind tunnel. The program uses blade element theory to calculate aerodynamic forces along the blade using airfoil lift and drag characteristics at an appropriate blade aspect ratio. A tip loss model is also used, which reduces the lift coefficient to zero for the outer three percent of the blade radius. Momentum theory is not used to determine the axial velocity at the rotor plane: unlike a propeller, the wind tunnel rotor is prevented from producing an increase in velocity in the slipstream. Instead, velocities at the rotor plane are used as input. Other input for WT includes rotational speed, rotor geometry, and airfoil characteristics. Inputs for rotor blade geometry include blade radius, hub radius, number of blades, and pitch angle. Airfoil aerodynamic inputs include the angle at zero lift coefficient, positive stall angle, drag coefficient at zero lift coefficient, and drag coefficient at stall. WT is written in APL2 using IBM's APL2 interpreter for IBM PC series and compatible computers running MS-DOS. WT requires a CGA or better color monitor for display. It also requires 640K of RAM and MS-DOS v3.1 or later for execution. Both an MS-DOS executable and the source code are provided on the distribution medium. The standard distribution medium for WT is a 5.25 inch 360K MS-DOS format diskette in PKZIP format. The utility to unarchive the files, PKUNZIP, is also included. WT was developed in 1991. APL2 and IBM PC are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. PKUNZIP is a registered trademark of PKWare, Inc.
FRS Geospatial Return File Format
The Geospatial Return File Format describes the format that must be used to submit latitude and longitude coordinates for use in Envirofacts mapping applications. These coordinates are stored in the Geospatial Reference Tables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
Encryption and decryption using FPGA
NASA Astrophysics Data System (ADS)
Nayak, Nikhilesh; Chandak, Akshay; Shah, Nisarg; Karthikeyan, B.
2017-11-01
In this paper, we perform multiple cryptography methods on a set of data and compare their outputs. The AES and RSA algorithms are used. With the AES algorithm, an 8-bit input (plain text) is encrypted using a cipher key and the result is displayed on Tera Term (serially). For simulation, a 128-bit input is operated on with a 128-bit cipher key to generate the encrypted text; the reverse operations are then performed to recover the decrypted text. In the RSA algorithm, file handling is used to input the plain text, which is then operated on to produce the encrypted and decrypted data, stored in a file. Finally, the results of both algorithms are compared.
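For readers who want a software reference point for the AES half of this comparison before targeting hardware, the following minimal Python sketch (using the third-party cryptography package) encrypts and decrypts a single 128-bit block with a 128-bit key. It is only an illustrative software analogue, not the paper's FPGA implementation; the CBC mode and random key are illustrative choices.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)             # 128-bit cipher key
    iv = os.urandom(16)              # initialization vector (illustrative)
    plaintext = b"sixteen byte msg"  # exactly one 128-bit block

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(plaintext) + enc.finalize()

    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    assert dec.update(ciphertext) + dec.finalize() == plaintext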
Banning standard cell engineering notebook
NASA Technical Reports Server (NTRS)
1976-01-01
A family of standardized thick-oxide P-MOS building blocks (standard cells) is described. The information is presented in a form useful for systems design, logic design, and the preparation of inputs to both sets of Design Automation programs for array design and analysis. A data sheet is provided for each cell and gives the cell name, the cell number, its logic symbol, Boolean equation, truth table, circuit schematic, circuit composite, input-output capacitances, and revision date. The circuit type file, also given for each cell, together with the logic drawing contained on the data sheet, provides all the information required to prepare input data files for the Design Automation Systems. A detailed description of the electrical design procedure is included.
SEDIMENT DATA - COMMENCEMENT BAY HYLEBOS WATERWAY - TACOMA, WA - PRE-REMEDIAL DESIGN PROGRAM
Event 1A/1B Data Files URL address: http://www.epa.gov/r10earth/datalib/superfund/hybos1ab.htm. Sediment Chemistry Data (Database Format): HYBOS1AB.EXE is a self-extracting file which expands to the single-value per record .DBF format database file HYBOS1AB.DBF. This file contai...
76 FR 5431 - Released Rates of Motor Common Carriers of Household Goods
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
... may be submitted either via the Board's e-filing format or in traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E-FILING link on the Board's website at http://www.stb.dot.gov. Any person submitting a filing in the traditional...
75 FR 52054 - Assessment of Mediation and Arbitration Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
...: Comments may be submitted either via the Board's e-filing format or in the traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a filing in the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...
Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program
NASA Technical Reports Server (NTRS)
Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.
1981-01-01
The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5 generated modal input file for TETRA is also described with a worked sample.
Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2016-01-05
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
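Since Photon-HDF5 files are ordinary HDF5, they can be opened with any HDF5 binding; no format-specific library is strictly required. The minimal Python sketch below uses h5py, with dataset paths taken from the published Photon-HDF5 layout; they should be verified against the format specification for any given file.

    import h5py

    # Dataset paths follow the Photon-HDF5 specification (photon_data group);
    # verify against the spec for the file at hand.
    with h5py.File("measurement.hdf5", "r") as f:
        timestamps = f["/photon_data/timestamps"][:]
        unit = f["/photon_data/timestamps_specs/timestamps_unit"][()]
    print(len(timestamps), "photons; clock period", unit, "s")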
Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2016-01-01
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. PMID:26745406
Ingargiola, A.; Laurence, T. A.; Boutelle, R.; ...
2015-12-23
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode (SPAD), photomultiplier tube (PMT), or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale
NASA Astrophysics Data System (ADS)
Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason
2015-03-01
The Open Microscopy Environment (OME) has built and released, under open source licenses, Bio-Formats, a Java-based tool for reading and converting proprietary file formats, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications, like digital pathology, high content screening, and light sheet microscopy, that routinely create large datasets that must be managed and analyzed.
Kernodle, J.M.
1996-01-01
This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.) and revised by Kernodle (Kernodle, J.M., 1998, Simulation of ground-water flow in the Albuquerque Basin, 1901-95, with projections to 2020 (supplement two to U.S. Geological Survey Water-Resources Investigations Report 94-4251): U.S. Geological Survey Open-File Report 96-209, 54 p.). Output files resulting from the computer simulations are included for reference.
NASA Technical Reports Server (NTRS)
1981-01-01
The set of computer programs described allows for data definition, data input, and data transfer between the LSI-11 microcomputers and the VAX-11/780 minicomputer. Program VAXCOM allows for a simple method of textual file transfer from the LSI to the VAX. Program LSICOM allows for easy file transfer from the VAX to the LSI. Program TTY changes the LSI-11 operator's console to the LSI's printing device. Program DICTIN provides a means for defining a data set for input to either computer. Program DATAIN is a simple-to-operate data entry program capable of building data files on either machine. Program LEDITV is an extremely powerful, easy-to-use, line-oriented text editor. Program COPYSBF is designed to print textual files on the line printer without character loss from FORTRAN carriage control or wide record transfer.
Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2003-01-01
This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, interactive input generators for the NESSUS, IPACS, and COBSTRAN computer codes have been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Because it is integrated with the EST/BEST software system, the user can modify EST/BEST-generated files and rerun the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.
PMG: online generation of high-quality molecular pictures and storyboarded animations
Autin, Ludovic; Tufféry, Pierre
2007-01-01
The Protein Movie Generator (PMG) is an online service able to generate high-quality pictures and animations for which one can then define simple storyboards. The PMG can therefore efficiently illustrate concepts such as molecular motion or formation/dissociation of complexes. Emphasis is put on the simplicity of animation generation. Rendering is achieved using Dino coupled to POV-Ray. In order to produce highly informative images, the PMG includes capabilities of using different molecular representations at the same time to highlight particular molecular features. Moreover, sophisticated rendering concepts including scene definition, as well as modeling light and materials are available. The PMG accepts Protein Data Bank (PDB) files as input, which may include series of models or molecular dynamics trajectories and produces images or movies under various formats. PMG can be accessed at http://bioserv.rpbs.jussieu.fr/PMG.html. PMID:17478496
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
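The single-machine multiway merge that the authors benchmark against can be expressed compactly in Python with heapq.merge, which lazily merges already-sorted streams. The sketch below is only that baseline, not the paper's distributed schemas; it assumes per-file coordinate-sorted VCFs, a fixed chromosome ordering, and omits header handling.

    import heapq

    CHROM_RANK = {f"chr{i}": i for i in range(1, 23)}
    CHROM_RANK.update({"chrX": 23, "chrY": 24})

    def vcf_records(path):
        """Yield (chrom_rank, position, raw_line) for one sorted VCF."""
        with open(path) as handle:
            for line in handle:
                if line.startswith("#"):      # skip meta/header lines
                    continue
                chrom, pos, _ = line.split("\t", 2)
                yield CHROM_RANK[chrom], int(pos), line

    def merge_vcfs(paths, out_path):
        """K-way merge of coordinate-sorted VCF data lines."""
        streams = (vcf_records(p) for p in paths)
        with open(out_path, "w") as out:
            for _, _, line in heapq.merge(*streams):
                out.write(line)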
Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng
2018-01-01
Background: Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings: In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions: Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754
Detection and segmentation of multiple touching product inspection items
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David
1996-12-01
X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation; we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching, input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and remove the shell with morphological processing to produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the X-ray image nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
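The watershed step for separating touching items is now standard in open-source image libraries. The sketch below is a rough Python analogue of the segmentation stage using scipy and scikit-image, seeding the watershed with markers taken from the distance transform; the 0.6 threshold is an arbitrary illustrative choice, not a value from the paper.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed

    def split_touching_items(mask: np.ndarray) -> np.ndarray:
        """Label touching blobs in a binary mask via marker-based watershed."""
        distance = ndi.distance_transform_edt(mask)
        # Seeds: interior points far from blob boundaries.
        markers, _ = ndi.label(distance > 0.6 * distance.max())
        return watershed(-distance, markers, mask=mask)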
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
NASA Astrophysics Data System (ADS)
Greenwood, L. R.; Johnson, C. D.
2016-02-01
The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator workbook for use in correcting the measured activities. Output from the SigPhi Calculator is automatically produced, and consists of a portion of the STAYSL PNNL input file data that is required to run the spectral adjustment calculations. Within STAYSL PNNL, the least-squares process is performed in one step, without iteration, and provides rapid results on PC platforms. STAYSL PNNL creates multiple output files with tabulated results, data suitable for plotting, and data formatted for use in subsequent radiation damage calculations using the SPECTER computer code (which is not included in the STAYSL PNNL suite). All components of the software suite have undergone extensive testing and validation prior to release and test cases are provided with the package.
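The least-squares step itself is compact enough to state in a few lines. Below is a minimal numpy sketch of the one-step generalized least-squares update used by STAY'SL-descended codes, assuming a prior spectrum phi0 with covariance P, a reaction-by-group cross-section matrix A, and measured saturated activation rates a_meas with covariance V. The variable names are ours; this illustrates the method, not the STAYSL PNNL implementation.

    import numpy as np

    def adjust_spectrum(phi0, P, A, a_meas, V):
        """One-step generalized least-squares spectral adjustment.

        phi0   : prior group fluxes, shape (n,)
        P      : prior flux covariance, shape (n, n)
        A      : group cross sections, one row per reaction, shape (m, n)
        a_meas : measured saturated activation rates, shape (m,)
        V      : covariance of the measured rates, shape (m, m)
        """
        residual = a_meas - A @ phi0
        gain = P @ A.T @ np.linalg.inv(A @ P @ A.T + V)
        phi = phi0 + gain @ residual          # adjusted spectrum
        P_new = P - gain @ A @ P              # adjusted covariance
        return phi, P_new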
A Compilation of MATLAB Scripts and Functions for MACGMC Analyses
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Bednarcyk, Brett A.; Mital, Subodh K.
2017-01-01
The primary aim of the current effort is to provide scripts that automate many of the repetitive pre- and post-processing tasks associated with composite materials analyses using the Micromechanics Analysis Code with the Generalized Method of Cells (MACGMC). This document consists of a compilation of hundreds of scripts that were developed in the MATLAB (The MathWorks, Inc., Natick, MA) programming language and consolidated into 16 MATLAB functions. MACGMC is a composite material and laminate analysis software code developed at NASA Glenn Research Center. The software package has been built around the generalized method of cells (GMC) family of micromechanics theories. The computer code is developed with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated to increase the user friendliness, as well as to make it more robust in terms of input preparation and code execution. Finally, classical lamination theory has been implemented within the software, wherein GMC is used to model the composite material response of each ply. Thus, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. The pre-processing tasks include generation of a multitude of different repeating unit cells (RUCs) for CMCs and PMCs, visualization of RUCs from MACGMC input and output files, and generation of the RUC section of a MACGMC input file. The post-processing tasks include visualization of the predicted composite response, such as local stress and strain contours, damage initiation and progression, stress-strain behavior, and fatigue response. In addition to the above, several miscellaneous scripts have been developed that can be used to perform repeated Monte Carlo simulations to enable probabilistic simulations with minimal manual intervention. This document is formatted to provide MATLAB source files and descriptions of how to utilize them. It is assumed that the user has a basic understanding of how MATLAB scripts work and some MATLAB programming experience.
Description, Usage, and Validation of the MVL-15 Modified Vortex Lattice Analysis Capability
NASA Technical Reports Server (NTRS)
Ozoroski, Thomas A.
2015-01-01
MVL-15 is the most recent version of the Modified Vortex-Lattice (MVL) code developed within the Aerodynamics Systems Analysis Branch (ASAB) at NASA LaRC. The term "modified" refers to the primary modification of the core vortex-lattice methodology: inclusion of viscous aerodynamics tables that are linked to the linear solution via iterative processes. The inclusion of the viscous aerodynamics inherently converts MVL-15 from a purely analytic linearized method to a semi-empirical blend, which retains the rapid execution speed of the linearized method while empirically characterizing the section aerodynamics at all spanwise lattice points. The modification provides a means to assess non-linear effects on lift that occur at angles of attack near stall, and provides a means to determine the drag associated with the application of design strategies for lift augmentation, such as the use of flaps or blowing. The MVL-15 code is applicable to the analysis of aircraft aerodynamics during cruise, but it is most advantageously applied to the analysis of aircraft operating in various high-lift configurations. The MVL methodology has been previously conceived and implemented; the initial concept version was delivered to the ASAB in 2001 (van Dam, C.), subsequently revised (Gelhausen, P. and Ozoroski, T., 2002 / AVID Inc., Gelhausen, P., and Roberts, M., 2004), and then overhauled (Ozoroski, T., Hahn, A., 2008). The latest version, MVL-15, has been refined to provide analysis transparency and enhanced to meet the analysis requirements of the Environmentally Responsible Aviation (ERA) Project. Each revision has been implemented with reasonable success. Separate applications of the methodology are in use, including a similar in-house capability, developed by Olson, E., that is tailored for structural and acoustics analyses. A central premise of the methodology is that viscous aerodynamic data can be associated with analytic inviscid aerodynamic results at each spanwise wing section, thereby providing a pathway to map viscous data to the inviscid results. However, a number of factors can sidetrack the analysis consistency during various stages of this process. For example, it should be expected that the final airplane lift curve and drag polar results depend strongly on the geometry and aerodynamics of the airfoil section; however, flap deflections and flap chord extensions change the local reference geometry of the input airfoil, the airplane wing, the tabulated non-dimensional viscous aerodynamics, and the spanwise links between the linear and the viscous aerodynamics. These changes also affect the bound circulation and, therefore, the calculation and integration of the induced angle of attack and induced drag. MVL-15 is configured to ensure these types of challenges are properly addressed. This report is a comprehensive manual describing the theory, use, and validation of the MVL-15 analysis tool. Section 3 summarizes theoretical, procedural, and characteristic features of MVL-15, and includes a list of the files required to set up, execute, and summarize an analysis. Sections 4 through 7 combine to comprise the User's Guide portions of this report. The MVL-15 input and output files are described in Section 4 and Section 5, respectively; the descriptions are supplemented with example files and information about the file formats, parameter definitions, and typical parameter values.
Section 6 describes the Wing Geometry Setup Utility and the 2d-Variants Utility files that simplify and assist setting up a consistent set of MVL-15 geometry and aerodynamics input parameters and input files. Section 7 describes the use of the 3d-Results Presentation Utility file that can be used to automatically create summary tables and charts from the MVL-15 output files. Section 8 documents the Validation Results of an extensive and varied validation test matrix, including results of an airplane analysis representative of the ERA Program. A start-to-finish example of the airplane analysis procedure is described in Section 7.
NASA Astrophysics Data System (ADS)
Cipolla, Sam J.
2011-11-01
In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected, and operational situations leading to un-physical behavior have been identified and flagged.
New version program summary
Program title: ISICS2011
Catalogue identifier: ADDS_v5_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 6011
No. of bytes in distributed program, including test data, etc.: 130 587
Distribution format: tar.gz
Programming language: C
Computer: 80486 or higher-level PCs
Operating system: WINDOWS XP and all earlier operating systems
Classification: 16.7
Catalogue identifier of previous version: ADDS_v4_0
Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716
Does the new version supersede the previous version?: Yes
Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions.
Solution method: Numerical integration of the form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits.
Reasons for new version: A general need for higher precision in the output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occur due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options are inadvertently chosen; and, to achieve general compatibility with ISICSoo, a companion C++ version that is portable to Linux and MacOS platforms, submitted for publication in the CPC Program Library at approximately the same time as this new standalone version of ISICS [1].
Summary of revisions: The format field for projectile energies in the output has been expanded from two to four decimal places in order to distinguish between closely spaced energy values. A few entries in the executable binding energy file needed correcting: the K shell of Eu, the M shells of Zn, and the M1 shell of Kr. The corrected values were also entered in the ENERGY.DAT file. In addition, an alternate data file of binding energies is included, called ENERGY_GW.DAT, which is more up-to-date [2]. Likewise, an alternate atomic parameters data file is now included, called FLOURE_JC.DAT, which contains more up-to-date [3] fluorescence yields for the K and L shells and Coster-Kronig parameters for the L shell. Both data files can be read in using the -f usage option. To do this, the original energy file should be renamed and saved (e.g., as ENERGY_BB.DAT) and the new file (ENERGY_GW.DAT) should be duplicated as ENERGY.DAT to be read in with the -f option; similarly for reading in an alternate FLOURE.DAT file. As with previous versions, the user can also simply input different values of any input quantity by invoking the "specify your own parameters" option from the main menu. This option can also be used to check the built-in values of the parameters. If a zero binding energy for a particular sub-shell is still read in, the program will not completely abort, but will calculate results for the other sub-shells while setting the affected sub-shell output to zero.
In calculating the Coulomb deflection factor, if the quantity inside the radical sign of the parameter z = √(1 − …) becomes zero or negative, the PWBA cross sections are still calculated while the ECPSSR cross sections are set to zero, so that the program does not abort. This situation can happen for very low energy collisions, such as were noticed for helium ions on copper at energies of E ⩽ 11.2 keV. It was observed during the engineering of ISICSoo [1] that erroneous calculations could result for the L- and M-shell cases when restricted K-shell R or HSR scaling options were inappropriately chosen. The program has now been fixed so that these inappropriate options are ignored for the L and M shells. In the previous versions, the usage for inputting a batch data file was incorrectly stated in the Users Manual as -Bxxx; the correct designation is -Fxxx or, alternatively, -Ixxx, as indicated on the usage screen when running the program. A revised Users Manual is also available.
Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast.
Running time: This depends on which shell is calculated and the number of different energies used. The running time is not significantly changed from the previous version.
A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.
Smelter, Andrey; Moseley, Hunter N B
2018-01-01
The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and the required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework, as well as a continuous integration service to run those tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
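A typical session with the package, as described above, reads a 'mwTab' file, inspects it, and validates it. The sketch below is patterned on that description; the exact function and attribute names (read_files, validate_file, study_id, analysis_id) are assumptions that should be checked against the 'mwtab' documentation.

    # Function/attribute names are assumptions based on the description above;
    # verify against the official 'mwtab' documentation before relying on them.
    import mwtab
    from mwtab.validator import validate_file

    for mwfile in mwtab.read_files("ST000001_AN000001.txt"):
        print(mwfile.study_id, mwfile.analysis_id)   # assumed attributes
        validate_file(mwfile)                        # format + metadata checks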
Accelerating Malware Detection via a Graphics Processing Unit
2010-09-01
GPU Graphics Processing Unit ... PE Portable Executable ... COFF Common Object File Format ... operating systems for the future [Szo05]. The PE format is an updated version of the common object file format (COFF) [Mic06]. Microsoft released a new ... [NAs02]. These alerts can be costly in terms of time and resources for individuals and organizations to investigate each misidentified file [YWL07] [Vak10...
Improvement of Michigan climatic files in pavement ME design.
DOT National Transportation Integrated Search
2015-10-01
Climatic inputs have a great influence on Mechanistic-Empirical design results of flexible : and rigid pavements. Currently the state of Michigan has 24 climatic files embedded in Pavement ME : Design (PMED), but several limitations have been identif...
File concepts for parallel I/O
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1989-01-01
The subject of input/output (I/O) has often been neglected in the design of parallel computer systems, although for many problems I/O rates will limit the attainable speedup. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.
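The core idea (multiple processes accessing one file concurrently, each through its own file pointer) can be sketched on a modern POSIX system as follows. This Python toy, with a hypothetical stripe size, writes disjoint byte ranges so no locking is needed; it only illustrates the concept, not the paper's proposed file organizations.

    import os
    from multiprocessing import Pool

    RECORD = 4096            # bytes per process-private stripe (illustrative)
    PATH = "parallel.dat"

    def write_stripe(rank: int) -> None:
        # Each worker opens the shared file and writes only its own
        # byte range, so the writes never overlap.
        with open(PATH, "r+b") as f:
            f.seek(rank * RECORD)
            f.write(bytes([rank]) * RECORD)

    if __name__ == "__main__":
        nprocs = 4
        with open(PATH, "wb") as f:
            f.truncate(nprocs * RECORD)   # pre-size the file
        with Pool(nprocs) as pool:
            pool.map(write_stripe, range(nprocs))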
Smelter, Andrey; Astra, Morgan; Moseley, Hunter N B
2017-03-17
The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib's implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
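Following the description above, reading BMRB entries with nmrstarlib reduces to iterating over parsed star files. The sketch below is patterned on the paper's description; the exact call signatures are assumptions to verify against the package's API Reference.

    # Call names are assumptions based on the description above; check the
    # nmrstarlib API Reference before relying on them.
    from nmrstarlib import nmrstarlib

    for starfile in nmrstarlib.read_files("bmr15000.str"):
        # A StarFile behaves like a nested dictionary of saveframes and loops.
        print(list(starfile.keys())[:5])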
MF2KtoMF05UC, a Program To Convert MODFLOW-2000 Files to MODFLOW-2005 and UCODE_2005 Files
Harbaugh, Arlen W.
2007-01-01
The program MF2KtoMF05UC has been developed to convert MODFLOW-2000 input files for use by MODFLOW-2005 and UCODE_2005. MF2KtoMF05UC was written in the Fortran 90 computer language. This report documents the use of MF2KtoMF05UC.
Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1999-01-01
The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large, complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis (including problems involving relative motion), are discussed in some detail. The code is written in FORTRAN 77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented; a detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages: Plot3D, Tecplot, and PmarcViewer.
Structural tailoring of advanced turboprops (STAT): User's manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1991-01-01
This user's manual describes the Structural Tailoring of Advanced Turboprops program. It contains instructions to prepare the input for optimization, blade geometry and analysis, geometry generation, and finite element program control. In addition, a sample input file is provided as well as a section describing special applications (i.e., non-standard input).
Advanced Technology Multiple Criteria Decision Model.
1981-11-01
ratings of the system parameters; and (3) HEADER, which contains information on the structure of the problem and titles. Two supporting programs develop... in these files are given in Section V.2. 2. DATA STRUCTURE TABLES This section describes the data files used in the system selection model program... the supporting program PPP and an input file to UPPP and SSMP. Figure 13 shows the structure of this file. b. User’s preference package (UPP) UPP is
GPU.proton.DOCK: Genuine Protein Ultrafast proton equilibria consistent DOCKing.
Kantardjiev, Alexander A
2011-07-01
GPU.proton.DOCK (Genuine Protein Ultrafast proton equilibria consistent DOCKing) is a state-of-the-art service for in silico prediction of protein-protein interactions via rigorous and ultrafast docking code. It is unique in providing a stringent account of electrostatic self-consistency and of the mutual effects of proton equilibria between docking partners. GPU.proton.DOCK is the first server offering such a crucial supplement to protein docking algorithms--a step toward more reliable and high-accuracy docking results. The code (especially the Fast Fourier Transform bottleneck and the electrostatic field computation) is parallelized to run on a GPU supercomputer. The high performance will be of use for large-scale structural bioinformatics and systems biology projects, thus bridging the physics of the interactions with the analysis of molecular networks. We propose workflows for exploring charge mutagenesis effects in silico. Special emphasis is given to the interface, which is intuitive and user-friendly. The input is comprised of the atomic coordinate files in PDB format. The advanced user is provided with a special input section for the addition of non-polypeptide charges, extra ionogenic groups with intrinsic pK(a) values, or fixed ions. The output is comprised of docked complexes in PDB format as well as interactive visualization in a molecular viewer. The GPU.proton.DOCK server can be accessed at http://gpudock.orgchm.bas.bg/.
Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitus, B.R.; Goddard, J.S.; Jatko, W.B.
1993-06-01
The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory (ORNL)-developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and a tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both the x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.
A computer program for the generation of logic networks from task chart data
NASA Technical Reports Server (NTRS)
Herbert, H. E.
1980-01-01
The Network Generation Program (NETGEN), which creates logic networks from task chart data is presented. NETGEN is written in CDC FORTRAN IV (Extended) and runs in a batch mode on the CDC 6000 and CYBER 170 series computers. Data is input via a two-card format and contains information regarding the specific tasks in a project. From this data, NETGEN constructs a logic network of related activities with each activity having unique predecessor and successor nodes, activity duration, descriptions, etc. NETGEN then prepares this data on two files that can be used in the Project Planning Analysis and Reporting System Batch Network Scheduling program and the EZPERT graphics program.
Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P
2007-01-01
The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes have been developed, aimed at transporting radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed, together with the new input options. Examples are given of the employment of the code in internal and external dosimetry, and comparisons with results from other groups are reported.
VAGUE: a graphical user interface for the Velvet assembler.
Powell, David R; Seemann, Torsten
2013-01-15
Velvet is a popular open-source de novo genome assembly software tool, which is run from the Unix command line. Most of the problems experienced by new users of Velvet revolve around constructing syntactically and semantically correct command lines, getting input files into acceptable formats and assessing the output. Here, we present Velvet Assembler Graphical User Environment (VAGUE), a multi-platform graphical front-end for Velvet. VAGUE aims to make sequence assembly accessible to a wider audience and to facilitate better usage amongst existing users of Velvet. VAGUE is implemented in JRuby and targets the Java Virtual Machine. It is available under an open-source GPLv2 licence from http://www.vicbioinformatics.com/. torsten.seemann@monash.edu.
A GUI visualization system for airborne lidar image data to reconstruct 3D city model
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Koizumi, Kohei
2015-10-01
A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
Digital geologic map of the Butler Peak 7.5' quadrangle, San Bernardino County, California
Miller, Fred K.; Matti, Jonathan C.; Brown, Howard J.; digital preparation by Cossette, P. M.
2000-01-01
Open-File Report 00-145, is a digital geologic map database of the Butler Peak 7.5' quadrangle that includes (1) ARC/INFO (Environmental Systems Research Institute) version 7.2.1 Patch 1 coverages, and associated tables, (2) a Portable Document Format (.pdf) file of the Description of Map Units, Correlation of Map Units chart, and an explanation of symbols used on the map, btlrpk_dcmu.pdf, (3) a Portable Document Format file of this Readme, btlrpk_rme.pdf (the Readme is also included as an ascii file in the data package), and (4) a PostScript plot file of the map, Correlation of Map Units, and Description of Map Units on a single sheet, btlrpk.ps. No paper map is included in the Open-File report, but the PostScript plot file (number 4 above) can be used to produce one. The PostScript plot file generates a map, peripheral text, and diagrams in the editorial format of USGS Geologic Investigation Series (I-series) maps.
MXA: a customizable HDF5-based data format for multi-dimensional data sets
NASA Astrophysics Data System (ADS)
Jackson, M.; Simmons, J. P.; De Graef, M.
2010-09-01
A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
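Because MXA rides on HDF5, a serial-sectioning data set with per-slice metadata can be laid out with any HDF5 binding. The Python sketch below shows the general idea with h5py using an invented layout; it does not reproduce the actual MXA schema, which is defined by the XML model and DTD on the project site.

    import h5py

    # Invented group/attribute names for illustration only; the real MXA
    # schema is specified by its XML data model and DTD.
    with h5py.File("sections.mxa.h5", "w") as f:
        grp = f.create_group("DataModel/SerialSections")
        grp.attrs["instrument"] = "FIB-SEM"
        for i in range(3):
            ds = grp.create_dataset(f"slice_{i:04d}", shape=(512, 512),
                                    dtype="uint16")
            ds.attrs["z_position_um"] = 0.1 * i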
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-14
RolX takes the features from Re-FeX or any other feature matrix as input and outputs role assignments (clusters). The output of RolX is a csv file containing the node-role memberships and a csv file containing the role-feature definitions.
Exposure Related Dose Estimating Model
ERDEM is a physiologically based pharmacokinetic (PBPK) modeling system consisting of a general model and an associated front end. An actual model is defined when the user prepares an input command file. Such a command file defines the chemicals, compartments and processes that...
PAnalyzer: a software tool for protein inference in shotgun proteomics.
Prieto, Gorka; Aloria, Kerman; Osinalde, Nerea; Fullaondo, Asier; Arizmendi, Jesus M; Matthiesen, Rune
2012-11-05
Protein inference from peptide identifications in shotgun proteomics must deal with ambiguities that arise due to the presence of peptides shared between different proteins, which is common in higher eukaryotes. Recently data independent acquisition (DIA) approaches have emerged as an alternative to the traditional data dependent acquisition (DDA) in shotgun proteomics experiments. MSE is the term used to name one of the DIA approaches used in QTOF instruments. MSE data require specialized software to process acquired spectra and to perform peptide and protein identifications. However the software available at the moment does not group the identified proteins in a transparent way by taking into account peptide evidence categories. Furthermore the inspection, comparison and report of the obtained results require tedious manual intervention. Here we report a software tool to address these limitations for MSE data. In this paper we present PAnalyzer, a software tool focused on the protein inference process of shotgun proteomics. Our approach considers all the identified proteins and groups them when necessary indicating their confidence using different evidence categories. PAnalyzer can read protein identification files in the XML output format of the ProteinLynx Global Server (PLGS) software provided by Waters Corporation for their MSE data, and also in the mzIdentML format recently standardized by HUPO-PSI. Multiple files can also be read simultaneously and are considered as technical replicates. Results are saved to CSV, HTML and mzIdentML (in the case of a single mzIdentML input file) files. An MSE analysis of a real sample is presented to compare the results of PAnalyzer and ProteinLynx Global Server. We present a software tool to deal with the ambiguities that arise in the protein inference process. Key contributions are support for MSE data analysis by ProteinLynx Global Server and technical replicates integration. PAnalyzer is an easy to use multiplatform and free software tool.
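The grouping problem PAnalyzer addresses can be made concrete with a small sketch: given a mapping from proteins to identified peptides, proteins with identical peptide sets are indistinguishable, and a protein whose peptides form a strict subset of another's is non-conclusive. This toy Python version uses simplified category names and is not PAnalyzer's actual algorithm.

    from collections import defaultdict

    def group_proteins(prot2peps):
        """Toy evidence grouping: identical peptide sets -> one group;
        strict-subset peptide sets -> 'non-conclusive' (simplified labels)."""
        groups = defaultdict(list)
        for prot, peps in prot2peps.items():
            groups[frozenset(peps)].append(prot)
        labeled = {}
        for peps, prots in groups.items():
            subset = any(peps < other for other in groups if other != peps)
            labeled[tuple(sorted(prots))] = ("non-conclusive" if subset
                                             else "conclusive")
        return labeled

    print(group_proteins({"P1": {"a", "b"}, "P2": {"a", "b"}, "P3": {"a"}}))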
NASA Technical Reports Server (NTRS)
Long, D.
1994-01-01
This library is a set of subroutines designed for vector plotting to CRT's, plotters, dot matrix, and laser printers. LONGLIB subroutines are invoked by program calls similar to standard CALCOMP routines. In addition to the basic plotting routines, LONGLIB contains an extensive set of routines to allow viewport clipping, extended character sets, graphic input, shading, polar plots, and 3-D plotting with or without hidden line removal. LONGLIB capabilities include surface plots, contours, histograms, logarithm axes, world maps, and seismic plots. LONGLIB includes master subroutines, which are self-contained series of commonly used individual subroutines. When invoked, the master routine will initialize the plotting package, and will plot multiple curves, scatter plots, log plots, 3-D plots, etc. and then close the plot package, all with a single call. Supported devices include VT100 equipped with Selanar GR100 or GR100+ boards, VT125s, VT240s, VT220 equipped with Selanar SG220, Tektronix 4010/4014 or 4107/4109 and compatibles, and Graphon GO-235 terminals. Dot matrix printer output is available by using the provided raster scan conversion routines for DEC LA50, Printronix printers, and high or low resolution Trilog printers. Other output devices include QMS laser printers, Postscript compatible laser printers, and HPGL compatible plotters. The LONGLIB package includes the graphics library source code, an on-line help library, scan converter and meta file conversion programs, and command files for installing, creating, and testing the library. The latest version, 5.0, is significantly enhanced and has been made more portable. Also, the new version's meta file format has been changed and is incompatible with previous versions. A conversion utility is included to port the old meta files to the new format. Color terminal plotting has been incorporated. LONGLIB is written in FORTRAN 77 for batch or interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985, and last updated in 1988.
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.
2017-12-01
Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
User Requirements Analyzer (URA) User’s Manual H6180/Multics/Version 3.3.
1978-07-01
[Garbled table-of-contents fragment; recoverable headings include: 4.3 Entering Data Into An Input File; 4.4 Using NAME-GEN; 4.5 Using PUNCH Files; 5. Receiving Output From URA Commands; and command reference entries for INTERVAL-CONSISTENCY, KWIC INDEX, LIST-CHANGES, NAME-GEN, NAME-LIST, PICTURE, PROCESS CHAIN, FREQUENCY, HELP, and INPUT-PSL.]
Anisoft - Advanced Treatment of Magnetic Anisotropy Data
NASA Astrophysics Data System (ADS)
Chadima, M.
2017-12-01
Since its first release, Anisoft (Anisotropy Data Browser) has gained wide popularity in the magnetic fabric community, mainly due to its simple and user-friendly interface enabling very fast visualization of magnetic anisotropy tensors. Here, a major Anisoft update is presented, transforming a rather simple data viewer into a platform offering advanced treatment of magnetic anisotropy data. The updated software introduces a new, enlarged binary data format which stores both in-phase and out-of-phase (if measured) susceptibility tensors (AMS) or tensors of anisotropy of magnetic remanence (AMR), together with their respective confidence ellipses and values of F-tests for anisotropy. In addition to the tensor data, a whole array of specimen orientation angles and orientations of mesoscopic foliation(s) and lineation(s) is stored for each record, enabling later editing or correction. The input data may be directly acquired by AGICO Kappabridges (AMS) or Spinner Magnetometers (AMR); imported from various data formats, including the long-time standard binary ran-format; or manually created. Multiple anisotropy files can be combined together or split into several files by manual data selection or by data filtering according to their values. Anisotropy tensors are conventionally visualized as principal directions (eigenvectors) in equal-area projection (stereoplot), together with a wide array of quantitative anisotropy parameters presented in histograms or in color-coded scatter plots showing the mutual relationship of up to three quantitative parameters. When dealing with AMS in variable low fields, field-independent and field-dependent components of anisotropy can be determined (Hrouda 2009). For a group of specimens, individual principal directions can be contoured, or a mean tensor and the respective confidence ellipses of its principal directions can be calculated using either the Hext-Jelinek statistics (Jelinek 1978) or the Bootstrap method (Constable & Tauxe 1990). Each graphical output can be exported into several vector or raster graphical formats or, via the clipboard, pasted directly into a presentation or publication manuscript. Calculated principal directions or anisotropy parameters can be exported into various types of text files ready to be visualized or processed by any software of the user's choice.
NASA Astrophysics Data System (ADS)
Northup, E. A.; Kusterer, J.; Quam, B.; Chen, G.; Early, A. B.; Beach, A. L., III
2015-12-01
The current ICARTT file format standards were developed to fulfill the data management needs of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004. The goal of the ICARTT file format was to establish a common and simple-to-use data file format to promote data exchange and collaboration among science teams with similar science objectives. ICARTT has been the NASA standard since 2010, and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Despite its level of acceptance, there are a number of issues with the current ICARTT format, especially concerning machine readability. To enhance usability, the ICARTT Refresh Earth Science Data Systems Working Group (ESDSWG) was established to provide a platform for atmospheric science data producers, users (e.g. modelers) and data managers to collaborate on developing criteria for this file format. Ultimately, this is a cross-agency effort to improve and aggregate the metadata records being produced. After conducting a survey to identify deficiencies in the current format, we determined which deficiencies are considered most important to the various communities. Numerous recommendations were made to improve upon the file format while maintaining backward compatibility. The recommendations made to date and their advantages and limitations will be discussed.
NASA Astrophysics Data System (ADS)
Charrier, Michel; Everett, Daniel; Fieret, Jim; Karrer, Tobias; Rau, Sven; Valard, Jean-Luc
2001-06-01
A novel method is presented to produce a high precision pattern of copper tracks on both sides of a 4-layer conformal radar antenna made of PEI polymer and shaped as a truncated pseudo-parabolic cylinder. The antenna is an active emitter-receiver so that an accuracy of a fraction of the wavelength of the microwave radiation is required. After 2D layer design in Allegro, the resulting Gerber file-format circuits are wrapped around the antenna shape, resulting in a cutter-path file which provides the input for a postprocessor that outputs G-code for robot- and laser control. A rules file contains embedded information such as laser parameters and mask aperture related to the Allegro symbols. The robot consists of 6 axes that manipulate the antenna, and 2 axes for the mask plate. The antenna can be manipulated to an accuracy of +/- 20 micrometers over its full dimensions of 200x300x50 mm. The four layers are constructed by successive copper coating, resist coating, laser ablation, copper etching, resist removal, insulation polyimide film lamination and laser dielectric drilling for microvia holes and through-holes drilling. Applications are in space and aeronautical communication and radar detection systems, with possible extensions to automotive and mobile hand-sets, and land stations.
NASA Standard for Airborne Data: ICARTT Format ESDS-RFC-019
NASA Astrophysics Data System (ADS)
Thornhill, A.; Brown, C.; Aknan, A.; Crawford, J. H.; Chen, G.; Williams, E. J.
2011-12-01
Airborne field studies generate a plethora of data products in the effort to study atmospheric composition and processes. Data file formats for airborne field campaigns are designed to present data in an understandable and organized way to support collaboration and to document relevant and important metadata. The ICARTT file format was created to facilitate data management during the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004, which involved government-agency and university participants from five countries. Since this mission, the ICARTT format has been used in subsequent field campaigns such as the Polar Study Using Aircraft, Remote Sensing, Surface Measurements and Models of Climate, Chemistry, Aerosols, and Transport (POLARCAT) and the first phase of Deriving Information on Surface Conditions from COlumn and VERtically Resolved Observations Relevant to Air Quality (DISCOVER-AQ). The ICARTT file format was endorsed as a standard format for airborne data by the Standards Process Group (SPG), one of the Earth Science Data Systems Working Groups (ESDSWG), in 2010. A detailed description of the ICARTT format can be found at http://www-air.larc.nasa.gov/missions/etc/ESDS-RFC-019-v1.00.pdf. The ICARTT data format is an ASCII, comma-delimited format based on the NASA Ames and GTE file formats. The file header is detailed enough to fully describe the data for users outside of the instrument group and includes a description of the metadata. The ICARTT scanning tools, format structure, implementations, and examples will be presented.
Dependency Tree Annotation Software
2015-11-01
formats, and it provides numerous options for customizing how dependency trees are displayed. Built entirely in Java, it can run on a wide range of... tree can be saved as an image, .mxe (an mxGraph editing file), a .conll file, and several other file formats. DTE uses the open source Java version
Representation of thermal infrared imaging data in the DICOM using XML configuration files.
Ruminski, Jacek
2007-01-01
The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any thermal imaging camera's source file format.
Runwien: a text-based interface for the WIEN package
NASA Astrophysics Data System (ADS)
Otero de la Roza, A.; Luaña, Víctor
2009-05-01
A new text-based interface for WIEN2k, the full-potential linearized augmented plane-waves (FPLAPW) program, is presented. This code provides an easy to use, yet powerful way of generating arbitrarily large sets of calculations. Thus, properties over a potential energy surface and WIEN2k parameter exploration can be calculated using a simple input text file. This interface also provides new capabilities to the WIEN2k package, such as the calculation of elastic constants on hexagonal systems or the automatic gathering of relevant information. Additionally, runwien is modular, flexible and intuitive.
Program summary
Program title: runwien
Catalogue identifier: AECM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL version 3
No. of lines in distributed program, including test data, etc.: 62 567
No. of bytes in distributed program, including test data, etc.: 610 973
Distribution format: tar.gz
Programming language: gawk (with locale POSIX or similar)
Computer: All running Unix, Linux
Operating system: Unix, GNU/Linux
Classification: 7.3
External routines: WIEN2k (http://www.wien2k.at/), GAWK (http://www.gnu.org/software/gawk/), rename by L. Wall (a Perl script which renames files, modified by R. Barker to check for the existence of target files), gnuplot (http://www.gnuplot.info/)
Subprograms used: Cat Id: ADSY_v1_0/AECB_v1_0, Title: GIBBS/CRITIC, Reference: CPC 158 (2004) 57/CPC 999 (2009) 999
Nature of problem: Creation of a text-based, batch-oriented interface for the WIEN2k package.
Solution method: WIEN2k solves the Kohn-Sham equations of a solid using the FPLAPW formalism. Runwien interprets an input file containing the description of the geometry and structure of the solid and drives the execution of the WIEN2k programs. The input is simplified thanks to the default values of the WIEN2k parameters known to runwien.
Additional comments: Designed for WIEN2k versions 06.4, 07.2, 08.2, and 08.3.
Running time: For the test case (TiC), a single geometry takes 5 to 10 minutes on a typical desktop PC (Intel Pentium 4, 3.4 GHz, 1 GB RAM). The full example, including the calculation of the elastic constants and the equation of state, takes 9 hours and 32 minutes.
2012-01-01
Background The Poisson-Boltzmann (PB) equation and its linear approximation have been widely used to describe biomolecular electrostatics. Generalized Born (GB) models offer a convenient computational approximation for the more fundamental approach based on the Poisson-Boltzmann equation, and allow estimation of pairwise contributions to electrostatic effects in the molecular context. Results We have implemented in a single program the most common analyses of the electrostatic properties of proteins. The program first computes generalized Born radii via a surface integral, and then uses the generalized Born radii (using a finite-radius test particle) to perform electrostatic analyses. In particular, the output of the program comprises, depending on the user's requirements: 1) the generalized Born radius of each atom; 2) the electrostatic solvation free energy; 3) the electrostatic forces on each atom (currently at a developmental stage); 4) the pH-dependent properties (total charge and pH-dependent free energy of folding in the pH range -2 to 18); 5) the pKa of all ionizable groups; 6) the electrostatic potential at the surface of the molecule; 7) the electrostatic potential in a volume surrounding the molecule. Conclusions Although at the expense of limited flexibility, the program provides the most common analyses with the requirement of a single input file in PQR format. The results obtained are comparable to those obtained using state-of-the-art Poisson-Boltzmann solvers. A Linux executable with example input and output files is provided as supplementary material. PMID:22536964
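For orientation (the abstract does not spell out the working equations), the generalized Born estimate of the electrostatic solvation free energy that such programs typically evaluate has, in the widely used Still et al. (1990) form,

    \Delta G^{\mathrm{GB}}_{\mathrm{solv}}
      = -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}
                        - \frac{1}{\epsilon_{\mathrm{out}}}\right)
        \sum_{i,j} \frac{q_i q_j}{f^{\mathrm{GB}}_{ij}},
    \qquad
    f^{\mathrm{GB}}_{ij}
      = \sqrt{ r_{ij}^{2} + R_i R_j
               \exp\!\left( -\frac{r_{ij}^{2}}{4 R_i R_j} \right) },

where the q_i are atomic partial charges, the R_i are the generalized Born radii computed by the program, the r_ij are interatomic distances, and epsilon_in, epsilon_out are the solute and solvent dielectric constants.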
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan
2016-04-01
TOUGH2 and iTOUGH2 are powerful models that simulate heat and fluid flows in porous and fractured media, and perform parameter estimation, sensitivity analysis and uncertainty propagation analysis. However, setting up the input files is not only tedious but error prone, and processing the output files is time consuming. Here, we present an open-source Matlab-based tool (iMatTOUGH) that supports the generation of all necessary inputs for both TOUGH2 and iTOUGH2 and visualizes their outputs. The tool links the inputs of TOUGH2 and iTOUGH2, making sure the two input files are consistent. It supports the generation of a rectangular computational mesh, i.e., it automatically generates the elements and connections as well as their properties as required by TOUGH2. The tool also allows the specification of initial and time-dependent boundary conditions for better subsurface heat and water flow simulations. The effectiveness of the tool is illustrated by an example that uses TOUGH2 and iTOUGH2 to estimate soil hydrological and thermal properties from soil temperature data and simulate the heat and water flows at the Rifle site in Colorado.
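iMatTOUGH itself is Matlab-based; purely to illustrate the mesh bookkeeping it automates (numbering the cells of a rectangular grid and listing their face connections), here is a small language-neutral sketch in Python:

    # Number the cells of an nx-by-nz rectangular grid and list face connections,
    # the two ingredients a TOUGH2-style mesh generator must produce.
    nx, nz = 4, 3        # cells in x and z
    dx, dz = 10.0, 2.0   # cell spacings

    elements = [(i, k) for k in range(nz) for i in range(nx)]
    index = {cell: n for n, cell in enumerate(elements)}

    connections = []
    for i, k in elements:
        if i + 1 < nx:   # neighbor in x
            connections.append((index[(i, k)], index[(i + 1, k)], dx))
        if k + 1 < nz:   # neighbor in z
            connections.append((index[(i, k)], index[(i, k + 1)], dz))

    print(len(elements), "elements,", len(connections), "connections")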
Automated Big Data Analysis in Bottom-up and Targeted Proteomics
van der Plas-Duivesteijn, Suzanne; Domański, Dominik; Smith, Derek; Borchers, Christoph; Palmblad, Magnus; Mohammed, Yassene
2014-01-01
Similar to other data intensive sciences, analyzing mass spectrometry-based proteomics data involves multiple steps and diverse software using different algorithms and data formats and sizes. Besides the distributed and evolving nature of the data in online repositories, another challenge is that scientists have to deal with the many steps of an analysis pipeline. Documented data processing is also becoming an essential part of the overall reproducibility of the results. Thanks to different e-Science initiatives, scientific workflow engines have become a means for automated, sharable and reproducible data processing. While these are designed as general tools, they can be employed to solve the different challenges we face in handling our Big Data. Here we present three use cases: improving the performance of different spectral search engines by decomposing input data and recomposing the resulting files, building spectral libraries from more than 20 million spectra, and integrating information from multiple resources to select the most appropriate peptides for targeted proteomics analyses. The three use cases demonstrate different challenges in exploiting proteomics data analysis. In the first, we integrate local and cloud processing resources to obtain better performance, resulting in a more than 30-fold speed improvement. By treating search engines as legacy software, our solution is applicable to multiple search algorithms. The second use case is an example of automated processing of many data files of different sizes and locations, starting with raw data and ending with the final, ready-to-use library. This demonstrates robustness and fault tolerance when dealing with huge amounts of data stored in multiple files. The third use case demonstrates retrieval and integration of information and data from multiple online repositories. In addition to the diversity of data formats and Web interfaces, this use case also illustrates how to deal with incomplete data.
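The first use case, decomposing input data and recomposing the resulting files, is a scatter-gather pattern; a minimal Python sketch (the search function below is a stand-in for invoking a real search engine, not an actual engine call):

    from concurrent.futures import ProcessPoolExecutor

    def search(chunk):
        # Stand-in for running a spectral search engine on one chunk of spectra.
        return [f"match:{s}" for s in chunk]

    spectra = [f"spectrum{i}" for i in range(1000)]
    chunks = [spectra[i:i + 250] for i in range(0, len(spectra), 250)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # Scatter the chunks to workers, then recompose the partial results.
            results = [r for part in pool.map(search, chunks) for r in part]
        print(len(results), "matches recomposed from", len(chunks), "chunks")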
NASA Astrophysics Data System (ADS)
Verkaik, J.
2013-12-01
The Netherlands Hydrological Instrument (NHI) model predicts water demands in periods of drought, supporting Dutch decision makers in taking operational as well as long-term decisions with respect to the water supply. Other applications of NHI are predicting fresh-salt interaction, nutrient loadings, and agricultural change. The NHI model consists of several coupled models: a saturated groundwater model (MODFLOW), an unsaturated groundwater model (MetaSWAP), a sub-catchment surface water model (MOZART), and a distribution network of surface waters model (DM/SOBEK). Each of these models requires specific, usually large, input data that may be the result of sophisticated schematization workflows. Input datasets can also depend on each other; for example, the precipitation data is input for the unsaturated zone model (cells) as well as for the surface water models (polygons). For efficient data management, we developed several Python tools such that the modeler or stakeholder can use the model in a user-friendly manner, and data is managed in a consistent, transparent and reproducible way. Two open source Python tools are presented here: the data version control module for the workflow manager VisTrails called FileSync, and the NHI model control script that uses FileSync. VisTrails is an open-source scientific workflow and provenance management system that provides support for simulations, data exploration and visualization. Since VisTrails does not directly support version control, we developed a version control module called FileSync. With this generic module, the user can synchronize data from and to his workflow through a dialog window. The FileSync dialog calls the command-line FileSync script, which performs the actual data synchronization. This script allows the user to easily create a model repository, upload and download data, create releases and define scenarios. The data synchronization approach applied here differs from systems such as Subversion or Git, since these systems do not perform well for large (binary) model data files. For this reason, a new concept of parameterization and data splitting has been implemented. Each file, or set of files, is uniquely labeled as a parameter, and for this parameter metadata is maintained by Subversion. The metadata contain file hashes to identify data content and the location, reachable by FTP, where the actual bulk data are stored. The NHI model control script is a command-line driven Python script for pre-processing, running, and post-processing the NHI model and uses a single configuration file for all computational kernels. This configuration file is an easy-to-use, keyword-driven Windows INI file with separate sections for all the kernels. It also includes a FileSync data section where the user can specify version-controlled model data to be used as input. The NHI control script keeps all the data consistent during pre-processing. Furthermore, this script can handle the model state when the NHI model is used for ensemble forecasting.
Devale, Madhuri R; Mahesh, M C; Bhandary, Shreetha
2017-01-01
Introduction Stresses generated during root canal instrumentation have been reported to cause apical cracks. Smaller, less pronounced defects like cracks can later propagate into vertical root fracture when the tooth is subjected to repeated stresses from endodontic or restorative procedures. Aim This study evaluated the occurrence of apical cracks with stainless steel hand files and rotary NiTi RaCe and K3 files at two different instrumentation lengths. Materials and Methods In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. The apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of the root apices were obtained at 100x using a stereomicroscope. The teeth were divided into six groups of 10 each. The first two groups were instrumented with stainless steel files, the next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out to the apical foramen (Working Length-WL) and to 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of the root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Results Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among the three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files; and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For the groups instrumented with hand files there was no statistical significance in the number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files, significantly more cracks were seen at WL than at WL-1 (p=0.057 for RaCe files and p=0.087 for K3 files). Conclusion There was no statistical significance between stainless steel hand files and rotary files in terms of crack formation. Instrumentation length had a significant effect on the formation of cracks when rotary files were used. Using rotary instruments 1 mm short of the apical foramen caused less crack formation. However, there was no statistically significant difference in the number of cracks formed with hand files at the two instrumentation levels. PMID:28274036
Parallel sort with a ranged, partitioned key-value store in a high performance computing environment
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron; Poole, Stephen W.
2016-01-26
Improved sorting techniques are provided that perform a parallel sort using a ranged, partitioned key-value store in a high performance computing (HPC) environment. A plurality of input data files comprising unsorted key-value data in a partitioned key-value store are sorted. The partitioned key-value store comprises a range server for each of a plurality of ranges. Each input data file has an associated reader thread. Each reader thread reads the unsorted key-value data in the corresponding input data file and performs a local sort of the unsorted key-value data to generate sorted key-value data. A plurality of sorted, ranged subsets of each of the sorted key-value data are generated based on the plurality of ranges. Each sorted, ranged subset corresponds to a given one of the ranges and is provided to one of the range servers corresponding to the range of the sorted, ranged subset. Each range server sorts the received sorted, ranged subsets and provides a sorted range. A plurality of the sorted ranges are concatenated to obtain a globally sorted result.
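A single-process Python sketch of the scheme described above (reader-local sorts, bucketing by key range, per-range merges, final concatenation); the data and the two ranges are invented for illustration:

    import heapq

    input_files = [[("k7", 1), ("k2", 2)], [("k9", 3), ("k1", 4)], [("k5", 5)]]
    ranges = [("a", "k4"), ("k4", "zz")]   # one "range server" per half-open range

    buckets = [[] for _ in ranges]
    for data in input_files:               # one "reader thread" per input file
        local = sorted(data)               # local sort of the file's key-values
        for i, (lo, hi) in enumerate(ranges):
            buckets[i].append([kv for kv in local if lo <= kv[0] < hi])

    sorted_ranges = [list(heapq.merge(*b)) for b in buckets]   # per-range merge
    print([kv for r in sorted_ranges for kv in r])             # concatenate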
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.
1991-01-01
Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. Features are: handling of any combination of types defined by the user; ability to read vectors, matrices, and slices of vectors and matrices; handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and ability to avoid searching the entire input file for each variable. Principal benefits derived by the user: ability to read and write namelist-readable files, ability to detect most file errors in the initialization phase, and an organization that keeps the number of instantiated units to a few packages rather than many subprograms.
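For readers unfamiliar with the format: a FORTRAN namelist file groups name=value assignments between an &GROUP header and a terminating /. A deliberately minimal Python sketch of reading the simplest scalar case (real namelists also allow arrays, slices, and repeated groups, which this ignores):

    import re

    sample = """
    &INIT
      mass = 12.5, nsteps = 3
      title = 'demo'
    /
    """

    # Extremely simplified: scalar name=value pairs in a single group only.
    pairs = re.findall(r"(\w+)\s*=\s*('[^']*'|[^,/\s]+)", sample)
    namelist = {name: value.strip("'") for name, value in pairs}
    print(namelist)   # {'mass': '12.5', 'nsteps': '3', 'title': 'demo'}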
West Flank Coso, CA FORGE 3D geologic model
Doug Blankenship
2016-03-01
This is an x,y,z file of the West Flank FORGE 3D geologic model. Model created in Earthvision by Dynamic Graphic Inc. The model was constructed with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with X,Y,Z grid points. All the relevant information is in the file header (the spatial reference, the projection etc.) In addition all the fields in the data file are identified in the header.
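A hypothetical reader for a file of this kind (the real column order and header convention are described in the file header itself, so the commented-header assumption below must be checked against the actual file):

    # Assumes header lines start with "#" and columns are x y z lithology.
    rows = []
    with open("west_flank_model.txt") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            x, y, z, lith = line.split()[:4]
            rows.append((float(x), float(y), float(z), lith))
    print(len(rows), "grid points")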
Fallon FORGE 3D Geologic Model
Doug Blankenship
2016-03-01
An x,y,z scattered data file for the 3D geologic model of the Fallon FORGE site. Model created in Earthvision by Dynamic Graphic Inc. The model was constructed with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with X,Y,Z grid points. All the relevant information is in the file header (the spatial reference, the projection etc.) In addition all the fields in the data file are identified in the header.
A New Interface for the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Koppers, A. A. P.; Tauxe, L.; Constable, C.; Shaar, R.; Jonestrask, L.
2014-12-01
The Magnetic Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of uploading data, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Data uploading has been simplified and no longer requires the use of the Excel SmartBook interface. Instead, properly formatted MagIC text files can be dragged-and-dropped onto an HTML 5 web interface. Data can be uploaded one table at a time to facilitate ease of uploading and data error checking is done online on the whole dataset at once instead of incrementally in an Excel Console. Searching the database has improved with the addition of more sophisticated search parameters and with the ability to use them in complex combinations. Searches may also be saved as permanent URLs for easy reference or for use as a citation in a publication. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. Data from the MagIC database may be downloaded from individual contributions or from online searches for offline use and analysis in the tab delimited MagIC text file format. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.
15 CFR 995.26 - Conversion of NOAA ENC ® files to other formats.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Conversion of NOAA ENC files to other formats—(1) Content. CEVAD may provide NOAA ENC data in forms other... data files without degradation to positional accuracy or informational content. (2) Software certification. Conversion of NOAA ENC data to other formats must be accomplished within the constraints of IHO...
Image Size Variation Influence on Corrupted and Non-viewable BMP Image
NASA Astrophysics Data System (ADS)
Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah
2017-08-01
Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and produces small files, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are often preferred in image processing over other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file must first be converted into BMP format. Nevertheless, many things can corrupt a BMP image, such as changes to the stored image size that make the file non-viewable. In this paper, the experiment shows how the size of a BMP file influences changes in the image itself through three conditions: deletion, replacement and insertion. From the experiment, we learned that by correcting the file size it is possible to produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
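The size correction is simple because the BMP header stores the total file size as a little-endian 32-bit integer at byte offset 2; a short Python sketch of the repair:

    import os, struct

    def fix_bmp_size(path):
        """Rewrite bytes 2-5 of the BMP header (little-endian file size)
        so the field matches the file's real length on disk."""
        actual = os.path.getsize(path)
        with open(path, "r+b") as f:
            if f.read(2) != b"BM":
                raise ValueError("not a BMP file")
            f.seek(2)
            f.write(struct.pack("<I", actual))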
76 FR 63575 - Transportation Conformity Rule: MOVES Regional Grace Period Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-13
... written in FORTRAN and used simple text files for data input and output, MOVES2010a is written in JAVA and uses a relational database structure in MYSQL to handle input and output as data tables. These changes...
76 FR 63554 - Transportation Conformity Rule: MOVES Regional Grace Period Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-13
... written in FORTRAN and used simple text files for data input and output, MOVES2010a is written in JAVA and uses a relational database structure in MYSQL to handle input and output as data tables. These changes...
77 FR 11394 - Transportation Conformity Rule: MOVES Regional Grace Period Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
... written in FORTRAN and used simple text files for data input and output, MOVES is written in JAVA and uses a relational database structure in MYSQL to handle input and output as data tables.\\13\\ \\13\\ Some...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingargiola, A.; Laurence, T. A.; Boutelle, R.
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode (SPAD), photomultiplier tube (PMT) or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamps, channel numbers, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. Finally, to encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
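A minimal writing sketch with h5py; the dataset paths follow the Photon-HDF5 layout as we understand it and should be checked against the published specification (or generated with the reference phconvert library):

    import numpy as np
    import h5py

    timestamps = np.sort(np.random.randint(0, 10**8, size=1000))
    detectors = np.random.randint(0, 2, size=1000).astype(np.uint8)

    with h5py.File("photons.h5", "w") as h5:
        # Group/dataset names assumed from the Photon-HDF5 spec; verify them.
        h5["photon_data/timestamps"] = timestamps
        h5["photon_data/detectors"] = detectors
        h5["photon_data/timestamps_specs/timestamps_unit"] = 12.5e-9  # seconds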
McDonald, Daniel; Clemente, Jose C; Kuczynski, Justin; Rideout, Jai Ram; Stombaugh, Jesse; Wendel, Doug; Wilke, Andreas; Huse, Susan; Hufnagle, John; Meyer, Folker; Knight, Rob; Caporaso, J Gregory
2012-07-12
We present the Biological Observation Matrix (BIOM, pronounced "biome") format: a JSON-based file format for representing arbitrary observation by sample contingency tables with associated sample and observation metadata. As the number of categories of comparative omics data types (collectively, the "ome-ome") grows rapidly, a general format to represent and archive this data will facilitate the interoperability of existing bioinformatics tools and future meta-analyses. The BIOM file format is supported by an independent open-source software project (the biom-format project), which initially contains Python objects that support the use and manipulation of BIOM data in Python programs, and is intended to be an open development effort where developers can submit implementations of these objects in other programming languages. The BIOM file format and the biom-format project are steps toward reducing the "bioinformatics bottleneck" that is currently being experienced in diverse areas of biological sciences, and will help us move toward the next phase of comparative omics where basic science is translated into clinical and environmental applications. The BIOM file format is currently recognized as an Earth Microbiome Project Standard, and as a Candidate Standard by the Genomic Standards Consortium.
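A skeleton of a BIOM 1.0-style JSON table in Python (field names as we recall them from the BIOM 1.0 specification; biom-format.org has the normative list):

    import json

    table = {
        "id": "example",
        "format": "Biological Observation Matrix 1.0.0",
        "format_url": "http://biom-format.org",
        "type": "OTU table",
        "generated_by": "example script",
        "date": "2012-07-12T00:00:00",
        "matrix_type": "sparse",
        "matrix_element_type": "int",
        "shape": [2, 3],                  # 2 observations x 3 samples
        "rows": [{"id": "OTU1", "metadata": None},
                 {"id": "OTU2", "metadata": None}],
        "columns": [{"id": "S1", "metadata": None},
                    {"id": "S2", "metadata": None},
                    {"id": "S3", "metadata": None}],
        "data": [[0, 0, 5], [1, 2, 3]],   # sparse [row, column, value] triples
    }
    print(json.dumps(table, indent=2))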
Developing a Complete and Effective ACT-R Architecture
2008-01-01
of computational primitives, as contrasted with the predominant "one-off" and "grab-bag" cognitive models in the field. These architectures have... transport/semaphore protocols connected via a glue script. Both protocols rely on the fact that file rename and file remove operations are atomic... the Trial Log file until just prior to processing the next input request. Thus, to perform synchronous identifications it is necessary to run an
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-26
... applications or print-to-PDF format, and not in a scanned format, at http://www.ferc.gov/docs-filing/efiling....3d 1342 (DC Cir. 2009). \\5\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693... applications or print-to-PDF format and not in a scanned format. Commenters filing electronically do not need...
Capturing and Understanding Experiment Provenance using NiNaC
NASA Astrophysics Data System (ADS)
Rosati, C.
2017-12-01
A problem the model development team at GFDL faces is determining climate model experiment provenance. Each experiment is configured with at least one configuration file, which may reference other files. The experiment then passes through three phases before completion. Configuration files or other input files may be modified between phases. Finding the modifications later is tedious due to the expanse of the experiment input and the duplication across phases, and determining provenance may be impossible if any file has been changed or deleted. To reduce these efforts and address these problems, we propose a new toolset, NiNaC, for archiving experiment provenance from the beginning of the experiment to the end and every phase in-between. Each of the three phases, check-out, build, and run, depends on the previous phase. We use a graph to model the phase dependencies. Let each phase be represented by a node. Let each edge correspond to a dependency between phases, where the node incident with the tail depends on the node incident with the head. It follows that the dependency graph is a tree. We reduce the problem to finding the lowest common ancestor and diffing the successor nodes. All files related to the input for a phase are assigned a checksum. A new file is created to aggregate the checksums. Then each phase is assigned a checksum of the aforementioned file as an identifier. Any change to part of a phase configuration will create unique checksums in all subsequent phases. Finding differences between experiments with this toolset is as simple as diffing two files containing checksums found by traversing the tree. One new benefit is that this toolset now allows differences in source code to be found after experiments are run, which was previously impossible for executables that cannot be linked to a known version-controlled source code. Knowing that these changes exist allows us to give priority to help desk tickets concerning unmodified supported experiment releases, and to minimize effort spent on unsupported experiments. It is also possible that a change is made, either by mistake or by system error; NiNaC would find the exact file in the precise phase with the change. In this way, NiNaC makes provenance tracking less tedious and solves problems where tracking provenance may previously have been impossible.
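The checksum-of-checksums scheme is easy to picture in code; a Python sketch (the hash algorithm and the aggregation format here are our own choices, not necessarily NiNaC's):

    import hashlib

    def file_checksum(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    def phase_id(paths):
        # Aggregate the per-file checksums, then checksum the aggregate:
        # any change to any input file changes every downstream phase id.
        lines = sorted(f"{file_checksum(p)}  {p}" for p in paths)
        return hashlib.sha256("\n".join(lines).encode()).hexdigest()

Diffing the aggregate listings of two phases whose identifiers differ then points at exactly the files that changed.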
A walk through the planned CS building. M.S. Thesis
NASA Technical Reports Server (NTRS)
Khorramabadi, Delnaz
1991-01-01
Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a Building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model and to change surface attributes. Our display system provides a simple-to-use user interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and viewing direction on it, is displayed at all times. The scene illumination can be manipulated, by interactively controlling intensity values for 5 light sources.
FNV: light-weight flash-based network and pathway viewer.
Dannenfelser, Ruth; Lachmann, Alexander; Szenk, Mariola; Ma'ayan, Avi
2011-04-15
Network diagrams are commonly used to visualize biochemical pathways by displaying the relationships between genes, proteins, mRNAs, microRNAs, metabolites, regulatory DNA elements, diseases, viruses and drugs. While there are several currently available web-based pathway viewers, there is still room for improvement. To this end, we have developed a flash-based network viewer (FNV) for the visualization of small to moderately sized biological networks and pathways. Written in Adobe ActionScript 3.0, the viewer accepts simple Extensible Markup Language (XML) formatted input files to display pathways in vector graphics on any web page, providing flexible layout options and interactivity with the user through tool tips, hyperlinks and the ability to rearrange nodes on the screen. FNV was utilized as a component in several web-based systems, namely Genes2Networks, Lists2Networks, KEA, ChEA and PathwayGenerator. In addition, FNV can be used to embed pathways inside PDF files for the communication of pathways in soft publication materials. FNV is available for use and download, along with the supporting documentation and sample networks, at http://www.maayanlab.net/FNV. avi.maayan@mssm.edu.
Boubela, Roland N.; Kalcher, Klaudius; Huf, Wolfgang; Našel, Christian; Moser, Ewald
2016-01-01
Technologies for scalable analysis of very large datasets have emerged in the domain of internet computing, but are still rarely used in neuroimaging despite the existence of data and research questions in need of efficient computation tools especially in fMRI. In this work, we present software tools for the application of Apache Spark and Graphics Processing Units (GPUs) to neuroimaging datasets, in particular providing distributed file input for 4D NIfTI fMRI datasets in Scala for use in an Apache Spark environment. Examples for using this Big Data platform in graph analysis of fMRI datasets are shown to illustrate how processing pipelines employing it can be developed. With more tools for the convenient integration of neuroimaging file formats and typical processing steps, big data technologies could find wider endorsement in the community, leading to a range of potentially useful applications especially in view of the current collaborative creation of a wealth of large data repositories including thousands of individual fMRI datasets. PMID:26778951
HomSI: a homozygous stretch identifier from next-generation sequencing data.
Görmez, Zeliha; Bakir-Gungor, Burcu; Sagiroglu, Mahmut Samil
2014-02-01
In consanguineous families, as a result of inheriting the same genomic segments through both parents, individuals have stretches of their genomes that are homozygous. This situation leads to the prevalence of recessive diseases among the members of these families. Homozygosity mapping is based on this observation, and in consanguineous families several recessive disease genes have been discovered with the help of this technique. Researchers typically use single nucleotide polymorphism arrays to determine the homozygous regions and then search for the disease gene by sequencing the genes within the candidate disease loci. Recently, the advent of next-generation sequencing has enabled the concurrent identification of homozygous regions and the detection of mutations relevant for diagnosis, using data from a single sequencing experiment. In this respect, we have developed a novel tool that identifies homozygous regions using deep sequence data. Using *.vcf (variant call format) files as input, our program identifies the majority of homozygous regions found by microarray single nucleotide polymorphism genotype data. HomSI software is freely available at www.igbam.bilgem.tubitak.gov.tr/softwares/HomSI, with an online manual.
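A toy Python scan for homozygous stretches over position-sorted (chromosome, position, genotype) records from a single-sample VCF illustrates the underlying idea (HomSI's actual criteria, thresholds, and VCF handling are more involved):

    def is_homozygous(genotype):                 # e.g. "1/1" or "0|0"
        alleles = genotype.replace("|", "/").split("/")
        return len(set(alleles)) == 1 and "." not in alleles

    def homozygous_runs(records, min_sites=25):  # records from one chromosome
        run, runs = [], []
        for chrom, pos, gt in records:
            if is_homozygous(gt):
                run.append(pos)
            else:
                if len(run) >= min_sites:
                    runs.append((run[0], run[-1]))
                run = []
        if len(run) >= min_sites:
            runs.append((run[0], run[-1]))
        return runs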
Geologic and structure map of the Choteau 1 degree by 2 degrees Quadrangle, western Montana
Mudge, Melville R.; Earhart, Robert L.; Whipple, James W.; Harrison, Jack E.
1982-01-01
The geologic and structure map of the Choteau 1 x 2 degree quadrangle (Mudge and others, 1982) was originally converted to a digital format by Jeff Silkwood (U.S. Forest Service) and completed by U.S. Geological Survey staff and a contractor at the Spokane Field Office (WA) in 2000 for input into a geographic information system (GIS). The resulting digital geologic map (GIS) database can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:250,000 (e.g., 1:100,000 or 1:24,000). The digital geologic map graphics and plot files (chot250k.gra/.hp/.eps and chot-map.pdf) that are provided in the digital package are representations of the digital database. They are not designed to be cartographic products.
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate a considerable amount of data that often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).
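As an illustration of the kind of operation HTDP automates through its GUI, the following sketch filters a tab-delimited file against an external criteria file. The file names and key column are hypothetical, and HTDP itself is a Java program rather than this Python fragment.

```python
# Sketch of an HTDP-style operation: keep only rows whose key column matches
# an external criteria file, writing survivors to a new tab-delimited file.
import csv

def filter_by_list(data_path, keep_list_path, out_path, key_column=0):
    with open(keep_list_path) as fh:
        keep = {line.strip() for line in fh if line.strip()}
    with open(data_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            if row and row[key_column] in keep:
                writer.writerow(row)

# e.g. filter_by_list("variants.tsv", "gene_list.txt", "filtered.tsv")
```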
FERMI/GLAST Integrated Trending and Plotting System Release 5.0
NASA Technical Reports Server (NTRS)
Ritter, Sheila; Brumer, Haim; Reitan, Denise
2012-01-01
An Integrated Trending and Plotting System (ITPS) is a trending, analysis, and plotting system used by space missions to determine the performance and status of a spacecraft and its instruments. ITPS supports several NASA mission operational control centers, providing engineers, ground controllers, and scientists with access to the entire spacecraft telemetry data archive for the life of the mission, and includes a secure Web component for remote access. FERMI/GLAST ITPS Release 5.0 features include the option to display dates (yyyy/ddd) instead of orbit numbers along the orbital Long-Term Trend (LTT) plot axis, the ability to save statistics from daily production plots as image files, and removal of redundant edit/create Input Definition File (IDF) screens. Other features are a fix to address invalid packet lengths, a change in the naming convention of image files so they can be used in scripts, the ability to save all ITPS plot images (from Windows or the Web) in GIF or PNG format, the ability to specify ymin and ymax on plots where previously only the desired range could be specified, Web interface capability to plot IDFs that contain out-of-order page and plot numbers, and a fix to change all default file names to show yyyydddhhmmss time stamps instead of hhmmssdddyyyy. A Web interface capability sorts files based on modification date (with the newest at top), and the statistics block can be displayed via the Web interface. Via the Web, users can graphically view the volume of telemetry data from each day contained in the ITPS archive in the Web digest. The ITPS could also be used in nonspace fields that need to plot or trend data, including financial and banking systems, aviation and transportation systems, healthcare and educational systems, sales and marketing, and housing and construction.
Bałut, Magdalena; Buckley, Patrick G.; Ochocka, J. Renata; Bartoszewski, Rafał; Crossman, David K.; Messiaen, Ludwine M.; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate a considerable amount of data that often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp). PMID:29432475
Building accurate historic and future climate MEPDG input files for Louisiana DOTD.
DOT National Transportation Integrated Search
2017-02-01
The pavement design process (originally MEPDG, then DARWin-ME, and now Pavement ME Design) requires a multi-year set of hourly climate input data that influence pavement material properties. In Louisiana, the software provides nine locations with c...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2010 CFR
2010-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2013 CFR
2013-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2012 CFR
2012-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2014 CFR
2014-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2011 CFR
2011-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
Data Science Bowl Launched to Improve Lung Cancer Screening | Division of Cancer Prevention
[[{"fid":"2078","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl Logo","field_file_image_title_text[und][0][value]":"Data Science Bowl Logo","field_folder[und]":"76"},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Data Science Bowl
Code of Federal Regulations, 2014 CFR
2014-04-01
... submit a public version of a database in pdf format. The public version of the database must be publicly... interested party that files with the Department a request for an expedited antidumping review, an..., whichever is later. If the interested party that files the request is unable to locate a particular exporter...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2010 CFR
2010-10-01
... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2010-10-01 2010-10-01 false What are IBFS file numbers? 1.10008 Section 1...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2011 CFR
2011-10-01
... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2011-10-01 2011-10-01 false What are IBFS file numbers? 1.10008 Section 1...
2001-11-01
...that there were no target misses. The Hellfire missile does not have a depleted uranium head. ... 2.2.2.3 Tank movement. During the test, the ... guide other users through the use of this complicated program. The input data files for NOISEMAP consist of a root file name with several extensions ... SOURCES subdirectory. This file will have the root file name followed by an accession number, then the .bps extension. The user must check the *.log
VizieR Online Data Catalog: Radiative forces for stellar envelopes (Seaton, 1997)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
2000-02-01
(1) Primary data files, stages.zz. These files give data for the calculation of radiative accelerations, GRAD, for elements with nuclear charge zz. Data are available for zz=06, 07, 08, 10, 11, 12, 13, 14, 16, 18, 20, 24, 25, 26 and 28. Calculations are made using data from the Opacity Project (see papers SYMP and IXZ). The data are given for each ionisation stage, j. They are tabulated on a mesh of (T, Ne, CHI) where T is temperature, Ne electron density and CHI is abundance multiplier. The files include data for ionisation fractions, for each (T, Ne). The file contents are described in the paper ACC and as comments in the code add.f. (2) Code add.f. This reads a file stages.zz and creates a file acc.zz giving radiative accelerations averaged over ionisation stages. The code prompts for names of input and output files. The code, as provided, gives equal weights (as defined in the paper ACC) to all stages. The weights are set in SUBROUTINE WEIGHTS, which could be changed to give any weights preferred by the user. The dependence of diffusion coefficients on ionisation stage is given by a function ZET, which is defined in SUBROUTINE ZETA. The expressions used for ZET are as given in the paper. The user can change that subroutine if other expressions are preferred. The output file contains values, ZETBAR, of ZET, averaged over ionisation stages. (3) Files acc.zz. Radiative accelerations computed using add.f as provided. The user will need to run the code add.f only if it is required to change the subroutines WEIGHTS or ZETA. The contents of the files acc.zz are described in the paper ACC and in comments contained in the code add.f. (4) Code accfit.f. This code gives radiative accelerations, and some related data, for a stellar model. Methods used to interpolate data to the values of (T, RHO) for the stellar model are based on those used in the code opfit.for (see the paper OPF). The executable file accfit.com runs accfit.f. It uses a list of files given in accfit.files (see that file for further description). The mesh used for the abundance-multiplier CHI on the output file will generally be finer than that used in the input files acc.zz. The mesh to be used is specified on a file chi.dat. For a test run, the stellar model used is given in the file 10000_4.2 (Teff=10000 K, LOG10(g)=4.2). The output file from that test run is acc100004.2. The contents of the output file are described in the paper ACC and as comments in the code accfit.f. (5) The code diff.f. This code reads the output file (e.g. acc1000004.2) created by accfit.f. For any specified depth point in the model and value of CHI, it gives values of radiative accelerations, the quantity ZETBAR required for calculation of diffusion coefficients, and Rosseland-mean opacities. The code prompts for input data. It creates a file recording all data calculated. The code diff.f is intended for incorporation, as a set of subroutines, in codes for diffusion calculations. (1 data file).
Web-phreeq: a WWW instructional tool for modeling the distribution of chemical species in water
NASA Astrophysics Data System (ADS)
Saini-Eidukat, Bernhardt; Yahin, Andrew
1999-05-01
A WWW-based tool, WEB-PHREEQ, was developed for classroom teaching and for routine calculation of low temperature aqueous speciation. Accessible from any computer with an internet-connected, forms-capable WWW browser, WEB-PHREEQ provides a user interface and other support for modeling, creates a properly formatted input file, passes it to the public domain program PHREEQC and returns the output to the WWW browser. Users can calculate the equilibrium speciation of a solution over a range of temperatures or can react solid minerals or gases with a particular water and examine the resulting chemistry. WEB-PHREEQ is one of a number of interactive distributed-computing programs available on the WWW that are of interest to geoscientists.
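The wrapper pattern described above (build an input deck from form values, run PHREEQC, return the output) can be sketched as below. The executable name, database file, and solution values are assumptions; the SOLUTION/END keyword structure follows standard PHREEQC input conventions, but treat this as illustrative rather than WEB-PHREEQ's actual code.

```python
# Hedged sketch of a WEB-PHREEQ-style wrapper (not the actual server code).
import subprocess

def run_speciation(temp_c, ph, analytes):
    """analytes: mapping like {"Ca": 40, "Cl": 70} in mg/L (illustrative)."""
    deck = ["SOLUTION 1",
            f"    temp  {temp_c}",
            f"    pH    {ph}",
            "    units mg/L"]
    deck += [f"    {element}  {conc}" for element, conc in analytes.items()]
    deck.append("END\n")
    with open("web.pqi", "w") as fh:
        fh.write("\n".join(deck))
    # Assumed CLI convention: phreeqc <input> <output> <database>
    subprocess.run(["phreeqc", "web.pqi", "web.pqo", "phreeqc.dat"], check=True)
    with open("web.pqo") as fh:
        return fh.read()
```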
Elaborate SMART MCNP Modelling Using ANSYS and Its Applications
NASA Astrophysics Data System (ADS)
Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng
2017-09-01
An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into an MCNP input in the geometry part. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and the assembly weighting factor calculated by DORT, a deterministic Sn code.
User's guide to revised method-of-characteristics solute-transport model (MOC--version 31)
Konikow, Leonard F.; Granato, G.E.; Hornberger, G.Z.
1994-01-01
The U.S. Geological Survey computer model to simulate two-dimensional solute transport and dispersion in ground water (Konikow and Bredehoeft, 1978; Goode and Konikow, 1989) has been modified to improve management of input and output data and to provide progressive run-time information. All opening and closing of files are now done automatically by the program. Names of input data files are entered either interactively or using a batch-mode script file. Names of output files, created automatically by the program, are based on the name of the input file. In the interactive mode, messages are written to the screen during execution to allow the user to monitor the status and progress of the simulation and to anticipate total running time. Information reported and updated during a simulation includes the current pumping period and time step, number of particle moves, and percentage completion of the current time step. The batch mode enables a user to run a series of simulations consecutively, without additional control. A report of the model's activity in the batch mode is written to a separate output file, allowing later review. The user has several options for creating separate output files for different types of data. The formats are compatible with many commercially available applications, which facilitates graphical postprocessing of model results.
Geohydrology and Evaluation of Stream-Aquifer Relations in the Apalachicola-Chattahoochee-Flint River Basin, Southeastern Alabama, Northwestern Florida, and Southwestern Georgia
Torak, Lynn J.; Davis, Gary S.; Strain, George A.; Herndon, Jennifer G.
The lower Apalachicola-Chattahoochee-Flint River Basin is underlain by Coastal Plain sediments of pre-Cretaceous to Quaternary age consisting of alternating units of sand, clay, sandstone, dolomite, and limestone that gradually thicken and dip gently to the southeast. The stream-aquifer system consists of carbonate (limestone and dolomite) and clastic sediments, which define the Upper Floridan aquifer and Intermediate system, in hydraulic connection with the principal rivers of the basin and other surface-water features, natural and man-made. Separate digital models of the Upper Floridan aquifer and Intermediate system were constructed by using the U.S. Geological Survey's MODular Finite-Element model of two-dimensional ground-water flow, based on conceptualizations of the stream-aquifer system, and calibrated to drought conditions of October 1986. Sensitivity analyses performed on the models indicated that aquifer hydraulic conductivity, lateral and vertical boundary flows, and pumpage have a strong influence on ground-water levels. Simulated pumpage increases in the Upper Floridan aquifer, primarily in the Dougherty Plain physiographic district of Georgia, caused significant reductions in aquifer discharge to streams that eventually flow to Lake Seminole and the Apalachicola River and Bay. Simulated pumpage increases greater than 3 times the October 1986 rates caused drying of some stream reaches and parts of the Upper Floridan aquifer in Georgia. Water budgets prepared from simulation results indicate that ground-water discharge to streams and recharge by horizontal and vertical flow are the principal mechanisms for moving water through the flow system. The potential for changes in ground-water quality is high in areas where chemical constituents can be mobilized by these mechanisms. Less than 2 percent of ground-water discharge to streams comes from the Intermediate system; thus, it plays a minor role in the hydrodynamics of the stream-aquifer system.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-22
... print-to-PDF format and not in a scanned format. Mail/Hand Delivery: Commenters unable to file comments.... FERC, 564 F.3d 1342 (DC Cir. 2009). 3. In March 2007, the Commission issued Order No. 693, evaluating... should be filed in native applications or print-to-PDF format and not in a scanned format. Commenters...
Code of Federal Regulations, 2010 CFR
2010-10-01
... recording under § 67.200 may be submitted in portable document format (.pdf) as an attachment to electronic... submitted for filing in .pdf format pertains to a vessel that is not a currently documented vessel, a... with the National Vessel Documentation Center or must be submitted in .pdf format with the instrument...
1985-02-01
[Figure A-11: typical control-card sequence; listing garbled in source] ...initiated via the LINK1 statement, in which the second term is the input data file. The permanent file name KMDM, shown in conjunction with local file
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, John; Castillo, Andrew
2016-09-21
This software contains a set of Python modules (input, search, cluster, analysis). These modules read input files containing spatial coordinates and associated attributes, which can be used to perform nearest-neighbor search (spatial indexing via a kd-tree), cluster analysis/identification, and calculation of spatial statistics for analysis.
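A minimal sketch of the same pipeline using SciPy is shown below; the input file name, cutoff radius, and use of cKDTree are illustrative stand-ins for the modules described above, not their actual code.

```python
# Illustrative sketch with SciPy: build a spatial index, query nearest
# neighbors, and collect within-radius pairs as raw material for clustering.
import numpy as np
from scipy.spatial import cKDTree

points = np.loadtxt("coords.csv", delimiter=",")   # hypothetical input file

tree = cKDTree(points)

# Nearest neighbor of every point (k=2: the closest hit is the point itself).
dists, idx = tree.query(points, k=2)
nn_dist, nn_idx = dists[:, 1], idx[:, 1]

# Pairs of points closer than a cutoff radius; connected components of this
# pair graph give one simple cluster identification. Cutoff is illustrative.
pairs = tree.query_pairs(r=0.5)
```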
VizieR Online Data Catalog: Partition functions for molecules and atoms (Barklem+, 2016)
NASA Astrophysics Data System (ADS)
Barklem, P. S.; Collet, R.
2016-02-01
The results and input data are presented in the following files. Table 1 contains dissociation energies from the literature, and final adopted values, for 291 molecules. The literature values are from the compilations of Huber & Herzberg (1979, Constants of Diatomic Molecules (Van Nostrand Reinhold), Luo (2007, Comprehensive Handbook of Chemical Bond Energies (CRC Press)) and G2 theory calculations of Curtiss et al. (1991, J. Chem. Phys., 94, 7221). Table 2 contains the input data for the molecular calculations including adopted dissociation energy, nuclear spins, molecular spectroscopic constants and their sources. There are 291 files, one for each molecule, labelled by the molecule name. The various molecular spectroscopic constants are as defined in the paper. Table 4 contains the first, second and third ionisation energies for all chemical elements from H to U. The data comes from the CRC Handbook of Chemistry and Physics (Haynes, W.M. 2010, CRC Handbook of Chemistry and Physics, 91st edn. (CRC Press, Taylor and Francis Group)). Table 5a contains a list of keys to bibliographic references for the atomic energy level data that was extracted from NIST Atomic Spectra Database and used in the present work to compute atomic partition functions. The citation keys are abbreviations of the full bibliographic references which are made available in Table 5b in BibTeX format. Table 5b contains the full bibliographic references for the atomic energy level data that was extracted from the NIST Atomic Spectra Database. Table 6 contains tabulated partition function data as a function of temperature for 291 molecules. Table 7 contains tabulated equilibrium constant data as a function of temperature for 291 molecules. Table 8 contains tabulated partition function data as a function of temperature for 284 atoms and ions. The paper should be consulted for further details. (10 data files).
EPA Remote Sensing Information Gateway
NASA Astrophysics Data System (ADS)
Paulsen, H. K.; Szykman, J. J.; Plessel, T.; Freeman, M.; Dimmick, F.
2009-12-01
The Remote Sensing Information Gateway (RSIG) was developed by the U.S. Environmental Protection Agency (EPA) to assist researchers in easily obtaining and combining a variety of environmental datasets related to air quality research. Current datasets available include, but are not limited to, surface PM2.5 and O3 data, satellite-derived aerosol optical depth, and 3-dimensional output from U.S. EPA's Models-3/Community Multi-scale Air Quality (CMAQ) modeling system. The presentation will include a demonstration that illustrates several scenarios of how researchers use the tool to help them visualize and obtain data for their work, with a particular focus on episode analysis related to biomass burning impacts on air quality. The presentation will provide an overview of how RSIG works and how the code has been, and can be, adapted for other projects. One example is the Virtual Estuary, which focuses on automating the retrieval and pre-processing of a variety of data needed for estuarine research. RSIG is available to the community and can be accessed online at http://www.epa.gov/rsig. Once the Java policy file is configured on your computer, you can run the RSIG applet and connect to the RSIG server to visualize and retrieve available data sets. The applet allows the user to specify the temporal/spatial areas of interest and the types of data to retrieve. The applet then communicates with RSIG subsetter codes located on the data owners' remote servers; the subsetter codes assemble and transfer via ordinary Internet protocols only the specified data to the researcher's computer. This is much faster than the usual method of transferring large files via FTP and greatly reduces network traffic. The RSIG applet then visualizes the transferred data on a latitude-longitude map, automatically locating the data in the correct geographic position. Images, animations, and aggregated data can be saved or exported in a variety of data formats: Binary External Data Representation (XDR) format (simple, efficient), NetCDF-COARDS format, NetCDF-IOAPI format (regridding the data to a CMAQ grid), HDF (unsubsetted satellite files), ASCII tab-delimited spreadsheet, MCMC (used for input into the HB model), PNG images, MPG movies, KMZ movies (for display in Google Earth and similar applications), GeoTIFF RGB format and 32-bit float format. RSIG's source codes are freely available to researchers with permission from the EPA principal investigator, Dr. Jim Szykman; contacts for obtaining the code are available at the RSIG website.
Pancreatic Cancer Detection Consortium (PCDC) | Division of Cancer Prevention
[[{"fid":"2256","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"A 3-dimensional image of a human torso highlighting the pancreas.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"A 3-dimensional image of a human torso
Reprocessing of multi-channel seismic-reflection data collected in the Beaufort Sea
Agena, W.F.; Lee, Myung W.; Hart, P.E.
2000-01-01
Contained on this set of two CD-ROMs are stacked and migrated multi-channel seismic-reflection data for 65 lines recorded in the Beaufort Sea by the United States Geological Survey in 1977. All data were reprocessed by the USGS using updated processing methods resulting in improved interpretability. Each of the two CD-ROMs contains the following files: 1) 65 files containing the digital seismic data in standard, SEG-Y format; 2) 1 file containing navigation data for the 65 lines in standard SEG-P1 format; 3) an ASCII text file with cross-reference information for relating the sequential trace numbers on each line to cdp numbers and shotpoint numbers; 4) 2 small scale graphic images (stacked and migrated) of a segment of line 722 in Adobe Acrobat (R) PDF format; 5) a graphic image of the location map, generated from the navigation file; 6) PlotSeis, an MS-DOS Application that allows PC users to interactively view the SEG-Y files; 7) a PlotSeis documentation file; and 8) an explanation of the processing used to create the final seismic sections (this document).
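For readers who want to inspect the SEG-Y files programmatically rather than through PlotSeis, the sketch below reads the 3200-byte textual header and a few standard binary-header words. Byte offsets follow the classic SEG-Y convention and the file name is hypothetical; verify both against the cross-reference file on the CD-ROMs.

```python
# Hedged sketch: peek at a SEG-Y file's headers. Offsets follow the classic
# SEG-Y convention (3200-byte EBCDIC text header + 400-byte binary header).
import struct

with open("line_722.sgy", "rb") as fh:              # hypothetical file name
    text_hdr = fh.read(3200)                        # EBCDIC card images
    bin_hdr = fh.read(400)

print(text_hdr.decode("cp500", errors="replace")[:80])   # first "card"

sample_interval_us = struct.unpack(">h", bin_hdr[16:18])[0]
samples_per_trace  = struct.unpack(">h", bin_hdr[20:22])[0]
format_code        = struct.unpack(">h", bin_hdr[24:26])[0]   # 1 = IBM float
print(sample_interval_us, samples_per_trace, format_code)
```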
Manoukis, Nicholas C
2007-07-01
There has been a great increase in both the number of population genetic analysis programs and the size of data sets being studied with them. Since the file formats required by the most popular and useful programs are variable, automated reformatting or conversion between them is desirable. formatomatic is an easy-to-use program that can read allelic data files in genepop, raw (csv) or convert formats and create data files in nine formats: raw (csv), arlequin, genepop, immanc/bayesass+, migrate, newhybrids, msvar, baps and structure. Use of formatomatic should greatly reduce time spent reformatting data sets and avoid unnecessary errors.
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
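The acceptance-fraction estimate that underlies such a correction factor can be illustrated with a small Monte Carlo sketch. The cell geometry below is a placeholder, and the exact relation between the acceptance fraction and the applied w̄ correction should be taken from the ADVANTG documentation rather than from this fragment.

```python
# Sketch: estimate the acceptance fraction of a cell-rejection source by
# sampling its bounding box. The geometry test is a stand-in for MCNP's cell.
import math
import random

def inside_cell(x, y, z):
    return x * x + y * y + z * z < 1.0      # hypothetical cell: unit sphere

N = 100_000
hits = sum(inside_cell(random.uniform(-1, 1),
                       random.uniform(-1, 1),
                       random.uniform(-1, 1)) for _ in range(N))
p = hits / N                                 # acceptance fraction
sigma_p = math.sqrt(p * (1.0 - p) / N)       # binomial uncertainty
print(f"acceptance fraction = {p:.4f} +/- {sigma_p:.4f}")
```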
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokine, Alexandre
2011-10-01
Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.
LAS - LAND ANALYSIS SYSTEM, VERSION 5.0
NASA Technical Reports Server (NTRS)
Pease, P. B.
1994-01-01
The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image-to-image and map-to-map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, Fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS, and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format.
This program was developed in 1986 and last updated in 1992.
2012-01-01
Background We present the Biological Observation Matrix (BIOM, pronounced “biome”) format: a JSON-based file format for representing arbitrary observation by sample contingency tables with associated sample and observation metadata. As the number of categories of comparative omics data types (collectively, the “ome-ome”) grows rapidly, a general format to represent and archive this data will facilitate the interoperability of existing bioinformatics tools and future meta-analyses. Findings The BIOM file format is supported by an independent open-source software project (the biom-format project), which initially contains Python objects that support the use and manipulation of BIOM data in Python programs, and is intended to be an open development effort where developers can submit implementations of these objects in other programming languages. Conclusions The BIOM file format and the biom-format project are steps toward reducing the “bioinformatics bottleneck” that is currently being experienced in diverse areas of biological sciences, and will help us move toward the next phase of comparative omics where basic science is translated into clinical and environmental applications. The BIOM file format is currently recognized as an Earth Microbiome Project Standard, and as a Candidate Standard by the Genomic Standards Consortium. PMID:23587224
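A hand-built example of the JSON structure can make the format concrete. The field names below follow the published BIOM 1.0 layout, but this fragment is illustrative; in practice the biom-format project's Python objects construct and validate these tables.

```python
# Illustrative BIOM 1.0-style table written by hand with the json module.
import datetime
import json

table = {
    "id": "example1",
    "format": "Biological Observation Matrix 1.0.0",
    "format_url": "http://biom-format.org",
    "type": "OTU table",
    "generated_by": "sketch",
    "date": datetime.datetime.now().isoformat(),
    "matrix_type": "sparse",
    "matrix_element_type": "int",
    "shape": [2, 3],                              # observations x samples
    "rows": [{"id": "OTU_1", "metadata": None},
             {"id": "OTU_2", "metadata": None}],
    "columns": [{"id": "S1", "metadata": None},
                {"id": "S2", "metadata": None},
                {"id": "S3", "metadata": None}],
    "data": [[0, 0, 5], [1, 2, 3]],               # [row, column, value]
}
with open("example.biom", "w") as fh:
    json.dump(table, fh)
```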
The Aegean Sea marine security decision support system
NASA Astrophysics Data System (ADS)
Perivoliotis, L.; Krokos, G.; Nittis, K.; Korres, G.
2011-05-01
As part of the integrated ECOOP (European Coastal Sea Operational observing and Forecasting System) project, HCMR upgraded the already existing standalone Oil Spill Forecasting System for the Aegean Sea, initially developed for the Greek Operational Oceanography System (POSEIDON), into an active element of the European Decision Support System (EuroDeSS). The system is accessible through a user friendly web interface where the case scenarios can be fed into the oil spill drift model component, while the synthetic output contains detailed information about the distribution of oil spill particles and the oil spill budget and it is provided both in text based ECOOP common output format and as a series of sequential graphics. The main development steps that were necessary for this transition were the modification of the forcing input data module in order to allow the import of other system products which are usually provided in standard formats such as NetCDF and the transformation of the model's calculation routines to allow use of current, density and diffusivities data in z instead of sigma coordinates. During the implementation of the Aegean DeSS, the system was used in operational mode in order to support the Greek marine authorities in handling a real accident that took place in the North Aegean area. Furthermore, the introduction of common input and output files by all the partners of EuroDeSS extended the system's interoperability thus facilitating data exchanges and comparison experiments.
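The sigma-to-z transformation mentioned above amounts to interpolating each profile from terrain-following levels onto fixed depths. A minimal NumPy sketch, with illustrative depths, level counts, and stand-in data, is:

```python
# Sketch: interpolate a current profile from sigma levels onto fixed z levels.
import numpy as np

H = 120.0                                   # water depth in metres (illustrative)
sigma = np.linspace(-1.0, 0.0, 25)          # sigma levels, -1 = bottom, 0 = surface
u_sigma = np.random.rand(25)                # stand-in current component on sigma levels

z_sigma = sigma * H                         # depths of the sigma levels (m, negative down)
z_target = np.arange(-110.0, 0.0, 10.0)     # fixed z levels wanted by the drift model
u_z = np.interp(z_target, z_sigma, u_sigma) # xp (z_sigma) is already increasing
```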
The Aegean sea marine security decision support system
NASA Astrophysics Data System (ADS)
Perivoliotis, L.; Krokos, G.; Nittis, K.; Korres, G.
2011-10-01
As part of the integrated ECOOP (European Coastal Sea Operational observing and Forecasting System) project, HCMR upgraded the already existing standalone Oil Spill Forecasting System for the Aegean Sea, initially developed for the Greek Operational Oceanography System (POSEIDON), into an active element of the European Decision Support System (EuroDeSS). The system is accessible through a user friendly web interface where the case scenarios can be fed into the oil spill drift model component, while the synthetic output contains detailed information about the distribution of oil spill particles and the oil spill budget and it is provided both in text based ECOOP common output format and as a series of sequential graphics. The main development steps that were necessary for this transition were the modification of the forcing input data module in order to allow the import of other system products which are usually provided in standard formats such as NetCDF and the transformation of the model's calculation routines to allow use of current, density and diffusivities data in z instead of sigma coordinates. During the implementation of the Aegean DeSS, the system was used in operational mode in order to support the Greek marine authorities in handling a real accident that took place in North Aegean area. Furthermore, the introduction of common input and output files by all the partners of EuroDeSS extended the system's interoperability thus facilitating data exchanges and comparison experiments.
76 FR 47606 - Sport Fishing and Boating Partnership Council
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
... the following formats: One hard copy with original signature, and one electronic copy via e- mail (acceptable file formats are Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or rich text file...
Measles, Mumps, and Rubella (MMR) Vaccination: What Everyone Should Know
... rubella combination vaccine Measles=Rubeola Measles=”10-day”, “hard” and “red” measles MMRV=measles, mumps, rubella, and varicella combination vaccine
78 FR 19152 - Revisions to Modeling, Data, and Analysis Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-29
... processing software should be filed in native applications or print-to-PDF format and not in a scanned format...,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). 3. In March 2007, the... print-to-PDF format and not in a scanned format. Commenters filing electronically do not need to make a...
76 FR 75898 - Sport Fishing and Boating Partnership Council
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-05
... following formats: One hard copy with original signature, and one electronic copy via email (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Please submit your statement to Douglas Hobbs, Council Coordinator (see FOR FURTHER...
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
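Post-processing the NetCDF outputs is straightforward from Python; the sketch below assumes the netCDF4 package is installed, and the file and variable names are placeholders since they depend on the chosen output mode.

```python
# Hedged sketch: open an OpenSWPC NetCDF output; variable names depend on the
# chosen output mode, so discover them first.
from netCDF4 import Dataset

with Dataset("swpc_snapshot.nc") as nc:     # hypothetical output file
    print(nc.variables.keys())              # see what the file provides
    v = nc.variables["Vz"][:]               # placeholder variable name
```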
14 CFR 221.195 - Requirement for filing printed material.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.195 Requirement for filing printed material. (a) Any tariff, or revision thereto, filed in paper format which accompanies....190(b). Further, such paper tariff, or revision thereto, shall be filed in accordance with the...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
NIMBUS 7 Earth Radiation Budget (ERB) Matrix User's Guide. Volume 2: Tape Specifications
NASA Technical Reports Server (NTRS)
Ray, S. N.; Vasanth, K. L.
1984-01-01
The ERB MATRIX tape is generated by an IBM 3081 computer program and is a 9-track, 1600 BPI tape. The gross format of the tape, given on page 1, shows an initial standard header file followed by data files. The standard header file contains two standard header records. A trailing documentation file (TDF) is the last file on the tape. Pages 9 through 17 describe, in detail, the standard header file and the TDF. The data files contain data for 37 different ERB parameters. Each file has data based on either a daily, 6-day cyclic, or monthly time interval. There are three types of physical records in the data files: namely, the world grid physical record, the documentation mercator/polar map projection physical record, and the monthly calibration physical record. The manner in which the data for the 37 ERB parameters are stored in the physical records comprising the data files is given in the gross format section.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by Keyence VK software; custom analysis; no off-the-shelf way to read the file; reading the binary data in a vk4 file; various offsets in decimal lines; finding the height image data directly in MATLAB; binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file; linear in the low range; linear in the high range; gamma correction for vk4 files; computing the gamma intensity correction; and observations.
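The slides present the reader in MATLAB; an equivalent pattern in Python is sketched below. The header offsets and field layout shown here are assumptions for illustration only and must be replaced with the actual vk4 offsets given in the slides.

```python
# Hedged Python analogue of the MATLAB pattern in the slides: read offsets,
# seek to the height image, reshape. Offsets/layout here are assumptions.
import struct

import numpy as np

with open("scan.vk4", "rb") as fh:          # hypothetical file name
    raw = fh.read()

# Assume a table of little-endian uint32 offsets near the file start.
(height_offset,) = struct.unpack_from("<I", raw, 28)      # placeholder position
width, height = struct.unpack_from("<II", raw, height_offset)
pixels = np.frombuffer(raw, dtype="<u4", count=width * height,
                       offset=height_offset + 8).reshape(height, width)
```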
d-Omix: a mixer of generic protein domain analysis tools.
Wichadakul, Duangdao; Numnark, Somrak; Ingsriswang, Supawadee
2009-07-01
Domain combination provides important clues to the roles of protein domains in protein function, interaction and evolution. We have developed a web server, d-Omix (a Mixer of Protein Domain Analysis Tools), intended as a unified platform to analyze, compare and visualize protein data sets in various aspects of protein domain combinations. With InterProScan files for protein sets of interest provided by users, the server incorporates four services for domain analyses. First, it constructs a protein phylogenetic tree based on a distance matrix calculated from protein domain architectures (DAs), allowing comparison with a sequence-based tree. Second, it calculates and visualizes the versatility, abundance and co-presence of protein domains via a domain graph. Third, it compares the similarity of proteins based on DA alignment. Fourth, it builds a putative protein network derived from domain-domain interactions from DOMINE. Users may select a variety of input data files and flexibly choose domain search tools (e.g. hmmpfam, superfamily) for a specific analysis. Results from d-Omix can be interactively explored and exported into various formats such as SVG, JPG, BMP and CSV. Users with only protein sequences could prepare an InterProScan file using a service provided by the server as well. The d-Omix web server is freely available at http://www.biotec.or.th/isl/Domix.
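One simple way to derive a DA-based distance, sketched below, is a Jaccard distance between two proteins' domain sets; d-Omix aligns full domain architectures, so treat this set-based version as a simplification for illustration.

```python
# Sketch: Jaccard distance between two proteins' domain sets.
def jaccard_distance(domains_a, domains_b):
    a, b = set(domains_a), set(domains_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Example with hypothetical Pfam accessions:
d = jaccard_distance(["PF00069", "PF07714"], ["PF00069", "PF00017"])
```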
Food Composition Database Format and Structure: A User Focused Approach
Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine
2015-01-01
This study aimed to investigate the needs of Australian food composition database users regarding database format and relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that a food composition database's format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is essential to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836
Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders
2018-01-01
Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data. PMID:29706879
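The on-disk layout Exdir specifies can be illustrated with a few standard-library and NumPy calls. The directory names and metadata below are illustrative, and the real Exdir Python library wraps this layout in an HDF5-like API rather than requiring manual calls like these.

```python
# Sketch of an Exdir-style on-disk layout (directories + YAML + .npy).
import os

import numpy as np
import yaml                                  # PyYAML, assumed installed

root = "session.exdir"                       # hypothetical experiment root
os.makedirs(os.path.join(root, "recording"), exist_ok=True)

with open(os.path.join(root, "attributes.yaml"), "w") as fh:
    yaml.safe_dump({"experimenter": "A. Name", "date": "2018-01-01"}, fh)

np.save(os.path.join(root, "recording", "data.npy"),
        np.zeros((4, 1000)))                 # dataset stored as binary NumPy
```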
Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders
2018-01-01
Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data.
NASA Technical Reports Server (NTRS)
Srivastava, R.; Reddy, T. S. R.
1996-01-01
This guide describes the input data required for steady or unsteady aerodynamic and aeroelastic analysis of propellers using PROP3D, and the output files generated. The aerodynamic forces are obtained by solving the three-dimensional unsteady, compressible Euler equations. A normal mode structural analysis is used to obtain the aeroelastic equations, which are solved using either a time domain or frequency domain solution method. Sample input and output files are included in this guide for steady aerodynamic analysis of single- and counter-rotation propellers, and aeroelastic analysis of a single-rotation propeller.
LocalMove: computing on-lattice fits for biopolymers
Ponty, Y.; Istrate, R.; Porcelli, E.; Clote, P.
2008-01-01
Given an input Protein Data Bank (PDB) file for a protein or RNA molecule, LocalMove is a web server that determines an on-lattice representation for the input biomolecule. The web server implements a Markov Chain Monte-Carlo algorithm with simulated annealing to compute an approximate fit for either the coarse-grain model or backbone model on either the cubic or face-centered cubic lattice. LocalMove returns a PDB file as output, as well as a dynamic movie of 3D images of intermediate conformations during the computation. The LocalMove server is publicly available at http://bioinformatics.bc.edu/clotelab/localmove/. PMID:18556754
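The accept/reject rule at the heart of Markov Chain Monte-Carlo with simulated annealing can be sketched in a few lines; the energies, cooling schedule, and move generator below are placeholders rather than LocalMove's actual parameters.

```python
# Sketch of the Metropolis accept/reject rule under simulated annealing.
import math
import random

def accept(delta_e, temperature):
    """Always accept improvements; accept worsenings with a probability
    that shrinks as the temperature cools."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)

T = 10.0
for step in range(10_000):
    delta_e = random.gauss(0.0, 1.0)     # stand-in for a trial move's cost change
    if accept(delta_e, T):
        pass                             # a real fit would apply the lattice move here
    T *= 0.9995                          # illustrative cooling schedule
```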
A Computer System for a Union Catalog: Theme and Variations *
Felter, Jacqueline W.; Tjoeng, Djoeng S.
1965-01-01
This article describes a computer system for the generation and maintenance of a union catalog of periodicals and for printouts of both the entire file and selected portions. Although the system was designed to meet the specifications of the Union Catalog of Medical Periodicals of New York, its use is not limited. Only the basic file maintenance program is indispensable; the subsidiary programs may be used as needed. The scope and content of the catalog are determined by the input. The preparation of the input is described in detail, with comment on the keypunching of library records. Applications to other kinds of catalogs are suggested. PMID:14271111
Recursive Feature Extraction in Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-14
ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhoods of each node, as well as recursive summaries of neighbors' features.
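The recursive construction can be sketched as follows; the edge-list file is hypothetical (headerless, two columns), and ReFeX's pruning of correlated features is omitted for brevity.

```python
# Sketch: degree as the base feature, then two rounds of appending the sum
# and mean of neighbors' previous-round features.
import csv
from collections import defaultdict

adj = defaultdict(set)
with open("edges.csv") as fh:                    # hypothetical headerless csv
    for u, v in csv.reader(fh):
        adj[u].add(v)
        adj[v].add(u)

feats = {n: [float(len(adj[n]))] for n in adj}   # base feature: degree
for _ in range(2):                               # recursion depth: 2 rounds
    new = {}
    for n in adj:
        neighbor_feats = [feats[m] for m in adj[n]]
        k = len(feats[n])
        sums = [sum(f[i] for f in neighbor_feats) for i in range(k)]
        means = [s / len(neighbor_feats) for s in sums]
        new[n] = feats[n] + sums + means
    feats = new
```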
An information retrieval system for research file data
Joan E. Lengel; John W. Koning
1978-01-01
Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....
Five Tips to Help Prevent Infections