Sample records for compilers operating systems

  1. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

The functional requirements to be met by the HAL/S-FC compiler, and the hardware and software compatibilities between the compiler system and the environment in which it operates, are defined. Associated runtime facilities and the interface with the Software Development Laboratory are specified. The construction of the HAL/S-FC system as functionally separate units and the interfaces between those units are described. An overview of the system's capabilities is presented and the hardware/operating system requirements are specified. The computer-dependent aspects of the HAL/S-FC are also specified. Compiler directives are included.

  2. Ada 9X Project Revision Request Report. Supplement 1

    DTIC Science & Technology

    1990-01-01

Non-portable use of operating system primitives or of Ada run time system internals. POSSIBLE SOLUTIONS: Mandate that compilers recognize tasks that...complex than a simple operating system file, the compiler vendor must provide routines to manipulate it (create, copy, move etc.) as a single entity...system, to support fault tolerance, load sharing, change of system operating mode etc. It is highly desirable that such important software be written in

  3. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  4. Ada Compiler Validation Summary Report: Certificate Number: 900121S1. 10251 Computer Sciences Corporation MC Ada V1.2.Beta/Concurrent Computer Corporation Concurrent/Masscomp 5600 Host To Concurrent/Masscomp 5600 (Dual 68020 Processor Configuration) Target

    DTIC Science & Technology

    1990-04-23

developed Ada Real-Time Operating System (ARTOS) for bare machine environments (Target), ACW 1.1I0. ... SUBJECT TERMS: Ada programming language, Ada...configuration) Operating System: CSC developed Ada Real-Time Operating System (ARTOS) for bare machine environments Memory Size: 4MB 2.2...Test Method: Testing of the MC Ada V1.2.beta/Concurrent Computer Corporation compiler and the CSC developed Ada Real-Time Operating System (ARTOS) for

  5. How do I resolve problems reading the binary data?

    Atmospheric Science Data Center

    2014-12-08

... affecting compilation would be differing versions of the operating system and compilers the read software is being run on. Big ... Unix machines are Big Endian architecture while Linux systems are Little Endian architecture. Data generated on a Unix machine are ...
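The byte-order mismatch described here can be sketched with Python's `struct` module, whose explicit format prefixes (`>` big-endian, `<` little-endian) make reads independent of the host CPU. The record layout below is hypothetical, not the ASDC's actual data format:

```python
import struct

# Hypothetical record: three 4-byte IEEE floats written on a big-endian
# Unix machine. Packing with ">" simulates the file's on-disk bytes.
big_endian_bytes = struct.pack(">3f", 1.0, 2.5, -3.25)

# Unpacking with the same explicit ">" prefix recovers the values on any
# architecture; relying on native order ("@") would garble them on a
# little-endian Linux host.
values = struct.unpack(">3f", big_endian_bytes)
print(values)  # (1.0, 2.5, -3.25)
```

Read software that declares the byte order explicitly in this way compiles and runs identically on both architectures.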

  6. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find a significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  7. Programs for Testing Processor-in-Memory Computing Systems

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.

    2006-01-01

The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
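As a loose analogue (not the actual PIM microbenchmarks), the kind of basic-functionality multithreaded check that begins such a series can be sketched with Python's `threading` module, which here plays the role pthreads plays on non-PIM hardware:

```python
import threading

# Minimal sketch: each thread sums a disjoint slice of an array, and the
# per-thread partial results are combined at the end. Verifying that the
# combined result matches the single-threaded sum is the simplest kind of
# correctness check a benchmark series can start with.
data = list(range(1000))
num_threads = 4
partial = [0] * num_threads

def worker(tid):
    chunk = len(data) // num_threads
    partial[tid] = sum(data[tid * chunk:(tid + 1) * chunk])

threads = [threading.Thread(target=worker, args=(t,)) for t in range(num_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(partial)
print(total)  # 499500, matching the single-threaded sum of 0..999
```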

  8. Timing characterization and analysis of the Linux-based, closed loop control computer for the Subaru Telescope laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Dinkins, Matthew; Colley, Stephen

    2008-07-01

Hardware and software specialized for real time control reduce the timing jitter of executables when compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine if such a system would be capable of meeting the timing requirements for closed loop operations. Investigations are performed on the effectiveness of tools designed to make off-the-shelf system performance closer to specialized real time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.
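The kind of jitter characterization this study performs can be sketched as follows (in Python rather than the compiled C tooling a real evaluation would use; the workload and sample count are arbitrary choices for illustration):

```python
import time

# Repeatedly time a fixed-length workload and report the spread between the
# fastest and slowest iterations. On a stock OS this spread (the jitter) is
# what real-time extensions and compiler choices attempt to reduce.
def workload():
    s = 0
    for i in range(10000):
        s += i
    return s

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    workload()
    samples.append(time.perf_counter() - t0)

jitter = max(samples) - min(samples)
print(f"mean {sum(samples) / len(samples):.6f}s  jitter {jitter:.6f}s")
```

A closed loop controller passes such a test only if the worst-case iteration, not just the mean, stays within the loop period.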

  9. SUMC/MPOS/HAL interface study

    NASA Technical Reports Server (NTRS)

    Saponaro, J. A.; Kosmala, A. L.

    1973-01-01

    The implementation of the HAL/S language on the IBM-360, and in particular the mechanization of its real time, I/O, and error control statements within the OS-360 environment is described. The objectives are twofold: (1) An analysis and general description of HAL/S real time, I/O, and error control statements and the structure required to mechanize these statements. The emphasis is on describing the logical functions performed upon execution of each HAL statement rather than defining whether it is accomplished by the compiler or operating system. (2) An identification of the OS-360 facilities required during execution of HAL/S code as implemented for the current HAL/S-360 compiler; and an evaluation of the aspects involved with interfacing HAL/S with the SUMC operating system utilizing either the HAL/S-360 compiler or by designing a new HAL/S-SUMC compiler.

  10. VizieR Online Data Catalog: Habitable zones around main-sequence stars (Kopparapu+, 2014)

    NASA Astrophysics Data System (ADS)

    Kopparapu, R. K.; Ramirez, R. M.; Schottelkotte, J.; Kasting, J. F.; Domagal-Goldman, S.; Eymet, V.

    2017-08-01

    Language: Fortran 90 Code tested under the following compilers/operating systems: ifort/CentOS linux Description of input data: No input necessary. Description of output data: Output files: HZs.dat, HZ_coefficients.dat System requirements: No major system requirement. Fortran compiler necessary. Calls to external routines: None. Additional comments: None (1 data file).

  11. HAL/S-360 compiler test activity report

    NASA Technical Reports Server (NTRS)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  12. A compiler and validator for flight operations on NASA space missions

    NASA Astrophysics Data System (ADS)

    Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina

    2016-07-01

In NASA missions, the management and programming of the flight systems is performed with a specific scripting language, the SASF (Spacecraft Activity Sequence File). In order to check syntax and grammar, a compiler is needed that flags any errors found in the sequence file produced for an instrument on board the flight system. In our experience on the Dawn mission, we developed VIRV (VIR Validator), a tool that checks the syntax and grammar of SASF, runs simulations of VIR acquisitions, and flags any violations of the flight rules by the sequences produced. The project of a SASF compiler (SSC - Spacecraft Sequence Compiler) is ready for a new implementation: generalization to different NASA missions. In fact, VIRV is a compiler for a dialect of SASF; it includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument has a library to be introduced into the compiler. The SSC can analyze a SASF, produce a log of events, perform a simulation of the instrument acquisition, and check the flight rules for the selected instrument. The output of the program can be produced in GRASS GIS format and may help the operator analyze the geometry of the acquisition.

  13. Utilizing automatic identification tracking systems to compile operational field and structure data : [research summary].

    DOT National Transportation Integrated Search

    2014-05-01

The federally mandated materials clearance process requires state transportation agencies to subject all construction field samples to quality control/assurance testing in order to pass standardized state inspections....

  14. Ada 9X Project Report, A Study of Implementation-Dependent Pragmas and Attributes in Ada

    DTIC Science & Technology

    1989-11-01

here communications with the vendor were often required to firmly establish the behavior of some implementation-dependent features CMU-SEI-SR-89-19 3 2.2...compilers), by potential market penetration (percent coverage of all surveyed implementations), and by cross-compiler influence (percentage of cross...operations in the context of a tightly integrated development environment, specific underlying operating system services (beneath the Ada run-time kernel

  15. JANUS: A Compilation System for Balancing Parallelism and Performance in OpenVX

    NASA Astrophysics Data System (ADS)

    Omidian, Hossein; Lemieux, Guy G. F.

    2018-04-01

Embedded systems typically do not have enough on-chip memory for an entire image buffer. Programming systems like OpenCV operate on entire image frames at each step, making them use excessive memory bandwidth and power. In contrast, the paradigm used by OpenVX is much more efficient; it uses image tiling, and the compilation system is allowed to analyze and optimize the operation sequence, specified as a compute graph, before doing any pixel processing. In this work, we are building a compilation system for OpenVX that can analyze and optimize the compute graph to take advantage of parallel resources in many-core systems or FPGAs. Using a database of prewritten OpenVX kernels, it automatically adjusts the image tile size as well as using kernel duplication and coalescing to meet a defined area (resource) target, or to meet a specified throughput target. This allows a single compute graph to target implementations with a wide range of performance needs or capabilities, e.g. from handheld to datacenter, that use minimal resources and power to reach the performance target.

  16. Water Quality in Small Community Distribution Systems. A Reference Guide for Operators

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has developed this reference guide to assist the operators and managers of small- and medium-sized public water systems. This compilation provides a comprehensive picture of the impact of the water distribution system network on dist...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao

    Sparx, a new environment for Cryo-EM image processing; Cryo-EM, Single particle reconstruction, principal component analysis; Hardware Req.: PC, MAC, Supercomputer, Mainframe, Multiplatform, Workstation. Software Req.: operating system is Unix; Compiler C++; type of files: source code, object library, executable modules, compilation instructions; sample problem input data. Location/transmission: http://sparx-em.org; User manual & paper: http://sparx-em.org;

  18. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
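Assuming statistically independent basic events with known failure probabilities, the arithmetic behind the five gate types can be sketched as follows (a hypothetical illustration, not FTC's actual FORTRAN solution technique):

```python
from itertools import combinations

def p_and(ps):        # AND gate: all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(ps):         # OR gate: at least one input fails
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_xor(p1, p2):    # EXCLUSIVE OR gate: exactly one of two inputs fails
    return p1 * (1.0 - p2) + (1.0 - p1) * p2

def p_invert(p):      # INVERT gate: logical NOT
    return 1.0 - p

def p_m_of_n(ps, m):  # M OF N gate: at least m of n inputs fail (enumeration)
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1.0 - ps[i])
            total += term
    return total

# Hypothetical top event: (A AND B) OR (2-of-3 over C, D, E)
top = p_or([p_and([0.01, 0.02]), p_m_of_n([0.1, 0.1, 0.1], 2)])
print(round(top, 6))
```

Gates compose hierarchically in the same way FTC's hierarchical fault-tree definitions do: the output probability of one gate becomes an input probability of its parent.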

  19. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for VM/CMS, Version 1.0. IBM 4381 (IBM System/370) under VM/CMS.

    DTIC Science & Technology

    1986-04-29

COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for VM/CMS, Version 1.0 IBM 4381...tested using command scripts provided by International Business Machines Corporation. These scripts were reviewed by the validation team. Tests were run...s): IBM 4381 (System/370) Operating System: VM/CMS, release 3.6 International Business Machines Corporation has made no deliberate extensions to the

  20. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  1. HAL/S-360 compiler system specification

    NASA Technical Reports Server (NTRS)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  2. HAL/S - The programming language for Shuttle

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1974-01-01

    HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.

  3. Module generation for self-testing integrated systems

    NASA Astrophysics Data System (ADS)

    Vanriessen, Ronald Pieter

Hardware used for self test in VLSI (Very Large Scale Integrated) systems is reviewed, and an architecture to control the test hardware in an integrated system is presented. Because of the increase of test times, the use of self test techniques has become practically and economically viable for VLSI systems. Besides the reduction in test times and costs, self test also provides testing at operational speeds. Therefore, a suitable combination of scan path and macrospecific (self) tests is required to reduce test times and costs. An expert system that can be used in a silicon compilation environment is presented. The approach requires a minimum of testability knowledge from a system designer. A user friendly interface is described for specifying and modifying testability requirements by a testability expert. A reason directed backtracking mechanism is used to solve selection failures. Both the hierarchical testable architecture and the design for testability expert system are used in a self test compiler. The definition of a self test compiler is given. A self test compiler is a software tool that selects an appropriate test method for every macro in a design. The hardware to control a macro test will be included in the design automatically. As an example, the integration of the self-test compiler in the silicon compilation system PIRAMID is described. The design of a demonstrator circuit by the self test compiler is described. This circuit consists of two self testable macros. Control of the self test hardware is carried out via the test access port of the boundary scan standard.

  4. Emergency Planning for Municipal Wastewater Treatment Facilities.

    ERIC Educational Resources Information Center

    Lemon, R. A.; And Others

    This manual for the development of emergency operating plans for municipal wastewater treatment systems was compiled using information provided by over two hundred municipal treatment systems. It covers emergencies caused by natural disasters, civil disorders and strikes, faulty maintenance, negligent operation, and accidents. The effects of such…

  5. Timeliner: Automating Procedures on the ISS

    NASA Technical Reports Server (NTRS)

    Brown, Robert; Braunstein, E.; Brunet, Rick; Grace, R.; Vu, T.; Zimpfer, Doug; Dwyer, William K.; Robinson, Emily

    2002-01-01

    Timeliner has been developed as a tool to automate procedural tasks. These tasks may be sequential tasks that would typically be performed by a human operator, or precisely ordered sequencing tasks that allow autonomous execution of a control process. The Timeliner system includes elements for compiling and executing sequences that are defined in the Timeliner language. The Timeliner language was specifically designed to allow easy definition of scripts that provide sequencing and control of complex systems. The execution environment provides real-time monitoring and control based on the commands and conditions defined in the Timeliner language. The Timeliner sequence control may be preprogrammed, compiled from Timeliner "scripts," or it may consist of real-time, interactive inputs from system operators. In general, the Timeliner system lowers the workload for mission or process control operations. In a mission environment, scripts can be used to automate spacecraft operations including autonomous or interactive vehicle control, performance of preflight and post-flight subsystem checkouts, or handling of failure detection and recovery. Timeliner may also be used for mission payload operations, such as stepping through pre-defined procedures of a scientific experiment.

  6. Evaluation of HAL/S language compilability using SAMSO's Compiler Writing System (CWS)

    NASA Technical Reports Server (NTRS)

    Feliciano, M.; Anderson, H. D.; Bond, J. W., III

    1976-01-01

    NASA/Langley is engaged in a program to develop an adaptable guidance and control software concept for spacecraft such as shuttle-launched payloads. It is envisioned that this flight software be written in a higher-order language, such as HAL/S, to facilitate changes or additions. To make this adaptable software transferable to various onboard computers, a compiler writing system capability is necessary. A joint program with the Air Force Space and Missile Systems Organization was initiated to determine if the Compiler Writing System (CWS) owned by the Air Force could be utilized for this purpose. The present study explores the feasibility of including the HAL/S language constructs in CWS and the effort required to implement these constructs. This will determine the compilability of HAL/S using CWS and permit NASA/Langley to identify the HAL/S constructs desired for their applications. The study consisted of comparing the implementation of the Space Programming Language using CWS with the requirements for the implementation of HAL/S. It is the conclusion of the study that CWS already contains many of the language features of HAL/S and that it can be expanded for compiling part or all of HAL/S. It is assumed that persons reading and evaluating this report have a basic familiarity with (1) the principles of compiler construction and operation, and (2) the logical structure and applications characteristics of HAL/S and SPL.

  7. Kernel and System Procedures in Flex.

    DTIC Science & Technology

    1983-08-01

System procedures on which the operating system for the Flex computer is based. These are the low level procedures which are used to implement the compilers, file-store, command interpreters etc. on Flex. ... privileged mode. They form the interface between the user and a particular operating system written on top of the Kernel.

  8. Python based high-level synthesis compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

This paper presents a Python based High-Level Synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation and first results of the created Python based compiler.

  9. Proving Correctness for Pointer Programs in a Verifying Compiler

    NASA Technical Reports Server (NTRS)

    Kulczycki, Gregory; Singh, Amrinder

    2008-01-01

    This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden to the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove-with as little human intervention as possible-that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission critical systems.

  10. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: the flexible granularity model, which allows a compromise between two extreme granularity models; the communication model, which is capable of precisely describing interprocessor communication timings and patterns; the loop type detection strategy, which identifies different types of loops; the critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and the loop allocation strategy, which realizes optimum overlapped operation between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that the automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedup on up to a 28- to 32-processor system. A comparison of parallel codes for both the existing and proposed communication models is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  11. Extension of Alvis compiler front-end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr, E-mail: mwypych@agh.edu.pl, E-mail: mszpyrka@agh.edu.pl, E-mail: ptm@agh.edu.pl

    2015-12-31

Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model finds expression in an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and operations on them. Thanks to the compiler's modular construction, many aspects of compilation of an Alvis model may be modified. Providing new plugins for the Alvis Compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler internal model and describes how the default specification language can be altered by new plugins.

  12. The integrated business information system: using automation to monitor cost-effectiveness of park operations

    Treesearch

    Dick Stanley; Bruce Jackson

    1995-01-01

    The cost-effectiveness of park operations is often neglected because information is laborious to compile. The information, however, is critical if we are to derive maximum benefit from scarce resources. This paper describes an automated system for calculating cost-effectiveness ratios with minimum effort using data from existing data bases.

  13. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

This paper presents a Python to VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation and results of the created tools.

  14. Adaptive Boiler Controls: Market Survey and Appraisal of a Prototype System

    DTIC Science & Technology

    1994-06-01

be considered. Operating staff sizes and experience are declining. Operators often lack experience and expertise to make the overall operating...Quick C, V2.5 or higher or b. Aztec CBS Compiler, v4.2 or higher $1,000 7. Uninterruptable Power Supply (for example, Superior Electric Company Model

  15. Compilation of Trade Studies for the Constellation Program Extravehicular Activity Spacesuit Power System

    NASA Technical Reports Server (NTRS)

    Fincannon, James

    2009-01-01

    This compilation of trade studies performed from 2005 to 2006 addressed a number of power system design issues for the Constellation Program Extravehicular Activity Spacesuit. Spacesuits were required for spacewalks and in-space activities as well as lunar and Mars surface operations. The trades documented here considered whether solar power was feasible for spacesuits, whether spacesuit power generation should be a distributed or a centralized function, whether self-powered in-space spacesuits were better than umbilically powered ones, and whether the suit power system should be recharged in place or replaced.

  16. Ada (Trade Name) Compiler Validation Summary Report: International Business Machines Corporation. IBM Development System for the Ada Language System, Version 1.1.0, IBM 4381 under VM/SP CMS Host, IBM 4381 under MVS Target

    DTIC Science & Technology

    1988-05-20

AVF Control Number: AVF-VSR-84.1087, 87-03-10-TEL ... Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM...System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS, Release 3.6 (host) and IBM 4381...an IBM 4381 operating under MVS, Release 3.8. On-site testing was performed 18 May 1987 through 20 May 1987 at International Business Machines

  17. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based system is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  18. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  19. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  20. HAL/S-360-user's manual

    NASA Technical Reports Server (NTRS)

    Kole, R. E.; Helmers, P. H.; Hotz, R. L.

    1974-01-01

This is a reference document to be used in the process of getting HAL/S programs compiled and debugged on the IBM 360 computer. Topics ranging from operating-system communication to the interpretation of debugging aids are discussed. Features of the HAL programming system that have specific System/360 dependencies are presented.

  1. A cognitive operating system (COGNOSYS) for JPL's robot, phase 1 report

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

The most important software requirement for any robot development is the COGNitive Operating SYStem (COGNOSYS). This report describes the Stanford University Artificial Intelligence Laboratory's hand-eye software system from the point of view of developing a cognitive operating system for JPL's robot. In Phase 1 of the JPL robot COGNOSYS task, the installation of a SAIL compiler and a FAIL assembler on Caltech's PDP-10 was accomplished, and guidelines were prepared for the implementation of a Stanford University-type hand-eye software system on the JPL-Caltech computing facility. The alternatives offered by using RAND-USC's PDP-10 TENEX operating system are also considered.

  2. Bibliography On Multiprocessors And Distributed Processing

    NASA Technical Reports Server (NTRS)

    Miya, Eugene N.

    1988-01-01

The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to supporting the usual keyword searches, is used for producing citations, indexes, and cross-references. The data base contains UNIX(R) "refer"-formatted ASCII data and can be implemented on any computer running under the UNIX(R) operating system. It is easily convertible to other operating systems and requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.
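The "refer" format mentioned here stores each citation as a block of percent-key fields. A minimal illustrative record is sketched below; the field keys (%A author, %T title, %D date, %K keywords) are the standard refer keys, but the entry itself is hypothetical, not taken from the bibliography:

```
%A J. Doe
%T An Example Paper on Multiprocessor Scheduling
%D 1985
%K multiprocessors, distributed processing
```

Tools in the traditional refer suite, such as refer(1) and lookbib(1), search and format records of this shape for citation and index production.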

  3. Ground Operations Aerospace Language (GOAL). Volume 2: Compiler

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

  4. Human-computer interaction in distributed supervisory control tasks

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1989-01-01

    An overview of activities concerned with the development and applications of the Operator Function Model (OFM) is presented. The OFM is a mathematical tool to represent operator interaction with predominantly automated space ground control systems. The design and assessment of an intelligent operator aid (OFMspert and Ally) is particularly discussed. The application of OFM to represent the task knowledge in the design of intelligent tutoring systems, designated OFMTutor and ITSSO (Intelligent Tutoring System for Satellite Operators), is also described. Viewgraphs from symposia presentations are compiled along with papers addressing the intent inferencing capabilities of OFMspert, the OFMTutor system, and an overview of intelligent tutoring systems and the implications for complex dynamic systems.

  5. Praxis language reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, J.H.

    1981-01-01

This document is a language reference manual for the programming language Praxis. The document contains the specifications that must be met by any compiler for the language. The Praxis language was designed for systems programming in real-time process applications. Goals for the language and its implementations are: (1) highly efficient code generated by the compiler; (2) program portability; (3) completeness, that is, all programming requirements can be met by the language without needing an assembler; and (4) separate compilation to aid in design and management of large systems. The language does not provide any facilities for input/output, stack and queue handling, string operations, parallel processing, or coroutine processing. These features can be implemented as routines in the language, using machine-dependent code to take advantage of facilities in the control environment on different machines.

  6. Photovoltaic Systems Test Facilities: Existing capabilities compilation

    NASA Technical Reports Server (NTRS)

    Volkmer, K.

    1982-01-01

    A general description of photovoltaic systems test facilities (PV-STFs) operated under the U.S. Department of Energy's photovoltaics program is given. Descriptions of a number of privately operated facilities having test capabilities appropriate to photovoltaic hardware development are given. A summary of specific, representative test capabilities at the system and subsystem level is presented for each listed facility. The range of system and subsystem test capabilities available to serve the needs of both the photovoltaics program and the private sector photovoltaics industry is given.

  7. Laboratory process control using natural language commands from a personal computer

    NASA Technical Reports Server (NTRS)

    Will, Herbert A.; Mackin, Michael A.

    1989-01-01

PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC and suitable hardware interfaces to all controlled devices. Required software includes the Microsoft Disk Operating System (MS-DOS), a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given, as well as a description of an application of the system.

  8. 76 FR 3113 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    ... Force's notices for systems of records subject to the Privacy Act of 1974 (5 U.S.C. 552a), as amended... Center; major commands; field operating agencies; Military Personnel Sections at Air Force installations... mailing addresses are published as an appendix to the Air Force's compilation of systems of records...

  9. MCR Container Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P

MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e., shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find/install the necessary libraries either from runtime errors resulting from their absence or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address these shortcomings, Docker Images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where MATLAB Compiler-created binaries can be executed using a sample Slurm Workload Manager script.

  10. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 1.

    DTIC Science & Technology

    1982-11-01

Avionic Systems Integration Facilities, Mark van den Broek and Paul M. Vicen, AFLC/LOE. Planning of Operational Software Implementation Tool... classified as software tools, including: operating system; language processors (compilers, assemblers, link editors); source editors; debug systems; data base systems; utilities; etc. This talk addresses itself to the current set of tools provided JOVIAL J73 1750A application programmers by...

  11. Abstract of operations - boats automated reporting system 1.0 : installation and maintenance guide version 1.0 January 1995

    DOT National Transportation Integrated Search

    1995-01-01

The AOPS Boats system was developed to assist you in compiling your quarterly AOPS data and sending it to Headquarters. An additional component was designed solely for field use to help the station track certification dates by training activities on l...

  12. A Compilation of Boiling Water Reactor Operational Experience for the United Kingdom's Office for Nuclear Regulation's Advanced Boiling Water Reactor Generic Design Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Timothy A.; Liao, Huafei

    2014-12-01

United States nuclear power plant Licensee Event Reports (LERs), submitted to the United States Nuclear Regulatory Commission (NRC) as required by 10 CFR 50.72 and 50.73, were evaluated for relevance to the United Kingdom’s Health and Safety Executive – Office for Nuclear Regulation’s (ONR) generic design assessment of the Advanced Boiling Water Reactor (ABWR) design. An NRC compendium of LERs, compiled by Idaho National Laboratory over the period January 1, 2000 through March 31, 2014, was sorted by BWR safety system and grouped into two categories: events leading to a SCRAM, and events which constituted a safety system failure. The LERs were then evaluated as to the relevance of the operational experience to the ABWR design.

  13. A survey of current operational problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prince, W.R.; Nielsen, E.K.; McNair, H.D.

    1989-11-01

    This paper is prepared for use in the Working Group on Current Operational Problems (COPS) forums with the goal of focusing attention of the industry on problems faced by those who are involved in actual power system operation. The results of a survey on operational problems are presented in this paper. Statistical information compiled for various categories of operational problems is given with some general observations about the results. A rough comparison is made from the results of this survey and the first COPS problem list of 1976.

  14. Common spaceborne multicomputer operating system and development environment

    NASA Technical Reports Server (NTRS)

    Craymer, L. G.; Lewis, B. F.; Hayes, P. J.; Jones, R. L.

    1994-01-01

    A preliminary technical specification for a multicomputer operating system is developed. The operating system is targeted for spaceborne flight missions and provides a broad range of real-time functionality, dynamic remote code-patching capability, and system fault tolerance and long-term survivability features. Dataflow concepts are used for representing application algorithms. Functional features are included to ensure real-time predictability for a class of algorithms which require data-driven execution on an iterative steady state basis. The development environment supports the development of algorithm code, design of control parameters, performance analysis, simulation of real-time dataflow applications, and compiling and downloading of the resulting application.

  15. Scientific and Technical Publishing at Goddard Space Flight Center in Fiscal Year 1994

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This publication is a compilation of scientific and technical material that was researched, written, prepared, and disseminated by the Center's scientists and engineers during FY94. It is presented in numerical order of the GSFC author's sponsoring technical directorate; i.e., Code 300 is the Office of Flight Assurance, Code 400 is the Flight Projects Directorate, Code 500 is the Mission Operations and Data Systems Directorate, Code 600 is the Space Sciences Directorate, Code 700 is the Engineering Directorate, Code 800 is the Suborbital Projects and Operations Directorate, and Code 900 is the Earth Sciences Directorate. The publication database contains publication or presentation title, author(s), document type, sponsor, and organizational code. This is the second annual compilation for the Center.

  16. Summary Report of Journal Operations, 2016.

    PubMed

    2017-01-01

    Presents a summary report of journal operations compiled from the 2016 annual reports of the Council of Editors and from Central Office records. Also includes a summary report of division journal operations compiled from the 2016 annual reports of the division journal editors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, Abraham Anthony

EPOXY is an LLVM-based compiler that applies security protections to bare-metal programs on ARM Cortex-M series microcontrollers. These include privilege overlaying, wherein operations requiring privileged execution are identified and only those operations execute in privileged mode. EPOXY also applies code integrity, control-flow hijacking defenses, stack protections, and fine-grained randomization schemes. All of its protections work within the constraints of bare-metal systems.

  18. Programming Languages.

    ERIC Educational Resources Information Center

    Tesler, Lawrence G.

    1984-01-01

    Discusses the nature of programing languages, considering the features of BASIC, LOGO, PASCAL, COBOL, FORTH, APL, and LISP. Also discusses machine/assembly codes, the operation of a compiler, and trends in the evolution of programing languages (including interest in notational systems called object-oriented languages). (JN)

  19. Mission Critical Computer Resources Management Guide

    DTIC Science & Technology

    1988-09-01

[OCR fragments of figures listing software support environment elements: analyzers, generators, word processors, workbench environments, compilers, operating system functions, spec libraries, trackers] ... as shown in Figure 13-2. In this model, showrooms of larger, more capable pieces are developed off-line for later integration and use in multiple systems.

  20. Turning a remotely controllable observatory into a fully autonomous system

    NASA Astrophysics Data System (ADS)

    Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael

    2014-08-01

We describe the complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system, which observes without an observer. For this purpose we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, JQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field facing such challenging tasks as fully autonomous observatory operation.

  1. Automatic recognition of vector and parallel operations in a higher level language

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1971-01-01

    A compiler for recognizing statements of a FORTRAN program which are suited for fast execution on a parallel or pipeline machine such as Illiac-4, Star or ASC is described. The technique employs interval analysis to provide flow information to the vector/parallel recognizer. Where profitable the compiler changes scalar variables to subscripted variables. The output of the compiler is an extension to FORTRAN which shows parallel and vector operations explicitly.
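The transformation such a recognizer performs can be illustrated by a scalar loop and its equivalent explicit vector form. The sketch below uses NumPy purely as an illustration, not the paper's extended FORTRAN:

```python
import numpy as np

a = np.arange(8.0)
b = np.arange(8.0)

# Scalar form: one element per iteration, as written by the programmer.
c_scalar = np.empty(8)
for i in range(8):
    c_scalar[i] = a[i] + 2.0 * b[i]

# Vector form: the same computation expressed as one whole-array
# operation, the kind of explicit parallel statement the compiler emits.
c_vector = a + 2.0 * b

assert np.allclose(c_scalar, c_vector)
```

The recognizer's job is to prove that no iteration of the loop depends on a value produced by an earlier iteration, so that the loop body can be hoisted into a single array expression like `c_vector`.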

  2. Architecture for spacecraft operations planning

    NASA Technical Reports Server (NTRS)

    Davis, William S.

    1991-01-01

A system which generates plans for the dynamic environment of space operations is discussed. This system synthesizes plans by combining known operations under a set of physical, functional, and temporal constraints from various plan entities, which are modeled independently but combine in a flexible manner to suit dynamic planning needs. This independence allows the generation of a single plan source which can be compiled and applied to a variety of agents. The architecture blends elements of temporal logic, nonlinear planning, and object-oriented constraint modeling to achieve its flexibility. The system was applied to the domain of Intravehicular Activity (IVA) maintenance and repair aboard the Space Station Freedom testbed.

  3. Ground Software Maintenance Facility (GSMF) user's manual. Appendices NASA-CR-178806 NAS 1.26:178806 Rept-41849-G159-026-App HC A05/MF A01

    NASA Technical Reports Server (NTRS)

    Aquila, V.; Derrig, D.; Griffith, G.

    1986-01-01

    Procedures are presented that allow the user to assemble tasks, link, compile, backup the system, generate/establish/print display pages, cancel tasks in memory, and to TET an assembly task without having to enter the commands every time. A list of acronyms is provided. Software identification, payload checkout unit operating system services, data base generation, and MITRA operating procedures are also discussed.

  4. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package, and the restrictions and dependencies of the HAL/S-FC system, are also considered.

  5. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
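The selection rule the abstract describes - each compiling node keeps the artifacts targeted at itself and forwards to a next-tier node only those destined for that node or its descendants - can be sketched as follows. The class and function names here are illustrative, not taken from the patent:

```python
class Node:
    """A node in the hierarchical distributed processing system."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.received = []  # compiled artifacts this node will execute

    def descendants(self):
        for child in self.children:
            yield child
            yield from child.descendants()


def distribute(node, artifacts):
    """Distribute (target_name, payload) pairs down the hierarchy.

    The node keeps artifacts addressed to it, then sends each child
    only the subset addressed to that child or its descendants.
    """
    node.received = [payload for target, payload in artifacts
                     if target == node.name]
    for child in node.children:
        names = {child.name} | {d.name for d in child.descendants()}
        subset = [(t, p) for t, p in artifacts if t in names]
        if subset:
            distribute(child, subset)


# Usage: a three-tier hierarchy with two leaves.
leaf1, leaf2 = Node("leaf1"), Node("leaf2")
mid = Node("mid", [leaf1, leaf2])
root = Node("root", [mid])
distribute(root, [("root", "a.out"), ("leaf2", "b.out")])
```

After the call, `root` holds only `a.out`, `leaf2` holds only `b.out`, and `leaf1` receives nothing, mirroring the claim that a node is sent only the compiled software it or its descendants will execute.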

  6. 22 CFR 171.11 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Privacy Act, E.O. 12958, and the Ethics in Government Act; (e) Record means all information under the... compilation would significantly interfere with the operation of the Department's automated information systems... Relations DEPARTMENT OF STATE ACCESS TO INFORMATION AVAILABILITY OF INFORMATION AND RECORDS TO THE PUBLIC...

  7. Preliminary study for a numerical aerodynamic simulation facility. Phase 1: Extension

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.

    1978-01-01

    Functional requirements and preliminary design data were identified for use in the design of all system components and in the construction of a facility to perform aerodynamic simulation for airframe design. A skeleton structure of specifications for the flow model processor and monitor, the operating system, and the language and its compiler is presented.

  8. Migration of legacy mumps applications to relational database servers.

    PubMed

    O'Kane, K C

    2001-07-01

An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.

  9. Hybrid Applications Of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Borchardt, Gary C.

    1988-01-01

STAR, the Simple Tool for Automated Reasoning, is an interactive, interpreted programming language for the development and operation of artificial-intelligence application systems. It couples symbolic processing with compiled-language functions and data structures. STAR is written in the C language and currently available in a UNIX version (NPO-16832) and a VMS version (NPO-16965).

  10. Summary report of journal operations, 2012.

    PubMed

    2013-01-01

    Presents the summary reports of American Psychological Association journal operations (compiled from the 2012 annual reports of the Council of Editors and from Central Office records) and Division journal operations (compiled from the 2012 annual reports of the Division journal editors). The information provided includes number of manuscripts, printed pages, and print subscriptions per journal. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  11. Continued advancement of the programming language HAL to an operational status

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The continued advancement of the programming language HAL to operational status is reported. It is demonstrated that the compiler itself can be written in HAL. A HAL-in-HAL experiment proves conclusively that HAL can be used successfully as a compiler implementation tool.

  12. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  13. Integrating Emerging Data Sources into Operational Practice : Opportunities for Integration of Emerging Data for Traffic Management and TMCs.

    DOT National Transportation Integrated Search

    2017-11-01

    With the emergence of data generated from connected vehicles, connected travelers, and connected infrastructure, the capabilities of traffic management systems or centers (TMCs) will need to be improved to allow agencies to compile and benefit from u...

  14. Decentralized System Control.

    DTIC Science & Technology

    1986-04-01

a Local Area Network Environment. Submitted for publication, 1982. [Barringer 79] Barringer, H., P. C. Capon, and R. Phillips. The Portable Compiling... configuration and hardware. [Chesley 81] Chesley, Harry R., and Bruce V. Hunt. Squire - A Communications-Oriented Operating System. Computer Networks 5(2)... copying the information. Transfers between machines and copying pages as necessary. [Nelson 80] Nelson, Bruce Jay. Remote Procedure Call. PhD Thesis

  15. Approaching mathematical model of the immune network based DNA Strand Displacement system.

    PubMed

    Mardian, Rizki; Sekiyama, Kosuke; Fukuda, Toshio

    2013-12-01

One of the biggest obstacles in molecular programming is that there is still no direct method to compile an existing mathematical model into biochemical reactions in order to solve a computational problem. In this paper, the implementation of a DNA Strand Displacement system based on nature-inspired computation is observed. By using the Immune Network Theory and Chemical Reaction Networks, the compilation of DNA-based operations is defined and the formulation of its mathematical model is derived. Furthermore, the implementation on this system is compared with a conventional implementation using silicon-based programming. From the obtained results, we can see a positive correlation between both. One possible application of this DNA-based model is as a decision-making scheme for an intelligent computer or molecular robot. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. A Lithography-Free and Field-Programmable Photonic Metacanvas.

    PubMed

    Dong, Kaichen; Hong, Sukjoon; Deng, Yang; Ma, He; Li, Jiachen; Wang, Xi; Yeo, Junyeob; Wang, Letian; Lou, Shuai; Tom, Kyle B; Liu, Kai; You, Zheng; Wei, Yang; Grigoropoulos, Costas P; Yao, Jie; Wu, Junqiao

    2018-02-01

    The unique correspondence between mathematical operators and photonic elements in wave optics enables quantitative analysis of light manipulation with individual optical devices. Phase-transition materials are able to provide real-time reconfigurability of these devices, which would create new optical functionalities via (re)compilation of photonic operators, as those achieved in other fields such as field-programmable gate arrays (FPGA). Here, by exploiting the hysteretic phase transition of vanadium dioxide, an all-solid, rewritable metacanvas on which nearly arbitrary photonic devices can be rapidly and repeatedly written and erased is presented. The writing is performed with a low-power laser and the entire process stays below 90 °C. Using the metacanvas, dynamic manipulation of optical waves is demonstrated for light propagation, polarization, and reconstruction. The metacanvas supports physical (re)compilation of photonic operators akin to that of FPGA, opening up possibilities where photonic elements can be field programmed to deliver complex, system-level functionalities. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Pick_sw: a program for interactive picking of S-wave data, version 2.00

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2002-01-01

    Program pick_sw is used to interactively pick travel times from S-wave data. It is assumed that the data are collected using 2 shots of opposite polarity at each shot location. The traces must be in either the SEG-2 format or the SU format. The program is written in the IDL and C programming languages, and the program is executed under the Windows operating system. (The program may also execute under other operating systems like UNIX if the C language functions are re-compiled).

  18. Applications catalog of pyrotechnically actuated devices/systems

    NASA Technical Reports Server (NTRS)

    Seeholzer, Thomas L.; Smith, Floyd Z.; Eastwood, Charles W.; Steffes, Paul R.

    1995-01-01

    A compilation of basic information on pyrotechnically actuated devices/systems used in NASA aerospace and aeronautic applications was formatted into a catalog. The intent is to provide (1) a quick reference digest of the types of operational pyro mechanisms and (2) a source of contacts for further details. Data on these items was furnished by the NASA Centers that developed and/or utilized such devices to perform specific functions on spacecraft, launch vehicles, aircraft, and ground support equipment. Information entries include an item title, user center name, commercial contractor/vendor, identifying part number(s), a basic figure, briefly described purpose and operation, previous usage, and operational limits/requirements.

  19. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries including Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% of improvement on 32-bit machines compared to the former version 6.0. Performance improved from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated to about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further accelerated by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. Resultant performance impact suggests that IA64 and AMD64 architectures are able to fulfill significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks.

  20. STAR- A SIMPLE TOOL FOR AUTOMATED REASONING SUPPORTING HYBRID APPLICATIONS OF ARTIFICIAL INTELLIGENCE (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1994-01-01

    The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K 8-bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.

  1. STAR- A SIMPLE TOOL FOR AUTOMATED REASONING SUPPORTING HYBRID APPLICATIONS OF ARTIFICIAL INTELLIGENCE (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1994-01-01

    The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K 8-bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.
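    The linkage between an interpreter and externally compiled routines that STAR relies on can be illustrated with a modern stand-in; the sketch below (hypothetical, not STAR's actual interface) uses Python's ctypes to delegate one operator of a tiny symbolic evaluator to the compiled C math library.

```python
import ctypes
import ctypes.util

# Load the compiled C math library and declare sqrt's signature --
# a miniature version of an interpreter calling external compiled code.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

def hybrid_eval(expr, env):
    """Evaluate a tiny prefix expression; names are looked up in the
    symbolic environment, and 'sqrt' is delegated to the compiled
    routine instead of being interpreted."""
    op, *args = expr
    vals = [env[a] if isinstance(a, str) else a for a in args]
    if op == "sqrt":
        return libm.sqrt(vals[0])     # call into compiled code
    if op == "+":
        return float(sum(vals))       # interpreted operator
    raise ValueError(f"unknown operator {op!r}")
```

    Calls can flow the other way too (compiled code invoking interpreter callbacks), which is the bidirectional interaction the abstract describes.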

  2. A Proof-Carrying File System

    DTIC Science & Technology

    2009-06-06

    written in Standard ML, and comprises nearly 7,000 lines of code. OpenSSL is used for all cryptographic operations. Because the front end tools are used...be managed. Macrobenchmarks. To understand the performance of PCFS in practice, we also ran two simple macrobenchmarks. The first (called OpenSSL in...the table below), untars the OpenSSL source code, compiles it and deletes it. The other (called Fuse in the table below), performs similar operations

  3. NASA Glenn Steady-State Heat Pipe Code GLENHP: Compilation for 64- and 32-Bit Windows Platforms

    NASA Technical Reports Server (NTRS)

    Tower, Leonard K.; Geng, Steven M.

    2016-01-01

    A new version of the NASA Glenn Steady State Heat Pipe Code, designated "GLENHP," is introduced here. This represents an update to the disk operating system (DOS) version LERCHP reported in NASA/TM-2000-209807. The new code operates on 32- and 64-bit Windows-based platforms from within the 32-bit command prompt window. An additional evaporator boundary condition and other features are provided.

  4. SS/RCS surface tension propellant acquisition/expulsion tankage technology program

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An evaluation of published propellant physical property data, together with bubble point tests of fine-mesh screen in propellants, was conducted. The effort consisted of: (1) the collection and evaluation of pertinent physical property data for hydrazine (N2H4), monomethylhydrazine (MMH), and nitrogen tetroxide (N2O4); (2) testing to determine the effect of dissolved pressurant gas, temperature, purity, and system cleanliness or contamination on system bubble point; and (3) the compilation and publishing of both the literature and test results. The space shuttle reaction control system (SS/RCS) is a bipropellant system using N2O4 and MMH, while the auxiliary power system (SS/APU) employs monopropellant N2H4. Since both the RCS and the APU use a surface tension device for propellant acquisition, the propellant properties of interest are those which impact the design and operation of surface tension systems. Information on propellant density, viscosity, surface tension, and contact angle was collected, compiled, and evaluated.

  5. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.

  6. ART/Ada design project, phase 1: Project plan

    NASA Technical Reports Server (NTRS)

    Allen, Bradley P.

    1988-01-01

    The plan and schedule for Phase 1 of the Ada based ESBT Design Research Project is described. The main platform for the project is a DEC Ada compiler on VAX mini-computers and VAXstations running the Virtual Memory System (VMS) operating system. The Ada effort and lines of code are given in tabular form. A chart is given of the entire project life cycle.

  7. The development of a multi-target compiler-writing system for flight software development

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Donegan, M. K.

    1977-01-01

    A wide variety of systems designed to assist the user in the task of writing compilers has been developed. A survey of these systems reveals that none is entirely appropriate to the purposes of the MUST project, which involves the compilation of one or at most a small set of higher-order languages to a wide variety of target machines offering little or no software support. This requirement dictates that any compiler writing system employed must provide maximal support in the areas of semantics specification and code generation, the areas in which existing compiler writing systems as well as theoretical underpinnings are weakest. This paper describes an ongoing research and development effort to create a compiler writing system which will overcome these difficulties, thus providing a software system which makes possible the fast, trouble-free creation of reliable compilers for a wide variety of target computers.

  8. Real-Time, General-Purpose, High-Speed Signal Processing Systems for Underwater Research. Proceedings of a Working Level Conference held at Supreme Allied Commander, Atlantic Anti-Submarine Warfare Research Center (SACLANTCEN) on 18-21 September 1979. Part 2. Sessions IV to VI.

    DTIC Science & Technology

    1979-12-01

    MASCOT provides: (1) system build software compile-time checks, (2) a run-time supervisor kernel, (3) monitor and... AD-A081 851, SACLANT ASW Research Centre, La Spezia (Italy), December 1979: Real-time, general-purpose, high-speed signal processing systems... Table of Contents (cont'd): Signal processing language and operating system, by S. Weinstein, 23-1 to 23-12; A modular signal

  9. A new model for programming software in body sensor networks.

    PubMed

    de A Barbosa, Talles M G; Sene, Iwens G; da Rocha, Adson F; de O Nascimento, Francisco A A; Carvalho, Joao L A; Carvalho, Hervaldo S

    2007-01-01

    A Body Sensor Network (BSN) must be designed to work autonomously. On the other hand, BSNs need mechanisms that allow changes in their behavior in order to become a clinically useful tool. The purpose of this paper is to present a new programming model for BSN sensor nodes, based on an intelligent intermediate-level compiler. The main purpose of the proposed compiler is to increase the efficiency of system use and to extend the lifetime of the application, considering its requirements, hardware possibilities, and specialist knowledge. With this model it is possible to maintain the autonomous operation capability of the BSN while still offering tools that allow users with little grasp of programming techniques to program these systems.

  10. A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
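    The tree-based function operators described above can be illustrated with a toy 1-D analogue; the sketch below is a simplification (not MADNESS's multiwavelet representation): it adds two adaptively refined functions by recursing over their trees and copying a leaf's value down one level when refinement depths disagree.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A node of a 1-D adaptive tree: a leaf holds a coefficient,
    an interior node holds two children (full binary tree assumed)."""
    coeff: Optional[float] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def refine(n):
    """Toy refinement: copy a leaf's value down one level, standing
    in for the real two-scale transform."""
    return n if n.coeff is None else Node(left=Node(coeff=n.coeff),
                                          right=Node(coeff=n.coeff))

def add(a, b):
    """Recursively add two tree-represented functions, refining
    whichever side is coarser so the recursion stays aligned."""
    if a.coeff is not None and b.coeff is not None:   # leaf + leaf
        return Node(coeff=a.coeff + b.coeff)
    a, b = refine(a), refine(b)
    return Node(left=add(a.left, b.left), right=add(a.right, b.right))
```

    The recursion's shape depends on runtime tree structure, which is exactly why static optimization of such operators is hard and why the paper layers a domain-specific compiler over the runtime.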

  11. Fast interrupt platform for extended DOS

    NASA Technical Reports Server (NTRS)

    Duryea, T. W.

    1995-01-01

    Extended DOS offers the unique combination of a simple operating system which allows direct access to the interrupt tables, 32 bit protected mode access to 4096 MByte address space, and the use of industry standard C compilers. The drawback is that fast interrupt handling requires both 32 bit and 16 bit versions of each real-time process interrupt handler to avoid mode switches on the interrupts. A set of tools has been developed which automates the process of transforming the output of a standard 32 bit C compiler to 16 bit interrupt code which directly handles the real mode interrupts. The entire process compiles one set of source code via a make file, which boosts productivity by making the management of the compile-link cycle very simple. The software components are in the form of classes written mostly in C. A foreground process written as a conventional application which can use the standard C libraries can communicate with the background real-time classes via a message passing mechanism. The platform thus enables the integration of high performance real-time processing into a conventional application framework.
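    The foreground/background message passing described above can be mimicked with a FIFO mailbox between a background producer and a conventional foreground loop; the thread below is a hypothetical stand-in for the real-mode interrupt handler, not how the platform actually dispatches interrupts.

```python
import queue
import threading

# Shared mailbox standing in for the message-passing mechanism between
# the background real-time classes and the foreground application.
mailbox = queue.Queue()

def interrupt_handler(n_events):
    """Background 'interrupt' producer: posts one message per event,
    then a shutdown marker."""
    for i in range(n_events):
        mailbox.put(("tick", i))
    mailbox.put(("done", None))

def foreground_loop():
    """Conventional foreground consumer: drains messages until told
    to stop, free to use standard C-library-style facilities."""
    received = []
    while True:
        kind, payload = mailbox.get()
        if kind == "done":
            return received
        received.append(payload)

t = threading.Thread(target=interrupt_handler, args=(5,))
t.start()
events = foreground_loop()
t.join()
```

    The FIFO decouples the two sides so the background path never blocks on foreground processing, which is the property the platform's real-time handlers need.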

  12. NASA Conference on Aircraft Operating Problems: A Compilation of the Papers Presented

    NASA Technical Reports Server (NTRS)

    1965-01-01

    This compilation includes papers presented at the NASA Conference on Aircraft Operating Problems held at the Langley Research Center on May 10 - 12, 1965. Contributions were made by representatives of the Ames Research Center, the Flight Research Center, and the Langley Research Center of NASA, as well as by representatives of the Federal Aviation Agency.

  13. A Regional, Integrated Monitoring System for the Hydrology of the Pan-Arctic Land Mass

    NASA Technical Reports Server (NTRS)

    Serreze, Mark; Barry, Roger; Nolin, Anne; Armstrong, Richard; Zhang, Ting-Jung; Vorosmarty, Charles; Lammers, Richard; Frolking, Steven; Bromwich, David; McDonald, Kyle

    2005-01-01

    Work under this NASA contract developed a system for monitoring and historical analysis of the major components of the pan-Arctic terrestrial water cycle, known as Arctic-RIMS (Regional Integrated Hydrological Monitoring System for the Pan-Arctic Landmass). The system uses products from EOS-era satellites, numerical weather prediction models, station records and other data sets in conjunction with an atmosphere-land surface water budgeting scheme. The intent was to compile operational (at 1-2 month time lags) gridded fields of precipitation (P), evapotranspiration (ET), P-ET, soil moisture, soil freeze/thaw state, active layer thickness, snow extent and its water equivalent, soil water storage, runoff and simulated discharge, along with estimates of non-closure in the water budget. Using "baseline" water budgeting schemes in conjunction with atmospheric reanalyses and pre-EOS satellite data, water budget fields were compiled to provide historical time series. The goals as outlined in the original proposal can be summarized as follows: 1) Use EOS data to compile hydrologic products for the pan-Arctic terrestrial regions, including snow cover/snow water equivalent (SSM/I, MODIS, AMSR) and near-surface freeze/thaw dynamics (SeaWinds on QuikSCAT and ADEOS II, SSM/I and AMSR). 2) Implement Arctic-RIMS to use EOS data streams, allied fields and hydrologic models to produce outputs that fully characterize pan-Arctic terrestrial and aerological water budgets. 3) Compile hydrologically-based historical products providing a long-term baseline of spatial and temporal variability in the water cycle.

  14. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
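    The conversion of procedural equations into a reactive knowledge network can be sketched as follows; the class and rule names are illustrative, not those of any cited tool. Each rule computes one symbol from others, and setting an input re-fires dependent rules until the network is quiescent.

```python
class KnowledgeNetwork:
    """A minimal reactive network: rules relate symbols, and updating
    any input propagates through the dependency structure."""
    def __init__(self):
        self.values = {}
        self.rules = []            # (output symbol, input symbols, fn)

    def add_rule(self, output, inputs, fn):
        self.rules.append((output, inputs, fn))

    def set(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        changed = True
        while changed:             # fire rules until quiescent
            changed = False
            for out, ins, fn in self.rules:
                if all(i in self.values for i in ins):
                    new = fn(*(self.values[i] for i in ins))
                    if self.values.get(out) != new:
                        self.values[out] = new
                        changed = True

# A procedural model 'area = w*h; cost = area*rate' becomes two rules:
net = KnowledgeNetwork()
net.add_rule("area", ["w", "h"], lambda w, h: w * h)
net.add_rule("cost", ["area", "rate"], lambda a, r: a * r)
net.set("rate", 2.0)
net.set("w", 3.0)
net.set("h", 4.0)
```

    A source-level compiler of the kind described would emit such rules mechanically from the procedural model's statements and symbol table.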

  15. Wave Engine Technology Development

    DTIC Science & Technology

    1984-01-01

    were the usual minor but time consuming problems of converting a program to run on a new computer with a new operating system and Fortran compiler... [figure residue: Exit Port; Wave Field diagrams] ...and the associated port printouts are

  16. Fluid technology (selected components, devices, and systems): A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Developments in fluid technology and hydraulic equipment are presented. The subjects considered are: (1) the use of fluids in the operation of switches, amplifiers, and servo devices, (2) devices and data for laboratory use in the study of fluid dynamics, and (3) the use of fluids as controls and certain methods of controlling fluids.

  17. Development of human factors guidelines for advanced traveler information systems and commercial vehicle operations : definition and prioritization of research studies

    DOT National Transportation Integrated Search

    1997-03-01

    The goal of the activities documented in this report was to produce a prioritized list of candidate studies and issues that would guide data acquisition in this project. This goal was accomplished in three steps. First, 91 issues were compiled from e...

  18. Tiled architecture of a CNN-mostly IP system

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    2009-05-01

    Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain, the problems of scheduling multi-cores have already appeared in tiled architectures such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such an architecture, as historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack together with a 'rotating wheel' internal communication mechanism has been proposed. Though the basic CNN allows for a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system will have to disturb this regularity by the additional need for arithmetical and logical operations. Arithmetic operations are needed, for instance, to accommodate low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still handles such operations without the need for global control. Overall, the CNN system provides for a practical network size as implemented on an FPGA, can be easily used as embedded IP, and provides a clear benchmark for a multi-core compiler.

  19. Marionette

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, M.; Anderson, D.P.

    1988-01-01

    Marionette is a system for distributed parallel programming in an environment of networked heterogeneous computer systems. It is based on a master/slave model. The master process can invoke worker operations (asynchronous remote procedure calls to single slaves) and context operations (updates to the state of all slaves). The master and slaves also interact through shared data structures that can be modified only by the master. The master and slave processes are programmed in a sequential language. The Marionette runtime system manages slave process creation, propagates shared data structures to slaves as needed, queues and dispatches worker and context operations, and manages recovery from slave processor failures. The Marionette system also includes tools for automated compilation of program binaries for multiple architectures, and for distributing binaries to remote file systems. A UNIX-based implementation of Marionette is described.
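    The master/slave pattern of asynchronous worker operations can be sketched with a standard thread pool; this is an analogy only (Marionette dispatches to slave processes on networked machines, not threads), with all names hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Master-owned shared state, propagated to slaves and read-only there.
shared = {"scale": 10}

def worker_op(x):
    """A worker operation: runs asynchronously on one 'slave',
    reading (never writing) the shared state."""
    return x * shared["scale"]

def master(data):
    """Master: dispatch worker operations as asynchronous calls,
    then collect the results in submission order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(worker_op, x) for x in data]  # async RPCs
        return [f.result() for f in futures]

results = master([1, 2, 3])
```

    A context operation would correspond to the master rewriting `shared` between batches, with the runtime responsible for pushing the update to every slave.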

  20. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.
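    Checkpoint placement and rollback recovery can be illustrated with a toy interpreter loop; the sketch below is hypothetical and far simpler than the cache-coherence and instruction-retry schemes described, but it shows the basic contract: checkpoint state at fixed intervals, and after a fault roll back to the latest checkpoint and re-execute.

```python
import copy

def run_with_checkpoints(steps, interval, fail_at=None):
    """Run a simple accumulation, checkpointing state every `interval`
    steps; on a (simulated) one-shot fault, restore the latest
    checkpoint and resume from it."""
    state = {"i": 0, "acc": 0}
    checkpoint = copy.deepcopy(state)
    failed = False
    while state["i"] < steps:
        if state["i"] % interval == 0:
            checkpoint = copy.deepcopy(state)   # checkpoint placement
        if state["i"] == fail_at and not failed:
            failed = True                       # inject one fault
            state = copy.deepcopy(checkpoint)   # rollback recovery
            continue
        state["acc"] += state["i"]              # the 'computation'
        state["i"] += 1
    return state["acc"]
```

    The recovered run produces the same result as a fault-free run; choosing `interval` trades checkpoint overhead against re-execution cost, which is the placement problem a compiler can optimize.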

  1. Snowflake: A Lightweight Portable Stencil DSL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Nathan; Driscoll, Michael; Markley, Charles

    Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.

  2. Snowflake: A Lightweight Portable Stencil DSL

    DOE PAGES

    Zhang, Nathan; Driscoll, Michael; Markley, Charles; ...

    2017-05-01

    Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.
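    A minimal instance of the stencils such a DSL targets is the 2-D 5-point Laplacian with a fixed (Dirichlet) boundary; the NumPy sketch below shows only the computation itself, not Snowflake's code generation.

```python
import numpy as np

def jacobi_step(u, f, h):
    """One Jacobi sweep of the 2-D 5-point Laplacian stencil on the
    interior points; the boundary rows and columns are held fixed
    (a Dirichlet condition, the simplest boundary case a stencil
    compiler must handle)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:] -
                            h * h * f[1:-1, 1:-1])
    return v

u = np.zeros((6, 6))
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1.0   # unit boundary values
v = jacobi_step(u, np.zeros_like(u), 1.0)
```

    Variable coefficients, higher-order operators, and non-unit strides complicate the index expressions considerably, which is the generality the micro-compiler approach is built to generate automatically.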

  3. Lewis hybrid computing system, users manual

    NASA Technical Reports Server (NTRS)

    Bruton, W. M.; Cwynar, D. S.

    1979-01-01

    The Lewis Research Center's Hybrid Simulation Lab contains a collection of analog, digital, and hybrid (combined analog and digital) computing equipment suitable for the dynamic simulation and analysis of complex systems. This report is intended as a guide to users of these computing systems. The report describes the available equipment and outlines procedures for its use. Particular attention is given to the operation of the PACER 100 digital processor. System software to accomplish the usual digital tasks such as compiling and editing, as well as Lewis-developed special purpose software, is described.

  4. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalizing infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: an evaluation function for the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the optimization is accelerated by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computational power of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated through a genetic algorithm (GA), and a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the variation of the MTF decreases markedly, and the operating temperature range of the optimized systems spans -40° to 60°. Results show that, owing to its externally compiled functions and DDE, this optimization method makes it convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties.
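    The optimization loop described above can be sketched end to end with a toy merit function; everything below is hypothetical (the real evaluation function is computed from ZEMAX MTF data over a defocus/temperature range), but it shows a genetic algorithm minimizing the spread of a defocus-dependent response over a scalar mask-strength parameter.

```python
import random

def mtf_consistency(alpha, defocus=(-1.0, 0.0, 1.0)):
    """Toy stand-in for an MTF-consistency merit function: the spread
    of a defocus-dependent response that a phase mask of strength
    `alpha` partially flattens. Lower is more consistent."""
    responses = [1.0 / (1.0 + (d / (1.0 + alpha)) ** 2) for d in defocus]
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses)

def genetic_minimize(fn, lo, hi, pop=20, gens=40, seed=1):
    """Minimal genetic algorithm over one design parameter:
    truncation selection plus Gaussian mutation, clipped to bounds."""
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fn)
        parents = population[: pop // 4]          # keep the fittest
        population = parents + [
            min(hi, max(lo, rng.choice(parents) + rng.gauss(0, 0.2)))
            for _ in range(pop - len(parents))
        ]
    return min(population, key=fn)
```

    In the workflow described, `fn` would instead call out (via DDE) to ZEMAX to trace the system and return the MTF-consistency value for the candidate mask parameters.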

  5. The PASM Parallel Processing System: Hardware Design and Intelligent Operating System Concepts

    DTIC Science & Technology

    1986-07-01

    ...inter-process communication are part of the language, which must be part of the task body; the Fujitsu VP-200 uses 32-bit integers. The compiler actually

  6. On Fusing Recursive Traversals of K-d Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
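    The producer-consumer fusion described above can be shown on a toy tree: two recursive operators (scale, then sum) are combined into one traversal so each node is visited only once. The code below is illustrative, not FuseT output.

```python
class Tree:
    """A k-ary tree node with a numeric value."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def scale(node, s):
    """First traversal (producer): scale every value in place."""
    node.value *= s
    for c in node.children:
        scale(c, s)

def total(node):
    """Second traversal (consumer): sum every value."""
    return node.value + sum(total(c) for c in node.children)

def scale_and_total(node, s):
    """Fused composite operator: one walk performs both the producer
    and the consumer, so each node is loaded from memory once."""
    node.value *= s
    return node.value + sum(scale_and_total(c, s) for c in node.children)
```

    On large trees the fused version halves the number of traversals (and cache misses), which is the locality benefit the paper quantifies.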

  7. Operating experience with a VMEbus multiprocessor system for data acquisition and reduction in nuclear physics

    NASA Astrophysics Data System (ADS)

    Kutt, P. H.; Balamuth, D. P.

    1989-10-01

    Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.

  8. Space Station Technology, 1983

    NASA Technical Reports Server (NTRS)

    Wright, R. L. (Editor); Mays, C. R. (Editor)

    1984-01-01

    This publication is a compilation of the panel summaries presented in the following areas: systems/operations technology; crew and life support: EVA; crew and life support: ECLSS; attitude, control, and stabilization; human capabilities; auxiliary propulsion; fluid management; communications; structures and mechanisms; data management; power; and thermal control. The objective of the workshop was to aid the Space Station Technology Steering Committee in defining and implementing a technology development program to support the establishment of a permanent human presence in space. This compilation will provide the participants and their organizations with the information presented at this workshop in a referenceable format. This information will establish a stepping stone for users of space station technology to develop new technology and plan future tasks.

  9. Cray Research, Inc. Cray 1-S, Cray FORTRAN Translator (CFT) Version 1.11 Bugfix 1. Validation summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-09-09

    This Validation Summary Report (VSR) for the Cray Research, Inc., CRAY FORTRAN Translator (CFT) Version 1.11 Bugfix 1 running under the CRAY Operating System (COS) Version 1.12 provides a consolidated summary of the results obtained from the validation of the subject compiler against the 1978 FORTRAN Standard (X3.9-1978/FIPS PUB 69). The compiler was validated against the Full Level FORTRAN level of FIPS PUB 69. The VSR is made up of several sections showing all the discrepancies found, if any. These include an overview of the validation which lists all categories of discrepancies together with the tests which failed.

  10. Study of application of space telescope science operations software for SIRTF use

    NASA Technical Reports Server (NTRS)

    Dignam, F.; Stetson, E.; Allendoerfer, W.

    1985-01-01

    The design and development of the Space Telescope Science Operations Ground System (ST SOGS) was evaluated to compile a history of lessons learned that would benefit NASA's Space Infrared Telescope Facility (SIRTF). Forty-nine specific recommendations resulted and were categorized as follows: (1) requirements: a discussion of the content, timeliness and proper allocation of the system and segment requirements and the resulting impact on SOGS development; (2) science instruments: a consideration of the impact of the Science Instrument design and data streams on SOGS software; and (3) contract phasing: an analysis of the impact of beginning the various ST program segments at different times. Approximately half of the software design and source code might be useable for SIRTF. Transportability of this software requires, at minimum, a compatible DEC VAX-based architecture and VMS operating system, system support software similar to that developed for SOGS, and continued evolution of the SIRTF operations concept and requirements such that they remain compatible with ST SOGS operation.

  11. Advancing HAL to an operational status

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The development of the HAL language and the compiler implementation of the mathematical subset of the language have been completed. On-site support, training, and maintenance of this compiler were enlarged to broaden the implementation of HAL to include all features of the language specification for NASA manned space usage. A summary of activities associated with the HAL compiler for the UNIVAC 1108 is given.

  12. Tera-Op Reliable Intelligently Adaptive Processing System (TRIPS)

    DTIC Science & Technology

    2004-04-01

    flop creates a loadable FIFO queue, fifo pload. A prototype of the HML simulator is implemented using the functional language OCaml. The language type...Hardware Meta Language...operates on the TRIPS Intermediate Language (TIL) produced by the Scale compiler. We also adapted the GNU binary utilities to implement an assembler and

  13. Adaptation Patterns as a Conceptual Tool for Designing the Adaptive Operation of CSCL Systems

    ERIC Educational Resources Information Center

    Karakostas, Anastasios; Demetriadis, Stavros

    2011-01-01

    While adaptive collaboration support has become the focus of increasingly intense research efforts in the CSCL domain, scarce, however, remain the research-based evidence on pedagogically useful ideas on what and how to adapt during the collaborative learning activity. Based principally on two studies, this work presents a compilation of…

  14. Operations analysis (study 2.1). Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1974-01-01

    A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.

  15. IUS/TUG orbital operations and mission support study. Volume 2: Interim upper stage operations

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Background data and study results are presented for the interim upper stage (IUS) operations phase of the IUS/tug orbital operations study. The study was conducted to develop IUS operational concepts and an IUS baseline operations plan, and to provide cost estimates for IUS operations. The approach used was to compile and evaluate baseline concepts, definitions, and system, and to use that data as a basis for the IUS operations phase definition, analysis, and costing analysis. Both expendable and reusable IUS configurations were analyzed and two autonomy levels were specified for each configuration. Topics discussed include on-orbit operations and interfaces with the orbiter, the tracking and data relay satellites and ground station support capability analysis, and flight control center sizing to support the IUS operations.

  16. The SIFT hardware/software systems. Volume 2: Software listings

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1985-01-01

    This document contains software listings of the SIFT operating system and application software. The software is coded for the most part in a variant of the Pascal language, Pascal*. Pascal* is a cross-compiler running on the VAX and Eclipse computers. The output of Pascal* is BDX-390 assembler code. When necessary, modules are written directly in BDX-390 assembler code. The listings in this document supplement the description of the SIFT system found in Volume 1 of this report, A Detailed Description.

  17. Ada technology support for NASA-GSFC

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Utilization of the Ada programming language and environments to perform directorate functions was reviewed. The Mission and Data Operations Directorate Network (MNET) conversion effort was chosen as the first task for evaluation and assistance. The MNET project required the rewriting of the existing Network Control Program (NCP) in the Ada programming language. The DEC Ada compiler running on the VAX under VMS was used for the initial development efforts. Stress tests on the newly delivered version of the DEC Ada compiler were performed. The new Alsys Ada compiler was purchased for the IBM PC AT. A prevalidated version of the compiler was obtained. The compiler was then validated.

  18. Ada Compiler Validation Summary Report: Certificate Number: 890420W1.10066 International Business Machines Corporation, IBM Development System for the Ada Language, AIX/RT Ada Compiler, Version 1.1.1, IBM RT PC 6150-125

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language AIX/RT Ada Compiler, Version 1.1.1, Wright-Patterson AFB...Certificate Number: 890420W1.10066 International Business Machines Corporation IBM Development System for the Ada Language AIX/RT Ada Compiler, Version 1.1.1...TEST INFORMATION The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation

  19. Columbia River Component Data Evaluation Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.S. Cearlock

    2006-08-02

    The purpose of the Columbia River Component Data Compilation and Evaluation task was to compile, review, and evaluate existing information for constituents that may have been released to the Columbia River due to Hanford Site operations. Through this effort an extensive compilation of information pertaining to Hanford Site-related contaminants released to the Columbia River has been completed for almost 965 km of the river.

  20. Maintenance and Operations and the School Business Administrator: A Compilation of Articles from "School Business Affairs." The Professional Development Series.

    ERIC Educational Resources Information Center

    Association of School Business Officials International, Reston, VA.

    Fourteen million students attend schools needing extensive repair or remodeling. It is estimated that U.S. schools will require as much as $112 billion to bring them up to a good overall condition and an additional $12 billion to comply with federal mandates. This book compiles what is considered the best maintenance and operations articles that…

  1. STAR (Simple Tool for Automated Reasoning): Tutorial guide and reference manual

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1985-01-01

    STAR is an interactive, interpreted programming language for the development and operation of Artificial Intelligence application systems. The language is intended for use primarily in the development of software application systems which rely on a combination of symbolic processing, central to the vast majority of AI algorithms, with routines and data structures defined in compiled languages such as C, FORTRAN and PASCAL. References to routines and data structures defined in compiled languages are intermixed with symbolic structures in STAR, resulting in a hybrid operating environment in which symbolic and non-symbolic processing and organization of data may interact to a high degree within the execution of particular application systems. The STAR language was developed in the course of a project involving AI techniques in the interpretation of imaging spectrometer data and is derived in part from a previous language called CLIP. The interpreter for STAR is implemented as a program defined in the language C and has been made available for distribution in source code form through NASA's Computer Software Management and Information Center (COSMIC). Contained within this report are the STAR Tutorial Guide, which introduces the language in a step-by-step manner, and the STAR Reference Manual, which provides a detailed summary of the features of STAR.

  2. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double precision floating point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.

  3. ALCHEMIST (Anesthesia Log, Charge Entry, Medical Information, and Statistics)

    PubMed Central

    Covey, M. Carl

    1979-01-01

    This paper presents an automated system for the handling of charges and information processing within the Anesthesiology department of the University of Arkansas for the Medical Sciences (UAMS). The purpose of the system is to take the place of cumbersome, manual billing procedures and in the process of automated charge generation, to compile a data base of patient data for later use. ALCHEMIST has demonstrated its value by increasing both the speed and the accuracy of generation of patient charges as well as facilitating the compilation of valuable, informative reports containing statistical summaries of all aspects of the UAMS operating wing case load. ALCHEMIST allows for the entry of fifty different sets of information (multiple items in some sets) for a total of 107 separate data elements from the original anesthetic record. All this data is entered as part of the charge entry procedure.

  4. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
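    The gate types listed in this abstract can be sketched for independent basic events (a hypothetical Python illustration of the probability rules involved, not the FTC's actual solution technique, which handles full hierarchical trees and precision control):

```python
# Sketch: event probabilities for the five fault-tree gate types
# (AND, OR, EXCLUSIVE OR, INVERT, m OF n), assuming independent inputs.
# Hypothetical illustration, not the Fault Tree Compiler implementation.
from itertools import combinations

def p_and(ps):                   # all inputs occur
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(ps):                    # at least one input occurs
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_xor(p1, p2):               # exactly one of two inputs occurs
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def p_invert(p):                 # input does not occur
    return 1.0 - p

def p_m_of_n(ps, m):             # at least m of the n inputs occur
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1.0 - ps[i])
            total += term
    return total
```

    For a small two-branch tree, the top-event probability would compose these directly, e.g. `p_or([p_and([0.1, 0.2]), p_m_of_n([0.1, 0.1, 0.1], 2)])`.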

  5. Compiling quantum circuits to realistic hardware architectures using temporal planners

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
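    As a point of contrast with the temporal-planning approach, the nearest-neighbor constraint itself can be illustrated with a naive greedy baseline: insert SWAPs on a linear qubit chain until each two-qubit gate acts on adjacent qubits (a hypothetical, heavily simplified sketch, not the paper's method):

```python
# Sketch: greedy SWAP insertion to satisfy nearest-neighbor constraints on a
# linear qubit chain. Hypothetical baseline; the paper uses temporal planners
# to find much better (shorter-duration) compilations.

def compile_linear(gates, n_qubits):
    """gates: list of (q1, q2) logical two-qubit gates.
    Returns a schedule of ('SWAP'|'GATE', phys_a, phys_b) operations."""
    pos = list(range(n_qubits))          # pos[logical qubit] = physical site
    schedule = []
    for q1, q2 in gates:
        # move q1's qubit toward q2's, one adjacent swap at a time
        while abs(pos[q1] - pos[q2]) > 1:
            step = 1 if pos[q1] < pos[q2] else -1
            neighbor_phys = pos[q1] + step
            other = pos.index(neighbor_phys)   # logical qubit at that site
            schedule.append(('SWAP', pos[q1], neighbor_phys))
            pos[q1], pos[other] = pos[other], pos[q1]
        schedule.append(('GATE', pos[q1], pos[q2]))
    return schedule
```

    Every emitted GATE then acts on physically adjacent sites; minimizing the total duration of such a schedule is what the temporal planners optimize.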

  6. Southern California Daily Energy Report

    EIA Publications

    2016-01-01

    EIA has updated its Southern California Daily Energy Report to provide additional information on key energy market indicators for the winter season. The dashboard includes information that EIA regularly compiles about energy operations and the management of natural gas and electricity systems in Southern California in the aftermath of a leak at the Aliso Canyon natural gas storage facility outside of Los Angeles.

  7. Summary Report on NRL Participation in the Microwave Landing System Program.

    DTIC Science & Technology

    1980-08-19

    shifters were measured and statistically analyzed. Several research contracts for promising phased array techniques were awarded to industrial contractors...program was written for compiling statistical data on the measurements, which reads out insertion phase characteristics and standard deviation...GLOSSARY OF TERMS ALPA Airline Pilots’ Association ATA Air Transport Association AWA Australasian Wireless Amalgamated AWOP All-weather Operations

  8. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed-mode processor for which functions can be compiled at significantly less energy cost using probabilistic computing circuits. More specifically, it will be shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables, it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where more than a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
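    The Bernoulli-to-exponential construction mentioned above can be imitated in software (a hedged sketch, not the FPAA circuit): count Bernoulli trials until the first success; for small success probability p, the scaled trial count approximates an exponential random variable.

```python
# Sketch: exponential random variables built from Bernoulli trials
# (software analogue of the construction described above, not the FPAA circuit).
# A geometric trial count N with success probability p has E[N] = 1/p, and
# p*N converges in distribution to Exponential(1) as p -> 0.
import random

def bernoulli(p, rng):
    return rng.random() < p

def exponential_from_bernoulli(rate, p, rng):
    """Count Bernoulli(p) trials to the first success; return p*N/rate,
    an approximate Exponential(rate) sample for small p."""
    n = 1
    while not bernoulli(p, rng):
        n += 1
    return p * n / rate

rng = random.Random(42)
samples = [exponential_from_bernoulli(rate=2.0, p=0.01, rng=rng)
           for _ in range(5000)]
mean = sum(samples) / len(samples)   # should approach 1/rate = 0.5
```

    Arbitrary distributions then follow by transforming these variables (e.g. inverse-CDF methods), which is the generality the abstract claims for the hardware.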

  9. PV System Component Fault and Failure Compilation and Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  10. Descriptions of selected digital spatial data for Ravenna Army Ammunition Plant, Ohio

    USGS Publications Warehouse

    Schalk, C.W.; Darner, R.A.

    1998-01-01

    Digital spatial data of Ravenna Army Ammunition Plant (RVAAP), in northeastern Ohio, were compiled or generated from existing maps for U.S. Army Industrial Operations Command. The data are in the Ohio north state-plane coordinate system (North American Datum of 1983) in an ARC/INFO geographic information system format. The data comprise 15 layers, which include boundaries, topography, and natural and cultural features. An additional layer comprises scanned and rectified aerial photographs of RVAAP.

  11. Compilation of 1986 annual reports of the Navy ELF (Extremely Low Frequency) communications system ecological monitoring program, volume 2

    NASA Astrophysics Data System (ADS)

    1987-07-01

    The U.S. Navy is conducting a long-term program to monitor for possible effects from the operation of its Extremely Low Frequency (ELF) Communications System to resident biota and their ecological relationships. This report documents progress of the following studies: soil amoeba; soil and litter arthropoda and earthworm studies; biological studies on pollinating insects: megachilid bees; and small vertebrates: small mammals and nesting birds.

  12. Current Methods for Evaluation of Physical Security System Effectiveness.

    DTIC Science & Technology

    1981-05-01

    It also helps the user modify a data set before further processing. (c) Safeguards Engineering and Analysis Data Base (SEAD)--To complete SAFE’s...graphic display software in addition to a Fortran compiler, and up to about 35,000 words of storage. For a fairly complex problem, a single run through...operational software. BIBLIOGRAPHY Lenz, J.E., "The PROSE (Protection System Evaluator) Model," Proc. 1979 Winter Simulation Conference, IEEE, 1979

  13. Fault diagnosis in orbital refueling operations

    NASA Technical Reports Server (NTRS)

    Boy, Guy A.

    1988-01-01

    Usually, operation manuals are provided for helping astronauts during space operations. These manuals include normal and malfunction procedures. Transferring operation manual knowledge into a computerized form is not a trivial task. This knowledge is generally written by designers or operation engineers and is often quite different from the user logic. The latter is usually a compiled version of the former. Experiments are in progress to assess the user logic. HORSES (Human - Orbital Refueling System - Expert System) is an attempt to include both of these logics in the same tool. It is designed to assist astronauts during monitoring and diagnosis tasks. Basically, HORSES includes a situation recognition level coupled to an analytical diagnoser, and a meta-level working on both of the previous levels. HORSES is a good tool for modeling task models and is also more broadly useful for knowledge design. The presentation is represented by abstract and overhead visuals only.

  14. Ada Compiler Validation Summary Report: Certificate Number 890420W1.10073: International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, IBM 3083 (Host and Target)

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language, VM/CMS Ada Compiler, Version 2.1.1, Wright-Patterson AFB, IBM 3083...890420W1.10073 International Business Machines Corporation IBM Development System for the Ada Language VM/CMS Ada Compiler Version 2.1.1 IBM 3083...International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default option settings except for the

  15. Program package for multicanonical simulations of U(1) lattice gauge theory-Second version

    NASA Astrophysics Data System (ADS)

    Bazavov, Alexei; Berg, Bernd A.

    2013-03-01

    A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summary Program title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal Reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 Fortran 77 compiler. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below).
Summary of revisions: epsilon=one/10**10 is replaced by epsilon/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization, the conversion into real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled using script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
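The jackknife error bars named under "Solution method" can be sketched generically (a minimal Python illustration of the leave-one-out estimator, not the package's Fortran code):

```python
# Sketch: leave-one-out jackknife estimate and error bar for any estimator.
# Generic illustration; the package applies this to Monte Carlo observables.

def jackknife(samples, estimator):
    """Return (estimate, error) via the leave-one-out jackknife."""
    n = len(samples)
    full = estimator(samples)
    # estimator applied to each sample-deleted subset
    loo = [estimator(samples[:i] + samples[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    # jackknife variance: (n-1)/n * sum of squared deviations
    var = (n - 1) / n * sum((x - mean_loo) ** 2 for x in loo)
    return full, var ** 0.5

mean = lambda xs: sum(xs) / len(xs)
est, err = jackknife([1.0, 2.0, 3.0, 4.0], mean)
```

For the sample mean the jackknife error reproduces the usual standard error of the mean; its value is in applying the same machinery to nonlinear estimators such as the specific heat.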

  16. Ada Compiler Validation Summary Report: Certificate Number: 940325S1.11352 DDC-I DACS Sun SPARC/Solaris to Pentium PM Bare Ada Cross Compiler System, Version 4.6.4, Sun SPARCclassic => Intel Pentium (Operated as Bare Machine) Based in Xpress Desktop (Intel Product Number: XBASE6E4F-B)

    DTIC Science & Technology

    1994-03-25

    Technology Building 225, Room A266, Gaithersburg, Maryland 20899 U.S.A. Ada Validation Organization, Ada Joint Program Office, Director, Computer & Software, David R. Basel...Standards and Technology Building 225, Room A266, Gaithersburg, Maryland 20899 U.S.A. and Ada Joint Program Office, Director, Computer & Software, David R...characters, a bar ("|") is written in the 16th position and the rest of the characters are not printed. * The place of the definition, i.e., a line

  17. Pioneer unmanned air vehicle accomplishments during Operation Desert Storm

    NASA Astrophysics Data System (ADS)

    Christner, James H.

    1991-12-01

    This paper will describe the accomplishments and lessons learned of the Pioneer Unmanned Air Vehicle (UAV) during Operations Desert Shield and Desert Storm. The Pioneer UAV has been deployed with three branches of the U.S. military (USA, USN, and USMC) for the past four years. Although the system has compiled over 6,000 flight hours, the recent conflict in the Gulf is the first opportunity to demonstrate its true value in a combat scenario. In a relatively short time (42 days), 307 flights and 1,011 flight hours were completed on Operation Desert Storm. This, coupled with the accuracy of various weapons systems that Pioneer observed/cued for, resulted in timely target engagements. This paper will chronicle the Pioneer deployment and accomplishments on Operations Desert Shield and Desert Storm. Various employment methods, tactics, doctrine, and lessons learned will be presented.

  18. Application of the CCT system and its effects on the works of compilations and publications.

    NASA Astrophysics Data System (ADS)

    Shu, Sizhu

    The present state of compilation and composition work with microcomputers at Shanghai Observatory is introduced, and the applications of the CCT system to compilation and composition are presented. The effects of microcomputer-based composition on the work of compilations and publications in recent years are discussed.

  19. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  20. Models of unit operations used for solid-waste processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, G.M.; Glaub, J.C.; Diaz, L.F.

    1984-09-01

    This report documents the unit operations models that have been developed for typical refuse-derived-fuel (RDF) processing systems. These models, which represent the mass balances, energy requirements, and economics of the unit operations, are derived, where possible, from basic principles. Empiricism has been invoked where a governing theory has yet to be developed. Field test data and manufacturers' information, where available, supplement the analytical development of the models. A literature review has also been included for the purpose of compiling and discussing in one document the available information pertaining to the modeling of front-end unit operations. Separate analyses have been done for each task.

  1. Operational Concept for the NASA Constellation Program's Ares I Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Best, Joel; Chavers, Greg; Richardson, Lea; Cruzen, Craig

    2008-01-01

    The Ares I design brings together innovation and new technologies with established infrastructure and proven heritage hardware to achieve safe, reliable, and affordable human access to space. NASA has 50 years of experience from Apollo and the Space Shuttle. The Marshall Space Flight Center's Mission Operations Laboratory is leading an operability benchmarking effort to compile operations and supportability lessons learned from large launch vehicle systems, both domestic and international. Ares V will be maturing as the Shuttle is retired and the Ares I design enters the production phase. More details on the Ares I and Ares V will be presented at SpaceOps 2010 in Huntsville, Alabama, U.S.A., April 2010.

  2. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  3. The Telecommunications and Data Acquisition Report

    NASA Technical Reports Server (NTRS)

    Posner, Edward C. (Editor)

    1991-01-01

    A compilation is presented of articles on developments in programs managed by JPL's Office of Telecommunications and Data Acquisition. In space communications, radio navigation, radio science, and ground based radio and radar astronomy, activities of the Deep Space Network are reported in planning, in supporting research and technology, in implementation, and in operations. Also included is standards activity at JPL for space data and information systems and reimbursable DSN work performed for other space agencies through NASA. In the search for extraterrestrial intelligence (SETI), implementation and operations are reported for searching the microwave spectrum.

  4. Minimization In Digital Design As A Meta-Planning Problem

    NASA Astrophysics Data System (ADS)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three subprocesses: compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  5. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
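    The "whole-database" style described above can be sketched in C++ via operator overloading (an assumption on my part; Tensoral's actual syntax and implementation are not shown in the abstract): one expression acts on an entire field, hiding the loops a conventional language would require.

    ```cpp
    #include <cstddef>
    #include <utility>
    #include <vector>

    // A scalar field whose arithmetic operators act on the whole array at once,
    // mimicking the Tensoral idea that one statement corresponds to many lines
    // of conventional loop code.
    struct Field {
        std::vector<double> data;
        explicit Field(std::vector<double> d) : data(std::move(d)) {}
    };

    inline Field operator+(const Field& a, const Field& b) {
        Field r(a.data);  // assumes equal-sized fields
        for (std::size_t i = 0; i < r.data.size(); ++i) r.data[i] += b.data[i];
        return r;
    }

    inline Field operator*(double s, const Field& a) {
        Field r(a.data);
        for (double& x : r.data) x *= s;
        return r;
    }
    ```

    With this, `c = a + 2.0 * b` updates every element of the field in one statement; a compiler such as Tensoral's can additionally fuse and schedule such expressions.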

  6. Improved programs for DNA and protein sequence analysis on the IBM personal computer and other standard computer systems.

    PubMed Central

    Mount, D W; Conrad, B

    1986-01-01

    We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780

  7. Cross-Compiler for Modeling Space-Flight Systems

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    Ripples is a computer program that makes it possible to specify arbitrarily complex space-flight systems in an easy-to-learn, high-level programming language and to have the specification automatically translated into LibSim, which is a text-based computing language in which such simulations are implemented. LibSim is a very powerful simulation language, but learning it takes considerable time, and it requires that models of systems and their components be described at a very low level of abstraction. To construct a model in LibSim, it is necessary to go through a time-consuming process that includes modeling each subsystem, including defining its fault-injection states, input and output conditions, and the topology of its connections to other subsystems. Ripples makes it possible to describe the same models at a much higher level of abstraction, thereby enabling the user to build models faster and with fewer errors. Ripples can be executed on a variety of computers and operating systems, and can be supplied in either source code or binary form. It must be run in conjunction with a Lisp compiler.

  8. C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems

    NASA Astrophysics Data System (ADS)

    Vukics, András

    2012-06-01

    C++QED is a versatile framework for simulating open quantum dynamics. It allows the user to build arbitrarily complex quantum systems from elementary free subsystems and interactions, and to simulate their time evolution with the available time-evolution drivers. Through this framework, we introduce a design which should be generic for high-level representations of composite quantum systems. It relies heavily on the object-oriented and generic programming paradigms on the one hand, and on the other, on compile-time algorithms, in particular C++ template-metaprogramming techniques. The core of the design is the data structure which represents the state vectors of composite quantum systems. This data structure models the multi-array concept. The use of template metaprogramming is not only crucial to the design; with its use, all computations pertaining to the layout of the simulated system can be shifted to compile time, hence reducing runtime. Program summary: Program title: C++QED Catalogue identifier: AELU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: http://cpc.cs.qub.ac.uk/licence/aelu_v1_0.html. The C++QED package contains other software packages, Blitz, Boost and FLENS, all of which may be distributed freely but have individual license requirements. Please see individual packages for license conditions. No. of lines in distributed program, including test data, etc.: 597 974 No. of bytes in distributed program, including test data, etc.: 4 874 839 Distribution format: tar.gz Programming language: C++ Computer: i386-i686, x86_64 Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60 MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1 MB. 
The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2, 20 External routines: Boost C++ libraries (http://www.boost.org/), GNU Scientific Library (http://www.gnu.org/software/gsl/), Blitz++ (http://www.oonumerics.org/blitz/), Linear Algebra Package - Flexible Library for Efficient Numerical Solutions (http://flens.sourceforge.net/). Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [1]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [2] and Monte Carlo wave-function simulation [3]. Solution method: Master equation, Monte Carlo wave-function method. Restrictions: Total dimensionality of the system. Master equation - few thousands. Monte Carlo wave-function trajectory - several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. Supplementary information: http://cppqed.sourceforge.net/. Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks.
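    The idea of shifting layout computations for a composite system to compile time can be sketched as follows. This is a hypothetical illustration, not the actual C++QED API: the total state-vector dimension of a composite is the product of its subsystem dimensions, computed here entirely at compile time with a C++17 fold expression.

    ```cpp
    #include <cstddef>

    // Compile-time layout of a composite quantum system: the rank and total
    // dimension are computed from the subsystem dimensions before the program
    // ever runs, so no runtime cost is incurred.
    template <std::size_t... Dims>
    struct Composite {
        static constexpr std::size_t rank      = sizeof...(Dims);
        static constexpr std::size_t dimension = (Dims * ... * std::size_t{1});
    };

    // Example: a two-level atom coupled to a cavity mode truncated at 10 photons.
    using AtomPlusMode = Composite<2, 10>;
    static_assert(AtomPlusMode::rank == 2, "two subsystems");
    static_assert(AtomPlusMode::dimension == 20, "2 x 10 state-vector entries");
    ```

    Because these are `static_assert`s, a layout mistake is a compilation error rather than a runtime fault, which matches the abstract's point about trading compile time for runtime.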

  9. Compilation of 1986 annual reports of the Navy ELF (extremely low frequency) communications system ecological-monitoring program. Volume 2. Tabs D-G. Annual progress report, January-December 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-07-01

    The U.S. Navy is conducting a long-term program to monitor for possible effects from the operation of its Extremely Low Frequency (ELF) Communications System to resident biota and their ecological relationships. This report documents progress of the following studies: Soil Amoeba; Soil and Litter Arthropoda and Earthworm Studies; Biological Studies on Pollinating insects: Megachilid Bees; and Small Vertebrates: Small Mammals and Nesting Birds.

  10. Pipeline Optimization Program (PLOP)

    DTIC Science & Technology

    2006-08-01

    the framework of the Dredging Operations Decision Support System (DODSS, https://dodss.wes.army.mil/wiki/0). PLOP compiles industry standards and...efficiency point (BEP). In the interest of acceptable wear rate on the pump, industrial standards dictate that the flow...percentage of the flow rate corresponding to the BEP. Pump Acceptability Rules. The facts for pump performance, industrial standards and pipeline and

  11. The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1978-01-01

    Place, Route in 2-Dimensions is a standard cell automatic layout computer program for generating large scale integrated/metal oxide semiconductor arrays. The program was utilized successfully for a number of years in both the government and private sectors but until now was undocumented. The compilation, loading, and execution of the program on a Sigma V CP-V operating system are described.

  12. Automated storage and retrieval of data obtained in the Interkosmos project

    NASA Technical Reports Server (NTRS)

    Ziolkovski, K.; Pakholski, V.

    1975-01-01

    The formation of a data bank and information retrieval system for scientific data is described. The stored data can be digital or documentation data. Data classification methods are discussed along with definition and compilation of the dictionary utilized, definition of the indexing scheme, and definition of the principles used in constructing a file for documents, data blocks, and tapes. Operating principles are also presented.

  13. A translator writing system for microcomputer high-level languages and assemblers

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Knight, J. C.; Noonan, R. E.

    1980-01-01

    In order to implement high-level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table-driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.
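    The table-driven code generation and "always compilable" defaults described above can be sketched as follows (a minimal illustration of the general technique; the names and table format are my assumptions, not the system's actual design): each intermediate operation is looked up in a retargetable table, and operations not yet covered fall back to a placeholder so the output still assembles.

    ```cpp
    #include <map>
    #include <string>
    #include <vector>

    // A code-generation table maps intermediate operations to target
    // instruction templates; retargeting means supplying a new table.
    using CodeTable = std::map<std::string, std::string>;

    inline std::vector<std::string> generate(const std::vector<std::string>& ops,
                                             const CodeTable& table) {
        std::vector<std::string> out;
        for (const auto& op : ops) {
            auto it = table.find(op);
            // Default section: emit a harmless placeholder for uncovered ops,
            // keeping the output assemblable during iterative enhancement.
            out.push_back(it != table.end() ? it->second : "NOP ; TODO " + op);
        }
        return out;
    }
    ```

    Filling in table entries one at a time mirrors the iterative-enhancement methodology the abstract describes: the generated program is executable from the first entry onward.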

  14. Ada (Trade Name) Compiler Validation Summary Report. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800.

    DTIC Science & Technology

    1987-04-30

    ADA (TRADE NAME) COMPILER VALIDATION SUMMARY REPORT / HARRIS CORPORATION (U) INFORMATION SYSTEMS AND TECHNOLOGY CENTER W-P AFB OH ADA...Compiler Validation Summary Report: 30 APR 1986 to 30 APR 1987 Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800...the United States Government (Ada Joint Program Office). Ada Compiler Validation Summary Report: Compiler Name: HARRIS Ada Compiler, Version 1.0 Host

  15. Interactive display of molecular models using a microcomputer system

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Macelroy, R. D.

    1980-01-01

    A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display, and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers, and command options for linking and executing programs. The graphics program, written in UCSD PASCAL, involves centering the coordinate system, transforming the centered model coordinates into homogeneous coordinates, constructing a viewing transformation matrix to operate on the coordinates, clipping invisible points, perspective transformation, and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. The data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggest general applicability.
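    The perspective step of the pipeline described above can be sketched as follows (a textbook illustration under my own assumptions, not the program's PASCAL source): a centered model point is treated as the homogeneous point (x, y, z, 1), projected onto a view plane at distance d from the eye, and divided through by the homogeneous coordinate w.

    ```cpp
    // Perspective projection via homogeneous coordinates: a simple perspective
    // matrix maps (x, y, z, 1) to (x, y, z, z/d); dividing by w = z/d yields
    // the projected 2-D point. Assumes the point is in front of the eye (z > 0).
    struct Vec2 { double x, y; };

    inline Vec2 perspective(double x, double y, double z, double d) {
        double w = z / d;       // homogeneous coordinate after projection
        return { x / w, y / w };
    }
    ```

    Scaling the result to screen coordinates and clipping points with z outside the view volume would complete the pipeline the abstract lists.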

  16. A new approach to telemetry data processing. Ph.D. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Broglio, C. J.

    1973-01-01

    An approach for a preprocessing system for telemetry data processing was developed. The philosophy of the approach is to develop a preprocessing system that interfaces with the main processor and relieves it of the burden of stripping information from a telemetry data stream. To accomplish this task, a telemetry preprocessing language was developed. Also, a hardware device for implementing the operation of this language was designed using a cellular logic module concept. In the development of the hardware device and the cellular logic module, a distributed form of control was implemented. This is accomplished by a technique of one-to-one intermodule communications and a set of privileged communication operations. By transferring the control state from module to module, the control function is dispersed through the system. A compiler for translating the preprocessing language statements into an operations table for the hardware device was also developed. Finally, to complete the system design and verify it, a simulator for the cellular logic module was written using the APL/360 system.

  17. Ada Compiler Validation Summary Report: Certificate Number: 940630W1. 11372 Rational Software Corporation VADS System V/88 Release 4, VAda-110-8484, Product Number: 2100-01464, Version 6.2 DG AViiON G70592-A (M88110) under UNIX System V Release 4

    DTIC Science & Technology

    1994-07-21

    Information Systems Agency, Center for Information Management DECLARATION OF CONFORMANCE The following declaration of conformance was supplied by the...services such as resource allocation, scheduling, input/output control, and data management. Usually, operating systems are predominantly software...Ada programming language. 1-4 CHAPTER 2 IMPLEMENTATION DEPENDENCIES 2.1 WITHDRAWN TESTS The following tests have been withdrawn by the AVO. The

  18. Earth Observatory Satellite system definition study. Report no. 7: EOS system definition report

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The design concept and operational aspects of the Earth Observatory Satellite (EOS) are presented. A table of the planned EOS missions is included to show the purpose of the mission, the instruments involved, and the launch date. The subjects considered in the analysis of the EOS development are: (1) system requirements, (2) design/cost trade methodology, (3) observatory design alternatives, (4) the data management system, (5) the design evaluation and preferred approach, (6) program cost compilation, (7) follow-on mission accommodation, and (8) space shuttle interfaces and utilization. Illustrations and block diagrams of the spacecraft configurations are provided.

  19. Interpretive model for ''A Concurrency Method''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, C.L.

    1987-01-01

    This paper describes an interpreter for ''A Concurrency Method,'' in which concurrency is the inherent mode of operation and not an appendage to sequentiality. This method is based on the notions of data-driven execution and single assignment while preserving a natural manner of programming. The interpreter is designed for and implemented on a network of Corvus Concept Personal Workstations, which are based on the Motorola MC68000 super-microcomputer. The interpreter utilizes the MC68000 processors in each workstation by communicating across OMNINET, the local area network designed for the workstations. The interpreter is a complete system, containing an editor, a compiler, an operating system with load balancer, and a communication facility. The system includes the basic arithmetic and trigonometric primitive operations for mathematical computations as well as the ability to construct more complex operations from these. 9 refs., 5 figs.

  20. Systems test facilities existing capabilities compilation

    NASA Technical Reports Server (NTRS)

    Weaver, R.

    1981-01-01

    Systems test facilities (STFs) to test total photovoltaic systems and their interfaces are described. The systems development (SD) plan is a compilation of existing and planned STFs, as well as subsystem and key component testing facilities. It is recommended that the existing capabilities compilation be updated annually to provide an assessment of STF activity and to disseminate STF capabilities, status, and availability to the photovoltaics program.

  1. A Code Generation Approach for Auto-Vectorization in the Spade Compiler

    NASA Astrophysics Data System (ADS)

    Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung

    We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
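    The library side of the approach above can be sketched as follows. This is a hedged illustration, not Spade's actual library: the compiler lowers vector expressions to calls into pre-vectorized C++ routines, whose bodies a real implementation would specialize per architecture (e.g. with SIMD intrinsics); the portable fallback below shows only the interface shape.

    ```cpp
    #include <cstddef>

    // Hypothetical pre-vectorized operation library. Each routine is the target
    // of a lowered Spade vector expression; architecture-specific builds would
    // replace these scalar loops with intrinsics.
    namespace veclib {

    inline void add(const float* a, const float* b, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }

    inline float dot(const float* a, const float* b, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i) s += a[i] * b[i];
        return s;
    }

    }  // namespace veclib
    ```

    Keeping the vectorized kernels in a library, rather than relying on the host compiler's auto-vectorizer, is what allows the stream compiler to guarantee vector code regardless of how gcc or icc analyze the surrounding loop.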

  2. Geosoft eXecutables (GX's) Developed by the U.S. Geological Survey, Version 2.0, with Notes on GX Development from Fortran Code

    USGS Publications Warehouse

    Phillips, Jeffrey D.

    2007-01-01

    Introduction Geosoft executables (GX's) are custom software modules for use with the Geosoft Oasis montaj geophysical data processing system, which currently runs under the Microsoft Windows 2000 or XP operating systems. The U.S. Geological Survey (USGS) uses Oasis montaj primarily for the processing and display of airborne geophysical data. The ability to add custom software modules to the Oasis montaj system is a feature employed by the USGS in order to take advantage of the large number of geophysical algorithms developed by the USGS during the past half century. The main part of this report, along with Appendix 1, describes Version 2.0 GX's developed by the USGS or specifically for the USGS by contractors. These GX's perform both basic and advanced operations. Version 1.0 GX's developed by the USGS were described by Phillips and others (2003), and are included in Version 2.0. Appendix 1 contains the help files for the individual GX's. Appendix 2 describes the new method that was used to create the compiled GX files, starting from legacy Fortran source code. Although the new method shares many steps with the approach presented in the Geosoft GX Developer manual, it differs from that approach in that it uses free, open-source Fortran and C compilers and avoids all Fortran-to-C conversion.

  3. Copilot: Monitoring Embedded Systems

    NASA Technical Reports Server (NTRS)

    Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn

    2012-01-01

    Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In ultra-critical systems, even if the software is fault-free, because of the inherent unreliability of commodity hardware and the adversity of operational environments, processing units (and their hosted software) are replicated, and fault-tolerant algorithms are used to compare the outputs. We investigate software monitoring in distributed fault-tolerant systems, as well as the implementation of fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, specifically designed for generating monitors for distributed, hard real-time systems. We also describe two case studies in which we generated Copilot monitors in avionics systems.
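    The replicated-output comparison mentioned above can be sketched as a tiny monitor (my own illustration of the general fault-tolerance idea, not the Copilot language itself): each frame, the monitor collects the outputs of the replicated processing units and votes, so a single faulty replica cannot corrupt the accepted value.

    ```cpp
    #include <algorithm>
    #include <vector>

    // Simple replicated-output vote: with 2f+1 replicas and at most f faulty,
    // the median of the samples is a value produced or bracketed by correct
    // replicas. Takes samples by value because it sorts them.
    inline double majorityVote(std::vector<double> samples) {
        std::sort(samples.begin(), samples.end());
        return samples[samples.size() / 2];  // median element
    }
    ```

    A hard real-time monitor generated by an RV tool would evaluate such a vote on a fixed schedule and raise a fault flag when a replica's output drifts from the voted value.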

  4. Portuguese food composition database quality management system.

    PubMed

    Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A

    2010-11-01

    The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. 
The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers') comments, complaints and satisfaction, with minor adaptation. Implementation of the FCDB QMS proved to be a way of reducing the subjectivity of the compilation process and fully documenting it, and also facilitates training of new compilers. Furthermore, it has strengthened cooperation and trust among FCDB actors, as all of them were called to be involved in the process. On the basis of our practical results, we can conclude that ISO/IEC 17025 management requirements are an adequate reference for the implementation of INSA's FCDB QMS with the advantages of being well known to all members of staff and also being a common quality language among laboratories producing FCD. Combining quality systems and food composition activities endows the FCDB compilation process with flexibility, consistency and transparency, and facilitates its monitoring and assessment, providing the basis for strengthening confidence among users, data producers and compilers.

  5. Fast computation of close-coupling exchange integrals using polynomials in a tree representation

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich

    2011-03-01

    The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems. Program summary: Program title: TXINT Catalogue identifier: AEHS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 332 No. of bytes in distributed program, including test data, etc.: 157 086 Distribution format: tar.gz Programming language: Fortran 95 Computer: All with a Fortran 95 compiler Operating system: All with a Fortran 95 compiler RAM: Depends heavily on input, usually less than 100 MiB Classification: 16.10 Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model. Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials. Restrictions: We restrict ourselves to a defined collision system in the impact parameter model. 
    Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code. Additional comments: This program makes heavy use of the new features provided by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, and a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0; GNU Fortran 95 Compiler "g95" from version 4.2.0; Intel Fortran Compiler "ifort" from version 11.0.
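    The space-saving property of the left-child right-sibling representation mentioned above can be sketched as follows (a C++ illustration of the general data structure, under my own assumptions; TXINT itself is Fortran 95): each node carries exactly two links regardless of how many siblings it has, and a polynomial's terms form a sibling chain.

    ```cpp
    #include <cmath>

    // One term of a polynomial, stored as a node in a left-child right-sibling
    // tree. Only the sibling link is exercised here; a full implementation would
    // also use a child link for nested factors of symbolic expressions.
    struct Term {
        double coeff;
        int    exp;
        Term*  sibling;  // next term of the same polynomial
    };

    // Evaluate the sibling chain as the sum of coeff * x^exp over all terms.
    inline double eval(const Term* t, double x) {
        double s = 0.0;
        for (; t != nullptr; t = t->sibling) s += t->coeff * std::pow(x, t->exp);
        return s;
    }
    ```

    Because the node size is fixed at two pointers plus payload, arbitrary-arity expression trees fit in a plain binary-tree layout, which is what keeps the storage compact.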

  6. Flight evaluation results from the general-aviation advanced avionics system program

    NASA Technical Reports Server (NTRS)

    Callas, G. P.; Denery, D. G.; Hardy, G. H.; Nedell, B. F.

    1983-01-01

    A demonstration advanced avionics system (DAAS) for general-aviation aircraft was tested at NASA Ames Research Center to provide information required for the design of reliable, low-cost, advanced avionics systems which would make general-aviation operations safer and more practicable. Guest pilots flew a DAAS-equipped NASA Cessna 402-B aircraft to evaluate the usefulness of data busing, distributed microprocessors, and shared electronic displays, and to provide data on the DAAS pilot/system interface for the design of future integrated avionics systems. Evaluation results indicate that the DAAS hardware and functional capability meet the program objective. Most pilots felt that the DAAS was representative of the way avionics systems would evolve and that the added capability would improve the safety and practicability of general-aviation operations. Flight-evaluation results compiled from questionnaires are presented, and the results of the debriefings are summarized. General conclusions of the flight evaluation are included.

  7. Lockheed Martin Skunk Works Single Stage to Orbit/Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Lockheed Martin Skunk Works has compiled an Annual Performance Report of the X-33/RLV Program. This report consists of individual reports from all industry team members, as well as NASA team centers. This portion of the report is comprised of a status report of Lockheed Martin's contribution to the program. The following is a summary of the Lockheed Martin Centers involved and work reviewed under their portion of the agreement: (1) Lockheed Martin Skunk Works - Vehicle Development, Operations Development, X-33 and RLV Systems Engineering, Manufacturing, Ground Operations, Reliability, Maintainability/Testability, Supportability, & Special Analysis Team, and X-33 Flight Assurance; (2) Lockheed Martin Technical Operations - Launch Support Systems, Ground Support Equipment, Flight Test Operations, and RLV Operations Development Support; (3) Lockheed Martin Space Operations - TAEM and A/L Guidance and Flight Control Design, Evaluation of Vehicle Configuration, TAEM and A/L Dispersion Analysis, Modeling and Simulations, Frequency Domain Analysis, Verification and Validation Activities, and Ancillary Support; (4) Lockheed Martin Astronautics-Denver - Systems Engineering, X-33 Development; (5) Sanders - A Lockheed Martin Company - Vehicle Health Management Subsystem Progress, GSS Progress; and (6) Lockheed Martin Michoud Space Systems - X-33 Liquid Oxygen (LOX) Tank, Key Challenges, Lessons Learned, X-33/RLV Composite Technology, Reusable Cryogenic Insulation (RCI) and Vehicle Health Monitoring, Main Propulsion Systems (MPS), Structural Testing, X-33 System Integration and Analysis, and Cryogenic Systems Operations.

  8. Mechanical systems: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation of several mechanized systems is presented. The articles are contained in three sections: robotics; industrial mechanical systems, including several on linear and rotary systems; and mechanical control systems, such as brakes and clutches.

  9. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
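    The fusion objective described above can be sketched with a toy greedy heuristic: start with one operator per PE, then merge the endpoints of the heaviest streams while respecting a per-PE CPU budget. This is a hypothetical illustration only (the function name, inputs, and budget parameter are invented here); COLA itself uses hierarchical partitioning built on a minimum-ratio-cut subroutine, not this greedy merge.

    ```python
    # Toy operator-fusion sketch: merging the endpoints of heavy streams into
    # one PE removes that stream's inter-PE processing cost, while a CPU
    # budget per PE stands in for load balancing across hosts.
    def fuse_operators(cpu_cost, streams, pe_budget):
        """cpu_cost: {op: cost}; streams: {(op_a, op_b): traffic}; returns PE list."""
        pes = {op: {op} for op in cpu_cost}      # each operator starts in its own PE
        pe_of = {op: op for op in cpu_cost}      # operator -> representative PE key
        # consider heaviest streams first: fusing them removes the most traffic
        for (a, b), _traffic in sorted(streams.items(), key=lambda kv: -kv[1]):
            pa, pb = pe_of[a], pe_of[b]
            if pa == pb:
                continue                          # already fused
            merged_cost = sum(cpu_cost[o] for o in pes[pa] | pes[pb])
            if merged_cost <= pe_budget:          # respect the load-balance cap
                pes[pa] |= pes.pop(pb)
                for o in pes[pa]:
                    pe_of[o] = pa
        return [sorted(group) for group in pes.values()]
    ```

    On a four-operator chain with heavy A-B and C-D streams and a budget of two operators per PE, the sketch fuses {A, B} and {C, D}, keeping only the light B-C stream crossing PE boundaries.
    
    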

  10. Compilation of Abstracts of Theses Submitted by Candidates for Degrees.

    DTIC Science & Technology

    1986-09-30

    Musitano, J.R., Fin-line Horn Antennas, 118, LCDR, USNR; Muth, L.R., VLSI Tutorials Through the Video-Computer Courseware Implementation..., 119, LT, USN; ...Engineer Allocation Model, 432, CPT, USA; Kiziltan, M., Cognitive Performance Degradation on Sonar Operator and Torpedo Data..., 433, LTJG, Turkish Navy; ...and Computer Engineering, 118; VLSI TUTORIALS THROUGH THE VIDEO-COMPUTER COURSEWARE IMPLEMENTATION SYSTEM, Liesel R. Muth, Lieutenant, United States Navy

  11. Missing, Exploited and Runaway Youth: Strengthening the System. Hearing before the Subcommittee on Select Education of the Committee on Education and the Workforce. House of Representatives, One Hundred Eighth Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and the Workforce.

    These hearings transcripts compile testimony regarding how programs authorized by the Runaway and Homeless Youth Act and the Missing Children's Assistance Act currently operate, in preparation for upcoming reauthorization. Opening statements by U.S. Representatives Peter Hoekstra (Michigan) and Ruben Hinojosa (Texas) underscore the obligation to…

  12. INFORMATION STORAGE AND RETRIEVAL, A STATE-OF-THE-ART REPORT

    DTIC Science & Technology

    The objective of the study was to compile relevant background and interpretive material and prepare a state-of-the-art report which would put the...to-person communications. Section III presents basic IS and R concepts and techniques. It traces the history of traditional librarianship through...the process of communication between the originators and users of information. Section V categorizes the information system operations required to

  13. Kelly during Twins Study Experiment Operations

    NASA Image and Video Library

    2015-09-24

    ISS045E028258 (09/24/2015) --- NASA astronaut Scott Kelly gives himself a flu shot for an ongoing study on the human immune system. The vaccination is part of NASA’s Twins Study, a compilation of multiple investigations that take advantage of a unique opportunity to study identical twin astronauts Scott and Mark Kelly, while Scott spends a year aboard the International Space Station and Mark remains on Earth.

  14. PM-1 NUCLEAR POWER PROGRAM. VOLUME II. PLANT PERFORMANCE STUDIES. Final Periodic Report, September 1, 1962 to December 31, 1962

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1963-04-01

    Data obtained during the performance testing of the PM-1 plant were compiled and evaluated. The plant powers an Air Defense Command radar station located at Sundance, Wyoming, and is required to supply extremely high-quality electrical power (minimum of frequency and voltage fluctuations) even during severe load transients. The data obtained were compiled into the following format: (1) operating requirements; (2) startup requirements; (3) plant as an energy source; (4) plant radiation levels and health physics; (5) plant instrumentation and control; (6) reactor characteristics; (7) primary system characteristics; (8) secondary system characteristics; and (9) malfunction reports. It was concluded from the data that the plant performance in general meets or exceeds specifications. Transient and steady-state electrical fluctuations are well within specified limitations. Heat balance data for both the primary and secondary system agree reasonably well with design predictions. Radiation levels are below those anticipated. Coolant activity in the primary system is approximately at anticipated levels; secondary system coolant activity is negligible. The core life was re-estimated based on as-built core characteristics. A lifetime of 16.6 Mw-yr is predicted. (auth)

  15. Nuclear safety for the space exploration initiative

    NASA Technical Reports Server (NTRS)

    Dix, Terry E.

    1991-01-01

    The results of a study to identify potential hazards arising from nuclear reactor power systems for use on the lunar and Martian surfaces, related safety issues, and resolutions of such issues by system design changes, operating procedures, and other means are presented. All safety aspects of nuclear reactor power systems from prelaunch ground handling to eventual disposal were examined consistent with the level of detail for SP-100 reactor design at the 1988 System Design Review and for launch vehicle and space transport vehicle designs and mission descriptions as defined in the 90-day Space Exploration Initiative (SEI) study. Information from previous aerospace nuclear safety studies was used where appropriate. Safety requirements for the SP-100 space nuclear reactor system were compiled. Mission profiles were defined with emphasis on activities after low earth orbit insertion. Accident scenarios were then qualitatively defined for each mission phase. Safety issues were identified for all mission phases with the aid of simplified event trees. Safety issue resolution approaches of the SP-100 program were compiled. Resolution approaches for those safety issues not covered by the SP-100 program were identified. Additionally, the resolution approaches of the SP-100 program were examined in light of the moon and Mars missions.

  16. A three-dimensional application with the numerical grid generation code: EAGLE (utilizing an externally generated surface)

    NASA Technical Reports Server (NTRS)

    Houston, Johnny L.

    1990-01-01

    Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) is a multiblock grid generation and steady-state flow solver system. This system combines a boundary-conforming surface generation scheme, a composite block structure grid generation scheme, and a multiblock implicit Euler flow solver algorithm. The three codes are intended to be used sequentially, from the definition of the configuration under study to the flow solution about the configuration. EAGLE was specifically designed to aid in the analysis of both freestream and interference flow field configurations. These configurations can comprise single or multiple bodies, ranging from simple axisymmetric airframes to complex aircraft shapes with external weapons. Each body can be arbitrarily shaped, with or without multiple lifting surfaces. Program EAGLE is written to compile and execute efficiently on any CRAY machine with or without Solid State Disk (SSD) devices. Also, the code uses namelist inputs, which are supported by all CRAY machines using the CFT77 FORTRAN compiler. The use of namelist inputs makes it easier for the user to understand the inputs and to operate Program EAGLE. Recently, the code was modified to operate on other computers, especially the Sun SPARC workstation. Several two-dimensional grid configurations were successfully developed using EAGLE. Currently, EAGLE is being used for three-dimensional grid applications.

  17. NACA Conference on Some Problems of Aircraft Operation: A Compilation of the Papers Presented

    NASA Technical Reports Server (NTRS)

    1950-01-01

    This volume contains copies of the technical papers presented at the NACA Conference on Some Problems of Aircraft Operation on October 9 and 10, 1950 at the Lewis Flight Propulsion Laboratory. This conference was attended by members of the aircraft industry and military services. The original presentation and this record are considered as complementary to, rather than as substitutes for, the Committee's system of complete and formal reports. A list of the conferees is included. Contents include four subject areas: Atmospheric Turbulence and its Effect on Aircraft Operation; Some Aspects of Aircraft Safety - Icing, Ditching and Fire; Aerodynamic Considerations for High-Speed Transport Airplanes; Propulsion Considerations for High-Speed Transport Airplanes.

  18. Assessment of resident operative performance using a real-time mobile Web system: preparing for the milestone age.

    PubMed

    Wagner, Justin P; Chen, David C; Donahue, Timothy R; Quach, Chi; Hines, O Joe; Hiatt, Jonathan R; Tillou, Areti

    2014-01-01

    To satisfy trainees' operative competency requirements while improving feedback validity and timeliness using a mobile Web-based platform. The Southern Illinois University Operative Performance Rating Scale (OPRS) was embedded into a website formatted for mobile devices. From March 2013 to February 2014, faculty members were instructed to complete the OPRS form while providing verbal feedback to the operating resident at the conclusion of each procedure. Submitted data were compiled automatically within a secure Web-based spreadsheet. Conventional end-of-rotation performance (CERP) evaluations filed 2006 to 2013 and OPRS performance scores were compared by year of training using serial and independent-samples t tests. The mean CERP scores and OPRS overall resident operative performance scores were directly compared using a linear regression model. OPRS mobile site analytics were reviewed using a Web-based reporting program. Large university-based general surgery residency program. General Surgery faculty used the mobile Web OPRS system to rate resident performance. Residents and the program director reviewed evaluations semiannually. Over the study period, 18 faculty members and 37 residents logged 176 operations using the mobile OPRS system. There were 334 total OPRS website visits. Median time to complete an evaluation was 45 minutes from the end of the operation, and faculty spent an average of 134 seconds on the site to enter 1 assessment. In the 38,506 CERP evaluations reviewed, mean performance scores showed a positive linear trend of 2% change per year of training (p = 0.001). OPRS overall resident operative performance scores showed a significant linear (p = 0.001), quadratic (p = 0.001), and cubic (p = 0.003) trend of change per year of clinical training, reflecting the resident operative experience in our training program. 
Differences between postgraduate year-1 and postgraduate year-5 overall performance scores were greater with the OPRS (mean = 0.96, CI: 0.55-1.38) than with CERP measures (mean = 0.37, CI: 0.34-0.41). Additionally, there were consistent increases in each of the OPRS subcategories. In contrast to CERPs, the OPRS fully satisfies the Accreditation Council for Graduate Medical Education and American Board of Surgery operative assessment requirements. The mobile Web platform provides a convenient interface, broad accessibility, automatic data compilation, and compatibility with common database and statistical software. Our mobile OPRS system encourages candid feedback dialog and generates a comprehensive review of individual and group-wide operative proficiency in real time. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  19. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10075 International Business Machines Corporation. IBM Development System, for the Ada Language CMS/MVS Ada Cross Compiler, Version 2.1.1 IBM 3083 Host and IBM 4381 Target

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System for the Ada Language, CMS/MVS Ada Cross Compiler, Version 2.1.1, Wright-Patterson AFB, IBM...VALIDATION SUMMARY REPORT: Certificate Number: 890420W1.10075 International Business Machines Corporation IBM Development System for the Ada Language CMS...command scripts provided by International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default

  20. Operation of U.S. Geological Survey unmanned digital magnetic observatories

    USGS Publications Warehouse

    Wilson, L.R.

    1990-01-01

    The precision and continuity of data recorded by unmanned digital magnetic observatories depend on the type of data acquisition equipment used and operating procedures employed. Three generations of observatory systems used by the U.S. Geological Survey are described. A table listing the frequency of component failures in the current observatory system has been compiled for a 54-month period of operation. The cause of component failure was generally mechanical or due to lightning. The average percentage data loss per month for 13 observatories operating a combined total of 637 months was 9%. Frequency distributions of data loss intervals show the highest frequency of occurrence to be intervals of less than 1 h. Installation of the third generation system will begin in 1988. The configuration of the third generation observatory system will eliminate most of the mechanical problems, and its components should be less susceptible to lightning. A quasi-absolute coil-proton system will be added to obtain baseline control for component variation data twice daily. Observatory data, diagnostics, and magnetic activity indices will be collected at 12-min intervals via satellite at Golden, Colorado. An improvement in the quality and continuity of data obtained with the new system is expected. © 1990.

  1. State Laws Relating to Michigan Libraries. Reprinted from the Michigan Compiled Laws.

    ERIC Educational Resources Information Center

    Michigan Library, Lansing.

    Prepared by the state librarian of Michigan, this compilation of laws is intended to help librarians, government officials, and citizens familiarize themselves with the many state statutes that affect the operation and development of libraries in Michigan. The document includes excerpts of laws pertaining to public libraries, school libraries,…

  2. Safety and maintenance engineering: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Safety of personnel engaged in the handling of hazardous materials and equipment, protection of equipment from fire, high wind, or careless handling by personnel, and techniques for the maintenance of operating equipment are reported.

  3. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with harsh environments. To achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses a USB camera to capture video of the working state of the harvester's main parts, including the granary, threshing drum, cab, and cutting table. Video data are compressed with the JPEG image compression standard, and the monitoring picture is transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the motivation for the system, then briefly introduces the hardware and software design, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In testing, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.
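    The transfer step in such a system needs some framing so the monitoring center can split the TCP byte stream back into individual compressed frames. A minimal sketch, assuming a simple length-prefixed wire format (the paper does not specify its protocol, and the camera capture and JPEG encoding are omitted here):

    ```python
    # Hypothetical framing for sending already-JPEG-compressed frames over TCP:
    # each frame is preceded by a 4-byte big-endian length so the receiver can
    # re-frame the continuous byte stream.
    import socket
    import struct

    def send_frame(sock, jpeg_bytes):
        # length prefix + payload in one sendall call
        sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

    def recv_frame(sock):
        header = _recv_exact(sock, 4)
        (length,) = struct.unpack(">I", header)
        return _recv_exact(sock, length)

    def _recv_exact(sock, n):
        # TCP recv may return partial data; loop until n bytes arrive
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-frame")
            buf += chunk
        return buf
    ```

    In a real deployment the sender loop would grab a frame from the camera, JPEG-encode it, and call `send_frame` roughly 30 times per second.
    
    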

  4. RPython high-level synthesis

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler reads the configuration parameters and maps an RPython program to VHDL. The VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software by omitting the fetch-decode-execute overhead of General Purpose Processors (GPPs) and introducing more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time consuming; implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes the design methodology and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.
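    The HLL-to-HDL mapping can be illustrated with a toy translator: parse a trivial Python/RPython-style function and emit a VHDL-like combinational assignment. This is a hypothetical sketch only (the function `to_vhdl` and the port typing are invented here); the real compiler handles a far richer language subset and generates complete VHDL designs.

    ```python
    # Toy HLS illustration: a one-expression function becomes a VHDL-style
    # entity with one output port driven by the translated expression.
    import ast

    def to_vhdl(src):
        fn = ast.parse(src).body[0]              # the function definition
        ret = fn.body[0].value                   # assume a single `return a <op> b`
        op = {"Add": "+", "Sub": "-", "Mult": "*"}[type(ret.op).__name__]
        args = [a.arg for a in fn.args.args]
        ports = "; ".join(f"{a} : in signed(31 downto 0)" for a in args)
        return (f"entity {fn.name} is port ({ports}; "
                f"result : out signed(31 downto 0)); end;\n"
                f"architecture rtl of {fn.name} is begin\n"
                f"  result <= {ret.left.id} {op} {ret.right.id};\nend rtl;")
    ```

    For example, `to_vhdl("def adder(a, b):\n    return a + b")` yields an entity whose body assigns `result <= a + b;`, hinting at why independent expressions map naturally to parallel hardware.
    
    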

  5. As-built design specification for the digital derivation of daily and monthly data bases from synoptic observations of temperature and precipitation for the People's Republic of China

    NASA Technical Reports Server (NTRS)

    Jeun, B. H.; Barger, G. L.

    1977-01-01

    A data base of synoptic meteorological information was compiled for the People's Republic of China, as an integral part of the Large Area Crop Inventory Experiment. A system description is provided, including hardware and software specifications, computation algorithms and an evaluation of output validity. Operations are also outlined, with emphasis placed on least squares interpolation.

  6. The Katydid system for compiling KEE applications to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.

  7. Key technology research of HILS based on real-time operating system

    NASA Astrophysics Data System (ADS)

    Wang, Fankai; Lu, Huiming; Liu, Che

    2018-03-01

    To address the long development cycle of traditional simulation and the lack of real-time characteristics in purely digital simulation, this paper designed a HILS (Hardware-In-the-Loop Simulation) system based on the real-time operating platform xPC. The system solves the communication between the HMI and Simulink models through the MATLAB engine interface, and realizes functions such as system setup, offline simulation, and model compiling and downloading. The xPC application interface, together with the integrated TeeChart ActiveX chart component, realizes the monitoring of the real-time target application. Each functional block in the system is encapsulated as a DLL, and data interaction between modules is realized with MySQL database technology. When the HILS system runs, it locates the online xPC target with the Ping command to establish TCP/IP communication between the two machines. The technical effectiveness of the developed system is verified on a typical power-station control system.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grafe, J.L.

    During the past decade many changes have taken place in the natural gas industry, not the least of which is the way information (data) is acquired, moved, compiled, integrated and disseminated within organizations. At El Paso Natural Gas Company (EPNG) the Operations Control Department has been at the center of these changes. The Systems Section within Operations Control has been instrumental in developing the computer programs that acquire and store real-time operational data, and then make it available to not only the Gas Control function, but also to anyone else within the company who might require it and, to a limited degree, any supplier or purchaser of gas utilizing the El Paso pipeline. These computer programs which make up the VISA system are, in effect, the tools that help move the data that flows in the pipeline of information within the company. Their integration into this pipeline process is the topic of this paper.

  9. Ada Compiler Validation Summary Report: Certificate Number: 890420W1. 10074 International Business Machines Corporation, IBM Development System for the Ada Language MVS Ada Compiler, Version 2.1.1 IBM 4381 (Host and Target)

    DTIC Science & Technology

    1989-04-20

    International Business Machines Corporation, IBM Development System...Number: AVF-VSR-261.0789 89-01-26-TEL Ada COMPILER VALIDATION SUMMARY REPORT: Certificate Number: 890420W1.10074 International Business Machines...computer. The compiler was tested using command scripts provided by International Business Machines Corporation and reviewed by the validation team. The

  10. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms, with support to migrate them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain, and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and the migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  11. Verified compilation of Concurrent Managed Languages

    DTIC Science & Technology

    2017-11-01

    designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these...ideas to prove the correctness of a state-of-the-art concurrent garbage collector. Program verification, compiler design...Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs. A

  12. Scoops3D: software to analyze 3D slope stability throughout a digital landscape

    USGS Publications Warehouse

    Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.

    2015-01-01

    The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. 
Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages install the compiled Scoops3D program, the GUI (Scoops3D-i), and associated documentation. Several Scoops3D examples, including all input and output files, are available as well. The source code is written in the Fortran 90 language and can be compiled to run on any computer operating system with an appropriate compiler.
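    As background for the limit-equilibrium analysis mentioned above, the classic two-dimensional form of Bishop's simplified method computes the factor of safety F over slices of base width b_i, weight W_i, base inclination α_i, base pore pressure u_i, and effective strength parameters c'_i and φ'_i; Scoops3D applies a 3D method-of-columns generalization, which is not reproduced here:

    ```latex
    F = \frac{\displaystyle\sum_i \frac{c'_i\, b_i + (W_i - u_i b_i)\tan\phi'_i}{m_{\alpha,i}}}
             {\displaystyle\sum_i W_i \sin\alpha_i},
    \qquad
    m_{\alpha,i} = \cos\alpha_i\left(1 + \frac{\tan\alpha_i\,\tan\phi'_i}{F}\right)
    ```

    Because F appears on both sides, the equation is solved iteratively, typically starting from an Ordinary (Fellenius) method estimate.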

  13. Ada/POSIX binding: A focused Ada investigation

    NASA Technical Reports Server (NTRS)

    Legrand, Sue

    1988-01-01

    NASA is seeking an operating system interface definition (OSID) for the Space Station Program (SSP) in order to take advantage of the commercial off-the-shelf (COTS) products available today and the many that are expected in the future. NASA would also like to avoid reliance on any one source for operating systems, information systems, communication systems, or instruction set architectures. The use of the Portable Operating System Interface for Computer Environments (POSIX) is examined as a possible solution to this problem. Since Ada is already the language of choice for SSP, the question of an Ada/POSIX binding is addressed. The intent of the binding is to provide access to the POSIX standard operating system (OS) interface and environment, by which portability of Ada applications will be supported at the source code level. A guiding principle of Ada/POSIX binding development is clear conformance of the Ada interface with the functional definition of POSIX. The interface is intended to be used by both application developers and system implementors. The objective is to provide a standard such that a strictly conforming application source program can be compiled to execute on any conforming implementation. Special emphasis is placed on first providing those functions and facilities that are needed in a wide variety of commercial applications.

  14. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

    Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.
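    The array-based computation loop mentioned above can be sketched in Python as an analogy: each iteration is independent of the others, which is precisely the property an auto-parallelizing compiler must verify before inserting an OpenMP pragma. This is an illustration only; the ROSE-based tool operates on C/C++ source, not Python, and the function names here are invented.

    ```python
    # A dependence-free per-element loop and its parallel counterpart. Because
    # kernel(x) touches no shared state, the iterations can be distributed
    # across workers without changing the result, mirroring what
    # `#pragma omp parallel for` does for the equivalent C++ loop.
    from concurrent.futures import ThreadPoolExecutor

    def kernel(x):
        return 2 * x * x + 3          # independent per-element work

    def compute_serial(data):
        return [kernel(x) for x in data]

    def compute_parallel(data, workers=4):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(kernel, data))
    ```

    A thread pool is used here for brevity; real speedup for CPU-bound Python would require processes, and in the paper's setting comes from OpenMP threads in the compiled C++ code.
    
    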

  15. LASL benchmark performance 1978. [CDC STAR-100, 6600, 7600, Cyber 73, and CRAY-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKnight, A.L.

    1979-08-01

    This report presents the results of running several benchmark programs on a CDC STAR-100, a Cray Research CRAY-1, a CDC 6600, a CDC 7600, and a CDC Cyber 73. The benchmark effort included CRAY-1's at several installations running different operating systems and compilers. This benchmark is part of an ongoing program at Los Alamos Scientific Laboratory to collect performance data and monitor the development trend of supercomputers. 3 tables.

  16. Compilation of 1993 Annual Reports of the Navy ELF Communications System Ecological Monitoring Program

    DTIC Science & Technology

    1994-04-01

    variation in non-treatment factors that may affect growth or health, such as soil, stand conditions, and background and treatment EM field levels. The time...diameter growth residuals were much greater than expected given existing climatic conditions. In 1992, when the antenna returned to full power operation...growing seasons. If an environmental factor which is not accounted for in the growth model significantly impacts seasonal height growth, then the observed

  17. ADA Implementation Issues as Discovered through a Literature Survey of Applications Outside the United States

    DTIC Science & Technology

    1992-03-01

    compile time, ensuring that operations conducted are appropriate for the object type. Each implementation requires a database known as the program...Finnish bank being developed by Nokia • Oil drilling control system managed by Sedco-Forex • Vigile - an industrial installation supervisor project by...user interface and Oracle database backend control. The software is being developed in Ada under DOD-STD-2167 under OS/2. BELGIUM BATS S.A. Project title

  18. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.

  19. NASA GRC UAS Project - Communications Modeling and Simulation Development Status

    NASA Technical Reports Server (NTRS)

    Apaza, Rafael; Bretmersky, Steven; Dailey, Justin; Satapathy, Goutam; Ditzenberger, David; Ye, Chris; Kubat, Greg; Chevalier, Christine; Nguyen, Thanh

    2014-01-01

    The integration of Unmanned Aircraft Systems (UAS) in the National Airspace represents new operational concepts required in civil aviation. These new concepts are evolving as the nation moves toward the Next Generation Air Transportation System (NextGen) under the leadership of the Joint Planning and Development Office (JPDO), and through ongoing work by the Federal Aviation Administration (FAA). The desire and ability to fly UAS in the National Air Space (NAS) in the near term has increased dramatically, and this multi-agency effort to develop and implement a national plan to successfully address the challenges of UAS access to the NAS in a safe and timely manner is well underway. As part of the effort to integrate UAS in the National Airspace, NASA Glenn Research Center is currently involved with providing research into Communications systems and Communication system operations in order to assist with developing requirements for this implementation. In order to provide data and information regarding communication systems performance that will be necessary, NASA GRC is tasked with developing and executing plans for simulations of candidate future UAS command and control communications, in line with architectures and communications technologies being developed and or proposed by NASA and relevant aviation organizations (in particular, RTCA SC-203). The simulations and related analyses will provide insight into the ability of proposed communications technologies and system architectures to enable safe operation of UAS, meeting UAS in the NAS project goals (including performance requirements, scalability, and interoperability), and ultimately leading to a determination of the ability of NextGen communication systems to accommodate UAS. This presentation, compiled by the NASA GRC Modeling and Simulation team, will provide an update to this ongoing effort at NASA GRC as follow-up to the overview of the planned simulation effort presented at ICNS in 2013. 
The objective of this presentation is to describe the progress made in developing both a NAS-wide simulation architecture application and the detailed radio-communication system models for this research, and to present interim data and information compiled in the process of developing these simulation capabilities to date.

  20. Remotely Operated Aircraft (ROA) Impact on the National Airspace System (NAS) Work Package, 2005: Composite Report on FAA Flight Plan and Operational Evaluation Plan. Version 7.0

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The purpose of this document is to present the findings that resulted from a high-level analysis and evaluation of the following documents: (1) The OEP (Operational Evolution Plan) Version 7 -- a 10-year plan for operational improvements to increase capacity and efficiency in U.S. air travel and transport and other use of domestic airspace. The OEP is the FAA commitment to operational improvements. It is outcome driven, with clear lines of accountability within FAA organizations. The OEP concentrates on operational solutions and integrates safety, certification, procedures, staffing, equipment, avionics and research; (2) The Draft Flight Plan 2006 through 2010 -- a multi-year strategic effort, setting a course for the FAA through 2010, to provide the safest and most efficient air transportation system in the world; (3) The NAS System Architecture Version 5 -- a blueprint for modernizing the NAS and improving NAS services and capabilities through the year 2015; and (4) The NAS-SR-1000 System Requirements Specification (NASSRS) -- a compilation of requirements which describe the operational capabilities for the NAS. The analysis is particularly focused on examining the documents for relevance to existing and/or planned future UAV operations. The evaluation specifically focuses on potential factors that could materially affect the development of a commercial ROA industry, such as: (1) Design limitations of the CNS/ATM system, (2) Human limitations. The information presented was taken from program specifications or program office lead personnel.

  1. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. 
The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices for the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded parsers, the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  2. Electronic control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation of technical R and D information on circuits and modular subassemblies is presented as a part of a technology utilization program. Fundamental design principles and applications are given. Electronic control circuits discussed include: anti-noise circuit; ground protection device for bioinstrumentation; temperature compensation for operational amplifiers; hybrid gatling capacitor; automatic signal range control; integrated clock-switching control; and precision voltage tolerance detector.

  3. Electronic circuits for communications systems: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The compilation of electronic circuits for communications systems is divided into thirteen basic categories, each representing an area of circuit design and application. The compilation items are moderately complex and, as such, would appeal to the applications engineer. However, the rationale for the selection criteria was tailored so that the circuits would reflect fundamental design principles and applications, with an additional requirement for simplicity whenever possible.

  4. Compilation of 1987 Annual Reports of the Navy ELF (Extremely Low Frequency) Communications System Ecological Monitoring Program. Volume 2

    DTIC Science & Technology

    1988-08-01

    such as those in the vicinity of the ELF antenna because they are pollinators of flowering plants, and are therefore important to the reproductive...Compilation of 1987 Annual Reports of the Navy ELF Communications System Ecological Monitoring Program, Volume 2 of 3 Volumes: TABS D-G...Security Classification) Compilation of 1987 Annual Reports of the Navy ELF Communications System Ecological Monitoring Program (Volume 2 of 3 Volumes

  5. Ada (Tradename) Compiler Validation Summary Report. International Business Machines Corporation. IBM Development System for the Ada Language for MVS, Version 1.0. IBM 4381 (IBM System/370) under MVS.

    DTIC Science & Technology

    1986-05-05

    AVF-VSR-36.0187 Ada COMPILER VALIDATION SUMMARY REPORT: International Business Machines Corporation IBM Development System for the Ada Language for...withdrawn from ACVC Version 1.7 were not run. The compiler was tested using command scripts provided by International Business Machines Corporation. These...APPENDIX A COMPLIANCE STATEMENT International Business Machines Corporation has submitted the following compliance statement concerning the IBM

  6. The early indicators of financial failure: a study of bankrupt and solvent health systems.

    PubMed

    Coyne, Joseph S; Singh, Sher G

    2008-01-01

    This article presents a series of pertinent predictors of financial failure based on analysis of solvent and bankrupt health systems to identify which financial measures show the clearest distinction between success and failure. Early warning signals are evident from the longitudinal analysis as early as five years before bankruptcy. The data source includes seven years of annual statements filed with the Securities and Exchange Commission by 13 health systems before they filed bankruptcy. Comparative data were compiled from five solvent health systems for the same seven-year period. Seven financial solvency ratios are included in this study, including four cash liquidity measures, two leverage measures, and one efficiency measure. The results show distinct financial trends between solvent and bankrupt health systems, in particular for the operating-cash-flow-related measures, namely Ratio 1: Operating Cash Flow Percentage Change, from prior to current period; Ratio 2: Operating Cash Flow to Net Revenues; and Ratio 4: Cash Flow to Total Liabilities, indicating sensitivity in the hospital industry to cash flow management. The high dependence on credit from third-party payers is cited as a reason for this; thus, there is a great need for cash to fund operations. Five managerial policy implications are provided to help health system managers avoid financial solvency problems in the future.

  7. SCHOOL PLANT MANAGEMENT FOR SCHOOL ADMINISTRATORS.

    ERIC Educational Resources Information Center

    ENGMAN, JOHN DAVID

    THIS REPORT IS A COMPILATION OF STUDIES ON SIGNIFICANT ASPECTS IN SCHOOL PLANNING AND OPERATION. A RELATIONSHIP IS SHOWN BETWEEN CURRICULUM, PERSONNEL AND AUXILIARY SERVICES IN EDUCATIONAL PROGRAM OPERATIONS. THE REPORT INCLUDES PLANNING, MANAGEMENT AND OPERATION OF SUCH AREAS AS--NONINSTRUCTIONAL PERSONNEL POLICIES, CUSTODIAL SERVICES,…

  8. Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, M

    2006-12-12

    ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.

  9. Proceedings of the NASA Conference on Space Telerobotics, volume 5

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor); Seraji, Homayoun (Editor)

    1989-01-01

    Papers presented at the NASA Conference on Space Telerobotics are compiled. The theme of the conference was man-machine collaboration in space. The conference provided a forum for researchers and engineers to exchange ideas on the research and development required for the application of telerobotics technology to the space systems planned for the 1990's and beyond. Volume 5 contains papers related to the following subject areas: robot arm modeling and control, special topics in telerobotics, telerobotic space operations, manipulator control, flight experiment concepts, manipulator coordination, issues in artificial intelligence systems, and research activities at the Johnson Space Center.

  10. A Macro Analysis of DoD Logistics Systems. Volume 2. Structure and Analysis of the Air Force Logistics System

    DTIC Science & Technology

    1977-09-01

    performance measures discussed earlier is the "Engine Actuarial Data Summary" (EADS) (AFLC Form 992), compiled from D024F actuarial data. EADS is...Engine Actuarial Data Summary. ENORS - Engine Not Operationally Ready, Supply. EOQ Items - Economic Order Quantity Items; i.e., expense-type items, not

  11. SABrE User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.A.

    In a computing landscape which has a plethora of different hardware architectures and supporting software systems ranging from compilers to operating systems, there is an obvious and strong need for a philosophy of software development that lends itself to the design and construction of portable code systems. The current efforts to standardize software bear witness to this need. SABrE is an effort to implement a software development environment which is itself portable and promotes the design and construction of portable applications. SABrE does not include such important tools as editors and compilers. Well-built tools of that kind are readily available across virtually all computer platforms. The areas that SABrE addresses are at a higher level, involving issues such as data portability, portable inter-process communication, and graphics. These blocks of functionality have particular significance to the kind of code development done at LLNL. That is partly why the general computing community has not supplied us with these tools already. This is another key feature of the software development environments which we must recognize. The general computing community cannot and should not be expected to produce all of the tools which we require.

  13. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757

  15. Compiler-assisted static checkpoint insertion

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1992-01-01

    This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides for stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
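    The polling idea described above can be sketched as follows. This is a minimal illustration of the concept, not the paper's GNU CC implementation: the compiler inserts `poll()` calls at fixed program points (e.g., loop back-edges), and a checkpoint is taken only when the target interval has elapsed at such a point, which keeps placements reproducible while intervals stay near the target.

```cpp
#include <ctime>

// Hypothetical runtime support for compiler-inserted polling.
struct Checkpointer {
    std::time_t last;      // time of the last checkpoint
    std::time_t interval;  // desired checkpoint interval, in seconds
    int taken = 0;         // number of checkpoints taken so far

    explicit Checkpointer(std::time_t iv)
        : last(std::time(nullptr)), interval(iv) {}

    // Called at compiler-chosen program points (e.g., loop back-edges).
    // Because checkpoints happen only at these points, their locations
    // are reproducible across runs.
    void poll() {
        std::time_t now = std::time(nullptr);
        if (now - last >= interval) {
            take_checkpoint();
            last = now;
        }
    }

    void take_checkpoint() {
        ++taken;  // a real implementation would save process state here
    }
};
```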

  16. Ada (Tradename) Compiler Validation Summary Report. Harris Corporation. Harris Ada Compiler, Version 1.0. Harris HCX-7.

    DTIC Science & Technology

    1986-06-12

    Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.0, Harris HCX-7. Type of report and period covered: 12 JUN 1986 to 12 JUN 1987. Performing organization: Wright-Patterson.

  17. MULTIPROCESSOR AND DISTRIBUTED PROCESSING BIBLIOGRAPHIC DATA BASE SOFTWARE SYSTEM

    NASA Technical Reports Server (NTRS)

    Miya, E. N.

    1994-01-01

    Multiprocessors and distributed processing are undergoing increased scientific scrutiny for many reasons. It is more and more difficult to keep track of the existing research in these fields. This package consists of a large machine-readable bibliographic data base which, in addition to the usual keyword searches, can be used for producing citations, indexes, and cross-references. The data base is compiled from smaller existing multiprocessing bibliographies, and tables of contents from journals and significant conferences. There are approximately 4,000 entries covering topics such as parallel and vector processing, networks, supercomputers, fault-tolerant computers, and cellular automata. Each entry is represented by 21 fields including keywords, author, referencing book or journal title, volume and page number, and date and city of publication. The data base contains UNIX 'refer' formatted ASCII data and can be implemented on any computer running under the UNIX operating system. The data base requires approximately one megabyte of secondary storage. The documentation for this program is included with the distribution tape, although it can be purchased for the price below. This bibliography was compiled in 1985 and updated in 1988.

  18. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program developing system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
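    The master-source technique above can be illustrated with a toy pre-processor. MAX's actual directive syntax is not given in the abstract, so the `*IF`/`*END` convention below is invented purely to show how one master file yields different compilable instances per version, machine, or operating system:

```cpp
#include <sstream>
#include <string>

// Toy master-source pre-processor (illustrative only; not MAX's syntax).
// Lines between "*IF <tag>" and "*END" are emitted only when <tag>
// matches the requested version; all other lines pass through.
std::string instantiate(const std::string& master, const std::string& version) {
    std::istringstream in(master);
    std::ostringstream out;
    std::string line;
    bool keep = true;
    while (std::getline(in, line)) {
        if (line.rfind("*IF ", 0) == 0)
            keep = (line.substr(4) == version);   // open a conditional block
        else if (line == "*END")
            keep = true;                          // close the block
        else if (keep)
            out << line << '\n';                  // emit an ordinary line
    }
    return out.str();
}
```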

  19. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
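    SSR_pipeline itself is written in Python; as a language-neutral sketch of the core idea in step (3), microsatellite detection reduces to finding tandem copies of a short motif. The simplified check below is my own illustration, not the suite's algorithm:

```cpp
#include <string>
#include <cstddef>

// Returns true if `seq` contains at least `min_copies` tandem copies of
// some motif of length k. A position extends the run when it matches the
// character one motif-length earlier (illustrative simplification; real
// SSR finders also handle motif normalization and imperfect repeats).
bool has_ssr(const std::string& seq, std::size_t k, std::size_t min_copies) {
    if (seq.size() < k * min_copies) return false;
    for (std::size_t start = 0; start + k * min_copies <= seq.size(); ++start) {
        std::size_t matched = k;      // characters in the run (first copy)
        std::size_t i = start + k;
        while (i < seq.size() && seq[i] == seq[i - k]) { ++matched; ++i; }
        if (matched / k >= min_copies) return true;
    }
    return false;
}
```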

  20. Traffic safety facts 1997 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    1998-11-01

    In this annual report, Traffic Safety Facts 1997: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  1. Traffic safety facts 2007 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2007-01-01

    In this annual report, Traffic Safety Facts 2007: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  2. Traffic safety facts 2008 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2008-01-01

    In this annual report, Traffic Safety Facts 2008: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  3. Traffic safety facts 2009 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2009-01-01

    In this annual report, Traffic Safety Facts 2009: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  4. Effective Vectorization with OpenMP 4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham

    This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C++/C and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
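    The directive described above can be shown on a classic vectorizable kernel. This is a hedged sketch of the OpenMP `simd` construct on a SAXPY loop (the function is illustrative, not from the paper); when OpenMP is not enabled, the pragma is ignored and the loop runs scalar with identical results:

```cpp
#include <vector>
#include <cstddef>

// y = a*x + y. The `omp simd` pragma asserts the iterations are safe to
// execute as SIMD lanes, letting the compiler vectorize without having
// to prove independence itself.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    #pragma omp simd
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}
```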

  5. The LHEA PDP 11/70 graphics processing facility users guide

    NASA Technical Reports Server (NTRS)

    1978-01-01

    This guide compiles the information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file manipulation utilities, as well as operational procedures, is included. The inexperienced user is taken through the process of creating, editing, compiling, task building, and debugging a FORTRAN program. Documentation on additional software is also included.

  6. The Impact of IEEE-1076 on VHDL (Hardware Description Language)

    DTIC Science & Technology

    1988-12-01

    Portability simply refers to how machine-independent the language is defined. The Efficiency criterion looks at how fast a program compiles and how ...if a programming language is good. The evaluation was done to determine if IEEE Standard 1076-1987 was indeed a better version of VHDL than its...must be UNIX-based and be able to use the tools that are common to that operating system such as lex [Lesk78], yacc [John78] and the programming

  7. Energy Conversion Alternatives Study (ECAS), Westinghouse phase 1. Volume 3: Combustors, furnaces and low-BTU gasifiers. [used in coal gasification and coal liquefaction (equipment specifications)]

    NASA Technical Reports Server (NTRS)

    Hamm, J. R.

    1976-01-01

    Information is presented on the design, performance, operating characteristics, cost, and development status of coal preparation equipment, combustion equipment, furnaces, low-Btu gasification processes, low-temperature carbonization processes, desulfurization processes, and pollution particulate removal equipment. The information was compiled for use by the various cycle concept leaders in determining the performance, capital costs, energy costs, and natural resource requirements of each of their system configurations.

  8. csa2sac—A program for computing discharge from continuous slope-area stage data

    USGS Publications Warehouse

    Wiele, Stephen M.

    2015-12-17

    In addition to csa2sac, the SAC7 program is required. It is the same as the original SAC program, except that it is compiled for 64-bit Windows operating systems and has a slightly different command line input. It is available online (http://water.usgs.gov/software/SAC/) as part of the SACGUI installation program. The program name, “SAC7.exe,” is coded into csa2sac, and must not be changed.

  9. Skylab checkout operations. [from multiple docking adapter contractor viewpoint]

    NASA Technical Reports Server (NTRS)

    Timmons, K. P.

    1973-01-01

    The Skylab Program at Kennedy Space Center presented many opportunities for interesting and profound test and checkout experience. It also offered a compilation of challenges and promises for the Center and for the contractors responsible for the various modules making up Skylab. It is very probable that the various contractors had common experiences during the module and combined systems tests, but this paper will discuss those experiences from the viewpoint of the Multiple Docking Adapter contractor. The experience will consider personnel, procedures, and hardware.

  10. Refining, revising, augmenting, compiling and developing computer assisted instruction K-12 aerospace materials for implementation in NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1988-01-01

    The NASA Spacelink is an electronic information service operated by the Marshall Space Flight Center. The Spacelink contains extensive NASA news and educational resources that can be accessed by a computer and modem. Updates and information are provided on: current NASA news; aeronautics; space exploration: before the Shuttle; space exploration: the Shuttle and beyond; NASA installations; NASA educational services; materials for classroom use; and space program spinoffs.

  11. Space Tug avionics definition study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A top down approach was used to identify, compile, and develop avionics functional requirements for all flight and ground operational phases. Such requirements as safety- and mission-critical functions and criteria, minimum redundancy levels, software memory sizing, power for the tug and payload, and data transfer between payload, tug, shuttle, and ground were established. Those functional requirements that related to avionics support of a particular function were compiled together under that support function heading. This unique approach provided both organizational efficiency and traceability back to the applicable operational phase and event. Each functional requirement was then allocated to the appropriate subsystems and its particular characteristics were quantified.

  12. Modular implementation of a digital hardware design automation system

    NASA Astrophysics Data System (ADS)

    Masud, M.

    An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) Upgrading of AHPL to make it more universally applicable; (2) Implementation of a compiler for the language; and (3) Illustration of how the compiler may be used to support several phases of design activities. Several new features were added to AHPL. These include: application-dependent parameters, multiple clocks, asynchronous results, functional registers, and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two databases from the AHPL description of a circuit. The first is a tabular representation of the circuit, and the second is a detailed interconnection linked list. The two databases provide a means to interface the compiler to application-dependent CAD systems.

  13. Bearing tester data compilation, analysis and reporting and bearing math modeling, volume 1

    NASA Technical Reports Server (NTRS)

    Marshall, D. D.; Montgomery, E. E.; New, L. S.; Stone, M. A.; Tiller, B. K.

    1984-01-01

    Thermal and mechanical models of high speed angular contact ball bearings operating in LOX and LN2 were developed and verified with limited test data in an effort to further understand the parameters that determine or affect the SSME turbopump bearing operational characteristics and service life. The SHABERTH bearing analysis program, which was adapted to evaluate shaft bearing systems in cryogenics, is not capable of accommodating varying thermal properties and two-phase flow. A bearing model with this capability was developed using the SINDA thermal analyzer. Iteration between the SHABERTH and the SINDA models enabled the establishment of preliminary bounds for stable operation in LN2. These limits were established in terms of fluid flow, fluid inlet temperature, and axial load for a shaft speed of 30,000 RPM.

  14. Reactor shutdown experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cletcher, J.W.

    1995-10-01

    This is a regular report of summary statistics on recent reactor shutdown experience. The information includes both the number of events and rates of occurrence. It was compiled from data about operating events that were entered into the SCSS data system by the Nuclear Operations Analysis Center at the Oak Ridge National Laboratory and covers the six-month period of July 1 to December 31, 1994. Cumulative information, starting from May 1, 1994, is also reported. Updates on shutdown events included in earlier reports are excluded. Information on shutdowns as a function of reactor power at the time of the shutdown is given for both BWR and PWR reactors. Data are also broken down by shutdown type and reactor age.

  15. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  16. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), Athlon MP, and Athlon XP (with the "Palomino" core) systems as well as Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The SPECfp2000 floating-point marks show trends similar to the GAUSSIAN 98 results.

  17. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications; most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBSs developed in KEE into Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has yielded insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in the early work, and inspiration for future system development.

  18. GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brent, G. A.

    1978-01-01

    A development study sponsored by NASA was completed in July 1977 which proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers compiled and displayed in-flight information for the pilot. A processor architecture called the Team Architecture was proposed. This is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/I simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs, and listings.

  19. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    NASA Technical Reports Server (NTRS)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Sometimes process control schedules require changes frequently, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without the operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.

  20. Development of a measurement and control system for a 10 kW@20 K refrigerator based on Siemens PLC S7-300

    NASA Astrophysics Data System (ADS)

    Li, J.; Liu, L. Q.; Liu, T.; Xu, X. D.; Dong, B.; Lu, W. H.; Pan, W.; Wu, J. H.; Xiong, L. Y.

    2017-02-01

    A 10 kW@20 K refrigerator has been established by the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. A measurement and control system based on Siemens PLC S7-300 for this 10 kW@20 K refrigerator is developed. According to the detailed measurement requirements, proper sensors and transmitters are adopted. Siemens S7-300 PLC CPU315-2 PN/DP operates as a master station. Two sets of ET200M DP remote expand I/O, one power meter, two compressors and one vacuum gauge operate as slave stations. Profibus-DP field communication and Modbus communication are used between the master station and the slave stations in this control system. The upper computer HMI (Human Machine Interface) is compiled using Siemens configuration software WinCC V7.0. The upper computer communicates with PLC by means of industrial Ethernet. After commissioning, this refrigerator has been operating with a 10 kW of cooling power at 20 K for more than 72 hours.

  1. A parallel data management system for large-scale NASA datasets

    NASA Technical Reports Server (NTRS)

    Srivastava, Jaideep

    1993-01-01

    The past decade has experienced a phenomenal growth in the amount of data and resultant information generated by NASA's operations and research projects. A key application is the reprocessing problem which has been identified to require data management capabilities beyond those available today (PRAT93). The Intelligent Information Fusion (IIF) system (ROEL91) is an ongoing NASA project which has similar requirements. Deriving our understanding of NASA's future data management needs based on the above, this paper describes an approach to using parallel computer systems (processor and I/O architectures) to develop an efficient parallel database management system to address the needs. Specifically, we propose to investigate issues in low-level record organizations and management, complex query processing, and query compilation and scheduling.

  2. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  3. Experiences with hypercube operating system instrumentation

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Rudolph, David C.

    1989-01-01

    The difficulty of conceptualizing the interactions among a large number of processors makes it hard both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed necessary, for large-scale parallel computers; the enormous volume of performance data mandates visual display.

  4. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  5. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  6. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  7. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  8. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records...

  9. Integrated Software Health Management for Aircraft GN and C

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mengshoel, Ole

    2011-01-01

    Modern aircraft rely heavily on dependable operation of many safety-critical software components. Despite careful design, verification and validation (V&V), on-board software can fail with disastrous consequences if it encounters problematic software/hardware interaction or must operate in an unexpected environment. We are using a Bayesian approach to monitor the software and its behavior during operation and provide up-to-date information about the health of the software and its components. The powerful reasoning mechanism provided by our model-based Bayesian approach makes reliable diagnosis of the root causes possible and minimizes the number of false alarms. Compilation of the Bayesian model into compact arithmetic circuits makes software health management (SWHM) feasible even on platforms with limited CPU power. We show initial results of SWHM on a small simulator of an embedded aircraft software system, where software and sensor faults can be injected.

  10. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on Linux clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  11. Global emissions of PM10 and PM2.5 from agricultural tillage and harvesting operations

    NASA Astrophysics Data System (ADS)

    Chen, W.; Tong, D.; Lee, P.

    2014-12-01

    Soil particles emitted during agricultural activities are a major recurring source contributing to atmospheric aerosol loading. Emission inventories of agricultural dust have been compiled in several regions. These inventories, compiled from historic survey and activity data, may not reflect current emission strengths, which introduces large uncertainties when they are used to drive chemical transport models. In addition, no global inventory of agricultural dust emissions exists to support global air quality and climate modeling. In this study, we present our recent efforts to develop a global emission inventory of PM10 and PM2.5 released from field tillage and harvesting operations using an emission-factor-based approach. Both major crops (e.g., wheat and corn) and forage production were considered. For each crop or forage, information on crop area, crop calendar, farming activities, and emission factors of specified operations was assembled. The key issue of inventory compilation is the choice of suitable emission factors for specified operations over different parts of the world. Through careful review of published emission factors, we modified the traditional emission-factor model by multiplying by correction coefficients to reflect the relationship between emission factors, soil texture, and climate conditions. Then, the temporal (i.e., monthly) and spatial (i.e., 0.5° resolution) distributions of agricultural PM10 and PM2.5 emissions from each and all operations were estimated for each crop or forage. Finally, the emissions from individual crops were aggregated to assemble a global inventory from agricultural operations. The inventory was verified by comparing the new data with existing agricultural fugitive dust inventories in North America and Europe, as well as satellite observations of anthropogenic agricultural dust emissions.

  12. Efficient processing of two-dimensional arrays with C or C++

    USGS Publications Warehouse

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended.
    Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency

  13. A Markovian state-space framework for integrating flexibility into space system design decisions

    NASA Astrophysics Data System (ADS)

    Lafleur, Jarret M.

    The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to consider a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. 
Overall, this thesis unifies state-centric concepts of flexibility from economics and engineering literature with sequential decision-making techniques from operations research. The end objective of this thesis’ framework and its supporting tools is to enable selection of the next-generation space systems today, tailored to decision-maker budget and performance preferences, that will be best able to adapt and perform in a future of changing environments and requirements. Following extensive theoretical development, the framework and its steps are applied to space system planning problems of (1) DARPA-motivated multiple- or distributed-payload satellite selection and (2) NASA human space exploration architecture selection.

  14. PCAL: Language Support for Proof-Carrying Authorization Systems

    DTIC Science & Technology

    2009-10-16

    behavior of a compiled program is the same as that of the source program (Theorem 4.1) and that successfully compiled programs cannot fail due to access...semantics, formalize our compilation procedure and show that it preserves the behavior of programs. For simplicity of presentation, we abstract various...H;L ⊢ s (6) if γ :: H;L ⊢ s then H;L ⊢ s ↘ γ′ for some γ′. We can now show that compilation preserves the behavior of programs. More precisely, if

  15. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  16. Bearing tester data compilation, analysis, and reporting and bearing math modeling

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The Shaberth bearing analysis computer program was developed for the analysis of jet engine shaft/bearing systems operating above room temperature with normal hydrocarbon lubricants. It can also be used to evaluate shaft/bearing systems operating in cryogenic environments. Effects such as fluid drag, radial temperature gradients, outer race misalignments, and clearance changes were simulated and evaluated. In addition, the effects of speed and preload on bearing radial stiffness were evaluated. The Shaberth program was also used to provide contact stresses from which contact geometry was calculated to support other analyses, such as the determination of cryogenic fluid film thickness in the contacts and the evaluation of surface and subsurface stresses necessary for bearing failure evaluation. The program was a vital tool for the thermal analysis of the bearing in that it provides the heat generation rates at the rolling element/race contacts for input into a thermal model of the bearing/shaft assembly.

  17. Shuttle ground operations efficiencies/technologies study. Volume 4: Preliminary Issues Database (PIDB) catalog

    NASA Technical Reports Server (NTRS)

    Scholz, A. L.; Hart, M. T.; Lowry, D. J.

    1987-01-01

    The Preliminary Issues Database (PIDB) was assembled very early in the study as one of the fundamental tools to be used throughout the study. Data were acquired from a variety of sources and compiled so that they could be easily sorted according to a number of different analytical objectives. The system was computerized to significantly expedite sorting and improve usability. The information contained in the PIDB is summarized, and the reader is provided with the capability to manually locate items of interest.

  18. Verification approach for the Shuttle/Payload Contamination Evaluation computer program - Spacelab induced environment

    NASA Technical Reports Server (NTRS)

    Bareiss, L. E.

    1978-01-01

    The paper presents a compilation of the results of a systems level Shuttle/payload contamination analysis and related computer modeling activities. The current technical assessment of the contamination problems anticipated during the Spacelab program is discussed, and recommendations are presented on contamination abatement designs and operational procedures based on experience gained in the field of contamination analysis and assessment, dating back to the pre-Skylab era. The ultimate test of the Shuttle/Payload Contamination Evaluation program will be through comparison of predictions with measured levels of contamination during actual flight.

  19. Science and Technology Review October/November 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearinger, J P

    2009-08-21

    This month's issue has the following articles: (1) Award-Winning Collaborations Provide Solutions--Commentary by Steven D. Liedle; (2) Light-Speed Spectral Analysis of a Laser Pulse--An optical device inspects and stops potentially damaging laser pulses; (3) Capturing Waveforms in a Quadrillionth of a Second--The femtoscope, a time microscope, improves the temporal resolution and dynamic range of conventional recording instruments; (4) Gamma-Ray Spectroscopy in the Palm of Your Hand--A miniature gamma-ray spectrometer provides increased resolution at a reduced cost; (5) Building Fusion Targets with Precision Robotics--A robotic system assembles tiny fusion targets with nanometer precision; (6) ROSE: Making Compiler Technology More Accessible--An open-source software infrastructure makes powerful compiler techniques available to all programmers; (7) Restoring Sight to the Blind with an Artificial Retina--A retinal prosthesis could restore vision to people suffering from eye diseases; (8) Eradicating the Aftermath of War--A remotely operated system precisely locates buried land mines; (9) Compact Alignment for Diagnostic Laser Beams--A smaller, less expensive device aligns diagnostic laser beams onto targets; and (10) Securing Radiological Sources in Africa--Livermore and other national laboratories are helping African countries secure their nuclear materials.

  20. The HACMS program: using formal methods to eliminate exploitable bugs.

    PubMed

    Fisher, Kathleen; Launchbury, John; Richards, Raymond

    2017-10-13

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Authors.

  1. A Compiler and Run-time System for Network Programming Languages

    DTIC Science & Technology

    2012-01-01

    A Compiler and Run-time System for Network Programming Languages. Christopher Monsanto, Princeton University; Nate Foster, Cornell University; Rob...Foster, R. Harrison, M. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A network programming language. In ICFP, Sep 2011. [10] A

  2. Ada Compiler Validation Summary Report. Certificate Number: 890118W1. 10017 Harris Corporation, Computer Systems Division Harris Ada, Version 5.0 Harris HCX-9 Host and Harris NH-3800 Target

    DTIC Science & Technology

    1989-01-17

    UNCLASSIFIED. Ada Compiler Validation Summary Report. Compiler Name: Harris Ada, Version 5.0. Certificate Number: 890118W1.10017...United States Department of Defense, Washington DC 20301-3081...Validation period: 17 Jan 1989 to 17 Jan 1990. Harris Corporation, Computer Systems Division, Harris Ada.

  3. NSTX-U Advances in Real-Time C++11 on Linux

    NASA Astrophysics Data System (ADS)

    Erickson, Keith G.

    2015-08-01

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing a single periodic deadline is a failure) of 200 microseconds.

  4. Ada (Trade name) Compiler Validation Summary Report: Rational. Rational Environment (Trademark) A952. Rational Architecture (R1000 (Trade name) Model 200).

    DTIC Science & Technology

    1987-05-06

    Rational. Rational Environment A952. Rational Architecture (R1000 Model 200)...This report describes the validation testing performed on the Rational Environment, A952, using Version 1.8 of the Ada Compiler Validation Capability (ACVC). The Rational Environment is hosted on a Rational Architecture (R1000 Model 200) operating under Rational Environment, Release A952. Programs processed by this

  5. The X-ray system of crystallographic programs for any computer having a PIDGIN FORTRAN compiler

    NASA Technical Reports Server (NTRS)

    Stewart, J. M.; Kruger, G. J.; Ammon, H. L.; Dickinson, C.; Hall, S. R.

    1972-01-01

    A manual is presented for the use of a library of crystallographic programs. This library, called the X-ray system, is designed to carry out the calculations required to solve the structure of crystals by diffraction techniques. It has been implemented at the University of Maryland on the Univac 1108. It has, however, been developed and run on a variety of machines under various operating systems. It is considered to be an essentially machine independent library of applications programs. The report includes definition of crystallographic computing terms, program descriptions, with some text to show their application to specific crystal problems, detailed card input descriptions, mass storage file structure and some example run streams.

  6. System support software for the Space Ultrareliable Modular Computer (SUMC)

    NASA Technical Reports Server (NTRS)

    Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.

    1974-01-01

    The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is undergoing development. The design facilitates the addition of new processors with a minimum effort and provides the user quasi host independence on the ground based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications in the initial processors.

  7. A Summary of NASA Architecture Studies Utilizing Fission Surface Power Technology

    NASA Technical Reports Server (NTRS)

    Mason, Lee; Poston, Dave

    2010-01-01

    Beginning with the Exploration Systems Architecture Study in 2005, NASA has conducted various mission architecture studies to evaluate implementation options for the U.S. Space Policy (formerly the Vision for Space Exploration). Several of the studies examined the use of Fission Surface Power (FSP) systems for human missions to the lunar and Martian surface. This paper summarizes the FSP concepts developed under four different NASA-sponsored architecture studies: Lunar Architecture Team, Mars Architecture Team, Lunar Surface Systems/Constellation Architecture team, and International Architecture Working Group-Power Function team. The results include a summary of FSP design characteristics, a compilation of mission-compatible FSP configuration options, and an FSP concept-of-operations that is consistent with the overall mission objectives.

  8. Ada Compiler Validation Summary Report: Certificate Number: 940305W1. 11335 TLD Systems, Ltd. TLD Comanche VAX/i960 Ada Compiler System, Version 4.1.1 VAX Cluster under VMS 5.5 = Tronix JIAWG Execution Vehicle (i960MX) under TLD Real Time Executive, Version 4.1.1

    DTIC Science & Technology

    1994-03-14

    Comanche VAX/i960 Ada Compiler System, Version 4.1.1. Host Computer System: Digital Local Area Network VAX Cluster executing on (2) MicroVAX 3100 Model 90...MACRO PARAMETERS: $MAX_DIGITS 15, $MAX_INT 2147483647, $MAX_INT_PLUS_1 2147483648, $MIN_INT -2147483648, $NAME NO_SUCH_INTEGER_TYPE...Nested generics are supported and generics defined in library units are permitted. It is not possible to perform a macro instantiation for a generic

  9. The New Southern FIA Data Compilation System

    Treesearch

    V. Clark Baldwin; Larry Royer

    2001-01-01

    In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on data-base design and not on data processing and calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...

  10. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  12. Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages

    NASA Astrophysics Data System (ADS)

    Williams, Kevin; McCandless, Jason; Gregg, David

    In this paper, we present a portable approach to JIT compilation for dynamically typed scripting languages. At runtime we generate ANSI C code and use the system's native C compiler to compile this code. The C compiler runs on a separate thread to the interpreter allowing program execution to continue during JIT compilation. Dynamic languages have variables which may change type at any point in execution. Our interpreter profiles variable types at both whole method and partial method granularity. When a frequently executed region of code is discovered, the compilation thread generates a specialized version of the region based on the profiled types. In this paper, we evaluate the level of instruction specialization achieved by our profiling scheme as well as the overall performance of our JIT.

  13. Low-Temperature Hydrothermal Resource Potential

    DOE Data Explorer

    Katherine Young

    2016-06-30

    Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.

  14. C to VHDL compiler

    NASA Astrophysics Data System (ADS)

    Berdychowski, Piotr P.; Zabolotny, Wojciech M.

    2010-09-01

    The main goal of the C to VHDL compiler project is to make the FPGA platform more accessible to scientists and software developers. The FPGA platform offers the unique ability to configure hardware to implement virtually any dedicated architecture, and modern devices provide a sufficient number of hardware resources to implement parallel execution platforms with complex processing units. All this makes the FPGA platform very attractive to those looking for an efficient heterogeneous computing environment. The current industry standard for developing digital systems on the FPGA platform is based on HDLs. Although very effective and expressive in the hands of hardware development specialists, these languages require specific knowledge and experience that are out of reach for most scientists and software programmers. The C to VHDL compiler project attempts to remedy this by creating an application that derives an initial VHDL description of a digital system (for further compilation and synthesis) from a purely algorithmic description in the C programming language. The idea itself is not new, and the C to VHDL compiler combines the best approaches from solutions developed over many previous years with the introduction of some unique improvements.

  15. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  16. RAJA Performance Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, Richard D.; Jones, Holger E.

    The RAJA Performance Suite is designed to evaluate the performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and the various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations, and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. The actual computer instructions executed, and how they run in parallel, differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL/Sierra machine (initial LLNL delivery expected in late 2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.

  17. Automated Diagnosis and Control of Complex Systems

    NASA Technical Reports Server (NTRS)

    Kurien, James; Plaunt, Christian; Cannon, Howard; Shirley, Mark; Taylor, Will; Nayak, P.; Hudson, Benoit; Bachmann, Andrew; Brownston, Lee; Hayden, Sandra; hide

    2007-01-01

    Livingstone2 is a reusable, artificial intelligence (AI) software system designed to assist spacecraft, life support systems, chemical plants, or other complex systems by operating with minimal human supervision, even in the face of hardware failures or unexpected events. The software diagnoses the current state of the spacecraft or other system, and recommends commands or repair actions that will allow the system to continue operation. Livingstone2 is an enhancement of the Livingstone diagnosis system that was flight-tested onboard the Deep Space One spacecraft in 1999. This version tracks multiple diagnostic hypotheses, rather than just a single hypothesis as in the previous version, and it is able to revise diagnostic decisions made in the past when additional observations become available. Without these capabilities, Livingstone could arrive at an incorrect hypothesis and be unable to recover from it. Re-architecting and re-implementing the system in C++ has increased performance. Usability has been improved by creating a set of development tools that is closely integrated with the Livingstone2 engine. In addition to the core diagnosis engine, Livingstone2 includes a compiler that translates diagnostic models written in a Java-like language into Livingstone2's language, and a broad set of graphical tools for model development.

  18. Compiling mortality statistics from civil registration systems in Viet Nam: the long road ahead.

    PubMed

    Rao, Chalapati; Osterberger, Brigitta; Anh, Tran Dam; MacDonald, Malcolm; Chúc, Nguyen Thi Kim; Hill, Peter S

    2010-01-01

    Accurate mortality statistics, needed for population health assessment, health policy and research, are best derived from data in vital registration systems. However, mortality statistics from vital registration systems are not available for several countries including Viet Nam. We used a mixed methods case study approach to assess vital registration operations in 2006 in three provinces in Viet Nam (Hòa Bình, Thùa Thiên-Hué and Bình Duong), and provide recommendations to strengthen vital registration systems in the country. For each province we developed life tables from population and mortality data compiled by sex and age group. Demographic methods were used to estimate completeness of death registration as an indicator of vital registration performance. Qualitative methods (document review, key informant interviews and focus group discussions) were used to assess administrative, technical and societal aspects of vital registration systems. Completeness of death registration was low in all three provinces. Problems were identified with the legal framework for registration of early neonatal deaths and deaths of temporary residents or migrants. The system does not conform to international standards for reporting cause of death or for recording detailed statistics by age, sex and cause of death. Capacity-building along with an intersectoral coordination committee involving the Ministries of Justice and Health and the General Statistics Office would improve the vital registration system, especially with regard to procedures for death registration. There appears to be strong political support for sentinel surveillance systems to generate reliable mortality statistics in Viet Nam.

  19. An Open Source modular platform for hydrological model implementation

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for the setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters etc. ENKI is designed to meet three different levels of involvement in model construction:
    • Model application: Running and evaluating a given model. Regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation. Uncertainty analysis directed towards input or parameter uncertainty.
      o Need not: Know the model's composition of subroutines, the internal variables in the model, or the creation of method modules.
    • Model analysis: Linking together different process methods, including parallel setup of alternative methods for solving the same task. Investigating the effect of different spatial discretization schemes.
      o Need not: Write or compile computer code, or handle file IO for each module.
    • Routine implementation and testing: Implementation of new process-simulating methods/equations, specialised objective functions or quality control routines, and testing of these in an existing framework.
      o Need not: Implement a user or model interface for the new routine, IO handling, administration of model setup and runs, calibration and validation routines etc.
    From being developed for Norway's largest hydropower producer, Statkraft, ENKI is now being turned into an Open Source project. At the time of writing, the licence and the project administration are not established. It also remains to port the application to other compilers and computer platforms. However, we hope that ENKI will prove useful for both academic and operational users.

  20. The Mystro system: A comprehensive translator toolkit

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Noonan, R. E.

    1985-01-01

    Mystro is a system that facilitates the construction of compilers, assemblers, code generators, query interpreters, and similar programs. It provides features that encourage the use of iterative enhancement. Mystro was developed in response to the needs of NASA Langley Research Center (LaRC) and enjoys a number of advantages over similar systems. Other available programs can be used in building translators; these typically build parser tables, usually supply the source of a parser and parts of a lexical analyzer, but provide little or no aid for code generation. In general, only the front end of the compiler is addressed. Mystro, on the other hand, emphasizes tools for both ends of a compiler.

  1. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, Louis J.

    1989-01-01

    The NASA Propagation Effects Handbook for Satellite Systems Design provides a systematic compilation of the major propagation effects experienced on space-Earth paths in the 10 to 100 GHz frequency band. It provides both a detailed description of each propagation phenomenon and a summary of the effect's impact on communications system design and performance. Chapters 2 through 5 describe the propagation effects, prediction models, and available experimental data bases. In Chapter 6, design techniques and prediction methods available for evaluating propagation effects on space-Earth communication systems are presented. Chapter 7 addresses the system design process, how propagation effects should be considered in system design and performance, and how they can be mitigated. Examples of operational and planned Ku-, Ka-, and EHF-band satellite communications systems are given.

  2. A Roadmap to Continuous Integration for ATLAS Software Development

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure supports the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing requests for new package versions, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access, and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, verifying patches to existing software, and migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and the means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.

  3. Importance of stability study of continuous systems for ethanol production.

    PubMed

    Paz Astudillo, Isabel Cristina; Cardona Alzate, Carlos Ariel

    2011-01-10

    The fuel ethanol industry faces several problems during bioreactor operation. One of them is unexpected variation in the output ethanol concentration from the bioreactor, or a drastic fall in productivity. In this paper, concepts and relevant results from several experimental and theoretical studies of the dynamic behavior of fermentation systems for bioethanol production with Saccharomyces cerevisiae and Zymomonas mobilis are compiled, with the purpose of understanding the stability phenomena that can affect the productivity of fuel ethanol plants. It is shown that the design of large-scale biochemical processes for fuel ethanol production must be based on stability studies. © 2010 Elsevier B.V. All rights reserved.

  4. Solid state technology: A compilation. [on semiconductor devices

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation, covering selected solid state devices developed and integrated into systems by NASA to improve performance, is presented. Data are also given on device shielding in hostile radiation environments.

  5. The relational database model and multiple multicenter clinical trials.

    PubMed

    Blumenstein, B A

    1989-12-01

    The Southwest Oncology Group (SWOG) chose to use a relational database management system (RDBMS) for the management of data from multiple clinical trials because of the underlying relational model's inherent flexibility and the natural way multiple entity types (patients, studies, and participants) can be accommodated. The tradeoffs of using the relational model, as compared to the hierarchical model, include added computing cycles due to deferred data linkages and added procedural complexity due to the need to implement protections against referential integrity violations. The SWOG uses its RDBMS as a platform on which to build data operations software. This data operations software, which is written in a compiled computer language, allows multiple users to update the database simultaneously and is interactive with respect to detecting conditions that require action and presenting options for dealing with them. The relational model facilitates the development and maintenance of data operations software.
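
    The referential-integrity protections mentioned above can be illustrated with a small, hypothetical sketch. The table and column names are invented, and SQLite (with its foreign-key pragma) stands in for SWOG's actual RDBMS:

```python
import sqlite3

# Hypothetical two-table layout loosely inspired by the record's entity types
# (studies and patients); names are illustrative, not SWOG's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("CREATE TABLE study (study_id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        study_id   INTEGER NOT NULL REFERENCES study(study_id)
    )
""")
conn.execute("INSERT INTO study VALUES (1, 'Trial A')")
conn.execute("INSERT INTO patient VALUES (100, 1)")  # valid linkage

try:
    conn.execute("INSERT INTO patient VALUES (101, 99)")  # no such study
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

    Without such enforcement (the relational model's "added procedural complexity"), the orphaned patient row would be accepted silently.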

  6. Low-Temperature Hydrothermal Resource Potential Estimate

    DOE Data Explorer

    Katherine Young

    2016-06-30

    Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.

  7. Writing and compiling code into biochemistry.

    PubMed

    Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab

    2010-01-01

    This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis first is performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
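
    The phase-by-phase transfer of quantities between protein types can be caricatured in a few lines of ordinary code. This is a conceptual sketch only: counts are deterministic integers, each clock phase fires one reaction to exhaustion, and the reaction set (computing z = 2x + y) is invented for illustration, not taken from the paper's libraries:

```python
# Conceptual sketch of the "abstract biochemical reaction" level: quantities
# of protein types are integer counts, and one reaction fires to exhaustion
# per clock phase. The reactions below compute z = 2*x + y.

def run_phase(state, reactants, products):
    """Fire the reaction (reactants -> products) until a reactant is exhausted."""
    while all(state[r] > 0 for r in reactants):
        for r in reactants:
            state[r] -= 1
        for p in products:
            state[p] += 1
    return state

state = {"x": 3, "y": 4, "z": 0}
run_phase(state, ["x"], ["z", "z"])   # phase 1: each x becomes two z
run_phase(state, ["y"], ["z"])        # phase 2: each y transfers into z
print(state["z"])  # 2*3 + 4 = 10
```

    The clocking mechanism in the paper serves the same role as the explicit phase ordering here: it keeps reactions from interleaving incorrectly.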

  8. Ada Compiler Validation Summary Report: Certificate Number 890627W1. 10103 Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0 Harris H1000

    DTIC Science & Technology

    1989-06-27

    Department of Defense Washington DC 20301-3081 Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0 Certificate Number...890627W1.10103 Host: Harris H1000 under VOS, E.1 Target: Harris H1000 under VOS, E.1 Testing Completed June 27, 1989 using ACVC 1.10 This report has been...Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0, Harris H1000 under VOS, E.1 (Host & Target), Wright-Patterson AFB, ACVC 1.10 DD

  9. Compiler writing system detail design specification. Volume 2: Component specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    The logic modules and data structures composing the Meta-translator module are described. This module is responsible for the actual generation of the executable language compiler as a function of the input Meta-language. Machine definitions are also processed and are placed as encoded data on the compiler library data file. The transformation of the intermediate language into target-language object text is described.

  10. Determination of eigenvalues of dynamical systems by symbolic computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1982-01-01

    A symbolic computation technique for determining the eigenvalues of dynamical systems is described, wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state-space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC-8 aircraft to elevator inputs. This simplified system has two dominant modes, one lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
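
    As a numeric stand-in for the symbolic workflow described above (MACSYMA would return the loci in closed form), the eigenvalues of a parameterized system matrix can be traced as the parameter varies. The 2x2 matrix A(k) below is an invented toy system, not the DC-8 model:

```python
import cmath

# Numeric stand-in for the symbolic eigenvalue-locus computation: trace the
# eigenvalues of a hypothetical 2x2 system matrix A(k) as parameter k varies.

def eigenvalues_2x2(a, b, c, d):
    """Roots of the characteristic polynomial lam^2 - (a+d)*lam + (a*d - b*c)."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A(k) = [[-k, 1], [-2, -k]]: a lightly damped oscillatory pair whose damping
# grows with k, loosely analogous to sweeping a design parameter.
for k in (0.1, 0.5, 1.0):
    lam1, lam2 = eigenvalues_2x2(-k, 1.0, -2.0, -k)
    print(f"k={k}: {lam1:.3f}, {lam2:.3f}")
```

    Each printed pair is one point on the locus; a designer would pick the k whose eigenvalues meet the damping requirement.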

  11. A Literature Review and Compilation of Nuclear Waste Management System Attributes for Use in Multi-Objective System Evaluations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinina, Elena Arkadievna; Samsa, Michael

    The purpose of this work was to compile a comprehensive initial set of potential nuclear waste management system attributes. This initial set of attributes is intended to serve as a starting point for additional consideration by system analysts and planners to facilitate the development of a waste management system multi-objective evaluation framework based on the principles and methodology of multi-attribute utility analysis. The compilation is primarily based on a review of reports issued by the Canadian Nuclear Waste Management Organization (NWMO) and the Blue Ribbon Commission on America's Nuclear Future (BRC), but also on an extensive review of the available literature for similar and past efforts. Numerous system attributes found in different sources were combined into a single objectives-oriented hierarchical structure. This study provides a discussion of the data sources and descriptions of the hierarchical structure. A particular focus of this study was on collecting and compiling inputs from past studies that involved the participation of various external stakeholders. However, while the important role of stakeholder input in a country's waste management decision process is recognized in the referenced sources, there are only a limited number of in-depth studies of the stakeholders' differing perspectives. Compiling a comprehensive hierarchical listing of attributes is a complex task since stakeholders have multiple and often conflicting interests. The BRC worked for two years (January 2010 to January 2012) to "ensure it has heard from as many points of view as possible." The Canadian NWMO study took four years and ample resources, involving national and regional stakeholder dialogs, internet-based dialogs, information and discussion sessions, open houses, workshops, round tables, public attitude research, a website, and topic reports. The current compilation effort benefited from the distillation of these many varied inputs conducted by the previous studies.

  12. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  13. Extending R packages to support 64-bit compiled code: An illustration with spam64 and GIMMS NDVI3g data

    NASA Astrophysics Data System (ADS)

    Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard

    2017-07-01

    Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. On the R side, users do not need to change existing code and may not even notice the extension; interfacing 64-bit compiled code efficiently, on the other hand, is challenging. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
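
    One generic strategy for the kind of interface gap described above is to split each 64-bit value into two 32-bit words at the boundary and reassemble them on the other side. The sketch below illustrates that general idea only; it is not the mechanism the spam64 package actually uses:

```python
# Illustrative sketch: moving a 64-bit value across an interface that only
# handles 32-bit integers by splitting it into high and low words.

MASK32 = (1 << 32) - 1

def split64(n):
    """Split a non-negative 64-bit integer into (high, low) 32-bit words."""
    return (n >> 32) & MASK32, n & MASK32

def join64(high, low):
    """Reassemble the original 64-bit integer from its two words."""
    return (high << 32) | low

n = 5_000_000_000          # exceeds the 32-bit range
hi, lo = split64(n)
print(hi, lo, join64(hi, lo) == n)
```

    The cost of such round-trips at every call is one reason efficient 64-bit interfacing is nontrivial.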

  14. Why do airlines want and use thrust reversers? A compilation of airline industry responses to a survey regarding the use of thrust reversers on commercial transport airplanes

    NASA Technical Reports Server (NTRS)

    Yetter, Jeffrey A.

    1995-01-01

    Although thrust reversers are used for only a fraction of the airplane operating time, their impact on nacelle design, weight, airplane cruise performance, and overall airplane operating and maintenance expenses is significant. Why then do the airlines want and use thrust reversers? In an effort to understand the airlines' need for thrust reversers, a survey of the airline industry was conducted to determine why and under what circumstances thrust reversers are currently used or thought to be needed. The survey was intended to help establish the cost/benefit trades for the use of thrust reversers and to gauge airline opinion regarding alternative deceleration devices. A compilation and summary of the responses given to the survey questionnaire is presented.

  15. Valves and other mechanical components and equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The articles in this Compilation will be of interest to mechanical engineers, users and designers of machinery, and to those engineers and manufacturers specializing in fluid handling systems. Section 1 describes a number of valves and valve systems. Section 2 contains articles on machinery and mechanical devices that may have applications in a number of different areas.

  16. Obtaining correct compile results by absorbing mismatches between data types representations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token in the form of source code written in the first language.

  17. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-03-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  18. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-11-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
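
    The conversion step in these records can be sketched in a few lines: map each node through a type-conversion table, and turn any node whose type has no entry into a special error node that stores the offending token so unparsing can emit the original source text. All names and the table contents below are invented for illustration:

```python
# Sketch of AST conversion with an error-processing node, as described above.
# Types and the conversion table are hypothetical examples.

CONVERSION_TABLE = {"int32": "Integer", "float64": "Double"}

def convert(node):
    """Map a first-language AST node into the second language's AST."""
    kind = node["type"]
    if kind in CONVERSION_TABLE:
        return {"type": CONVERSION_TABLE[kind],
                "children": [convert(c) for c in node.get("children", [])]}
    # No table entry: store the original token in a special error node.
    return {"type": "ErrorNode", "token": node.get("token", kind)}

def unparse(node):
    """Emit second-language text; error nodes emit the stored first-language token."""
    if node["type"] == "ErrorNode":
        return node["token"]
    inner = " ".join(unparse(c) for c in node["children"])
    return f"{node['type']}({inner})" if inner else node["type"]

tree = {"type": "int32",
        "children": [{"type": "decimal128", "token": "x: decimal128"}]}
print(unparse(convert(tree)))  # Integer(x: decimal128)
```

    Storing the token (rather than failing outright) is what lets the unparser round-trip the unconvertible fragment back to first-language source.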

  19. The Ada (Trade Name) Compiler Validation Capability Implementers’ Guide. Version 1,

    DTIC Science & Technology

    1986-12-01

    f ADA20.ISI.EDU using the same format as for comments on the RM: !section x.y.z(pp) Commenter's Name YY-MM-DD !version v !topic brief description of... Operations (Integer) .............. 4-63 4.5.2.e Relational and Membership Operations (Fixed/Float) ........... 4-64 4.5.2.f Relational and Membership... Absolute Value Operator (...) .................. 4 4.5.6 Scalar Negation Operator 4.9 4.5.6.f Array Negation Operator

  20. Modeling a maintenance simulation of the geosynchronous platform

    NASA Technical Reports Server (NTRS)

    Kleiner, A. F., Jr.

    1980-01-01

    A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, the total number of maintenance trips for each pass, the average capability of the system, etc.
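
    The pass structure described above can be sketched as a small discrete-event loop. The failure distribution, mean time to fail, and mission lifetime below are invented placeholders, not the study's actual model:

```python
import random
import statistics

# Discrete-event sketch of the two-event (failure / maintenance) pass
# structure: each pass is one mission; statistics are compiled per pass.

def run_pass(rng, lifetime=100.0, mean_time_to_fail=30.0):
    """One mission pass: count maintenance trips and note the time of the first."""
    t, trips, first = 0.0, 0, None
    while True:
        t += rng.expovariate(1.0 / mean_time_to_fail)  # next failure event
        if t >= lifetime:
            return trips, first
        trips += 1                                     # maintenance event
        if first is None:
            first = t                                  # time to first maintenance

rng = random.Random(42)                 # fixed seed: repeatable runs
results = [run_pass(rng) for _ in range(1000)]
avg_trips = statistics.mean(r[0] for r in results)
print(f"average maintenance trips per pass: {avg_trips:.2f}")
```

    With a mean time to fail of 30 and a lifetime of 100, the average trip count per pass lands near 100/30.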

  1. Sanitary Engineering Unit Operations and Unit Processes Laboratory Manual.

    ERIC Educational Resources Information Center

    American Association of Professors in Sanitary Engineering.

    This manual contains a compilation of experiments in Physical Operations, Biological and Chemical Processes for various education and equipment levels. The experiments are designed to be flexible so that they can be adapted to fit the needs of a particular program. The main emphasis is on hands-on student experiences to promote understanding.…

  2. Integrating Emerging Data Sources into Operational Practice: Capabilities and Limitations of Devices to Collect, Compile, Save, and Share Messages from CAVs and Connected Travelers

    DOT National Transportation Integrated Search

    2018-03-01

    Connected and automated vehicles (CAVs) and connected travelers will be providing substantially increased levels of data which will be available for agencies to consider using to improve the management and operation of the surface transportation syst...

  3. International data collection and analysis. Task 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-04-01

    Commercial nuclear power has grown to the point where 13 nations now operate commercial nuclear power plants. Another four countries should join this list before the end of 1980. In the Nonproliferation Alternative Systems Assessment Program (NASAP), the US DOE is evaluating a series of alternate possible power systems. The objective is to determine practical nuclear systems which could reduce proliferation risk while still maintaining the benefits of nuclear power. Part of that effort is the development of a data base denoting the energy needs, resources, technical capabilities, commitment to nuclear power, and projected future trends for various non-US countries. The data are presented by country for each of 28 non-US countries. This volume contains compiled data on Mexico, Netherlands, Pakistan, Philippines, South Africa, South Korea, and Spain.

  4. Idaho National Laboratory Emergency Readiness Assurance Plan - Fiscal Year 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, Carl J.

    Department of Energy Order 151.1C, Comprehensive Emergency Management System, requires that each Department of Energy field element document readiness assurance activities addressing emergency response planning and preparedness. Battelle Energy Alliance, LLC, as prime contractor at the Idaho National Laboratory (INL), has compiled this Emergency Readiness Assurance Plan to provide this assurance to the Department of Energy Idaho Operations Office. Stated emergency capabilities at the INL are sufficient to implement emergency plans. Summary tables augment descriptive paragraphs to provide easy access to data. Additionally, the plan furnishes budgeting, personnel, and planning forecasts for the next 5 years.

  5. Identification and evaluation of educational uses and users for the STS. Educational planning for utilization of space shuttle ED-PLUSS

    NASA Technical Reports Server (NTRS)

    Engle, H. A.; Christensen, D. L.

    1974-01-01

    A planning and feasibility study to identify and document a methodology needed to incorporate educational programs into future missions and operations of the space transportation system was conducted. Six tasks were identified and accomplished during the study. The task statements are as follows: (1) potential user identification, (2) a review of space education programs, (3) development of methodology for user involvement, (4) methods to encourage user awareness, (5) compilation of follow-on ideas, and (6) response to NASA questions. Specific recommendations for improving the educational coverage of space activities are provided.

  6. Oceanography from satellites

    NASA Technical Reports Server (NTRS)

    Wilson, W. S.

    1981-01-01

    It is pointed out that oceanographers have benefited from the space program mainly through the increased efficiency it has brought to ship operations. For example, the Transit navigation system has enabled oceanographers to compile detailed maps of sea-floor properties and to more accurately locate moored subsurface instrumentation. General descriptions are given of instruments used in satellite observations (altimeter, color scanner, infrared radiometer, microwave radiometer, scatterometer, synthetic aperture radar). It is pointed out that because of the large volume of data that satellite instruments generate, the development of algorithms for converting the data into a form expressed in geophysical units has become especially important.

  7. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  8. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  9. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  10. Multitasking kernel for the C and Fortran programming languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, E.D. III

    1984-09-01

    A multitasking kernel for the C and Fortran programming languages, running on the Unix operating system, is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient, portable environment for the coding, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile-time and run-time options in the kernel.
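
    The scheduling idea at the heart of such a kernel can be caricatured with cooperative tasks. The sketch below uses Python generators as stand-ins for tasks, purely to illustrate round-robin multitasking; it says nothing about the actual C/Fortran implementation:

```python
from collections import deque

# Toy cooperative multitasking kernel: a round-robin scheduler resumes each
# task in turn until every task has finished.

def kernel(tasks):
    """Run all tasks round-robin; return the interleaved execution trace."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run the task until it yields
            ready.append(task)         # still runnable: requeue it
        except StopIteration:
            pass                       # task finished: drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"           # each yield is one scheduling quantum

print(kernel([worker("A", 2), worker("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

    A real preemptive kernel interleaves at arbitrary points rather than at explicit yields, but the queue-and-requeue structure is the same.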

  11. Program Operating Procedures for the Integrated Command ASW Prediction System (ICAPS). Volume 1, Revision A.

    DTIC Science & Technology

    1981-06-01

    ...SELECT TYPE SONARS TO BE INCLUDED IN THE RANGE PREDICTION: sonar types are specified individually, and the prompt repeats. ...This is a complete revision, Revision A. Symbols are not used in this revision to identify changes with respect to the previous issue, due to the extensiveness of the changes. ...maintenance such as source editing, compiling, and debugging. In addition, it provides the user with a simple and uniform interface for transferring files of

  12. NSTX-U Advances in Real-Time C++11 on Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Keith G.

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, locks, and atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.

  13. NSTX-U Advances in Real-Time C++11 on Linux

    DOE PAGES

    Erickson, Keith G.

    2015-08-14

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, locks, and atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.

  14. 50 CFR 218.241 - Adaptive management.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Low Frequency Active (SURTASS LFA) Sonar § 218.241 Adaptive management. NMFS may modify (including...) Results from the Navy's monitoring from the previous year's operation of SURTASS LFA sonar. (b) Compiled...

  15. 50 CFR 218.241 - Adaptive management.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Low Frequency Active (SURTASS LFA) Sonar § 218.241 Adaptive management. NMFS may modify (including...) Results from the Navy's monitoring from the previous year's operation of SURTASS LFA sonar. (b) Compiled...

  16. 50 CFR 218.241 - Adaptive management.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Low Frequency Active (SURTASS LFA) Sonar § 218.241 Adaptive management. NMFS may modify (including...) Results from the Navy's monitoring from the previous year's operation of SURTASS LFA sonar. (b) Compiled...

  17. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Kandemir, Mahmut

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O-intensive applications. It made significant progress toward understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and toward designing and implementing state-of-the-art compiler and runtime-system technology targeting I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and points out promising future directions.

  18. The Sandia Secure Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickstrom, Gregory Lloyd; Gale, Jason Carl; Ma, Kwok Kee

    The Sandia Secure Processor (SSP) is a new native Java processor that has been specifically designed for embedded applications. The SSP's design is a system composed of a core Java processor that directly executes Java bytecodes, on-chip intelligent IO modules, and a suite of software tools for simulation and for compiling executable binary files. The SSP is unique in that it provides a way to control real-time IO modules for embedded applications. The system software for the SSP is a 'class loader' that takes Java .class files (created with your favorite Java compiler), links them together, and compiles a binary. The complete SSP system provides very powerful functionality with very light hardware requirements, with the potential to be used in a wide variety of small-system embedded applications. This paper gives a detailed description of the Sandia Secure Processor and its unique features.

  19. 20 CFR 637.230 - Use of incentive bonuses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in paragraph (d) of this section, technical assistance, data and information collection and compilation, management information systems, post-program followup activities, and research and evaluation... information collection and compilation, recordkeeping, or the preparation of applications for incentive...

  20. Harmonised information exchange between decentralised food composition database systems.

    PubMed

    Pakkala, H; Christensen, T; de Victoria, I Martínez; Presser, K; Kadvan, A

    2010-11-01

The main aim of the European Food Information Resource (EuroFIR) project is to develop and disseminate a comprehensive, coherent and validated data bank for the distribution of food composition data (FCD). This can only be accomplished by harmonising food description and data documentation and by the use of standardised thesauri. The data bank is implemented through a network of local FCD stores (usually national) under the control and responsibility of the local (national) EuroFIR partner. The implementation of the system based on the EuroFIR specifications is under development. Data interchange happens through the EuroFIR Web Services interface, allowing the partners to implement their systems using methods and software suitable for the local computing environment. The implementation uses common international standards, such as the Simple Object Access Protocol, the Web Service Description Language and the Extensible Markup Language (XML). A specifically constructed EuroFIR search facility (eSearch) was designed for end users. The EuroFIR eSearch facility compiles queries using a specifically designed Food Data Query Language and sends a request to those network nodes linked to the EuroFIR Web Services that are most likely to have the requested information. The retrieved FCD are compiled into a specifically designed data interchange format (the EuroFIR Food Data Transport Package) in XML, which is sent back to the EuroFIR eSearch facility as the query response. The same request-response operation happens in all the nodes that have been selected in the EuroFIR eSearch facility for a given task. Finally, the FCD are combined by the EuroFIR eSearch facility and delivered to the food compiler. The implementation of FCD interchange using decentralised computer systems instead of traditional data-centre models has several advantages. First, the local partners have more control over their FCD, which will increase commitment and improve quality. Second, a multicentred solution is more economically viable than the creation of a centralised data bank, because of the lack of national political support for multinational systems.

  1. Ada Compiler Validation Summary Report. Certificate Number: 921004W1. 11281, Verdix Corporation VADS System V/386/486 VAda-110-3232, Version 6.1, AST Premium 486 under UNIX System V, Release 4.0.

    DTIC Science & Technology

    1992-11-18

AVF Control Number: AVF-VSR-542-1092 Date VSR Complete: 18 November 1992 92-06-23-VRX Ada COMPILER...System: AST Premium 486 under UNIX System V, Release 4.0 Customer Agreement Number: 92-06-23-VRX See section 3.1 for any additional information about

  2. Ada Compiler Validation Summary Report. Certificate Number: 900726W1. 11017, Verdix Corporation VADS IBM RISC System/6000, AIX 3.1, VAda-110-7171, Version 6.0 IBM RISC System/6000 Model 530 = IBM RISC System/6000 Model 530

    DTIC Science & Technology

    1991-01-22

Customer Agreement Number: 90-05-29-VRX See Section 3.1 for any additional information about the testing environment. As a result of this validation...22 January 1991 90-05-29-VRX Ada COMPILER VALIDATION SUMMARY REPORT: Certificate Number: 900726W1.11017 Verdix Corporation VADS IBM RISC System/6000

  3. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  4. Developing and integrating an adverse drug reaction reporting system with the hospital information system.

    PubMed

    Kataoka, Satoshi; Ohe, Kazuhiko; Mochizuki, Mayumi; Ueda, Shiro

    2002-01-01

We have developed an adverse drug reaction (ADR) reporting system and integrated it with the Hospital Information System (HIS) of the University of Tokyo Hospital. Since the system is written in Java, it is portable without recompilation to any operating system on which a Java virtual machine runs. In this system, we implemented an automatic data-filling function using XML (Extensible Markup Language) files generated by the HIS. This new feature should decrease the time physicians and pharmacists need to fill in spontaneous ADR reports. By clicking a button, the report is sent to the text database through Simple Mail Transfer Protocol (SMTP) electronic mail. The destination of the report mail can be changed arbitrarily by administrators, which gives the system more flexibility in practical operation. Although we tried our best to use the SGML-based (Standard Generalized Markup Language) ICH M2 guideline to follow the global standard for case reports, we eventually adopted XML as the output report format, because we found problems in handling two-byte characters with the ICH guideline and XML offers many useful features. According to our pilot survey conducted at the University of Tokyo Hospital, many physicians answered that our idea of integrating an ADR reporting system into the HIS would increase the number of ADR reports.
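A minimal sketch of how such an XML report might be assembled and wrapped in an SMTP message. The element names, subject line, and recipient address below are hypothetical illustrations, not the actual ICH/HIS schema used by the system:

```python
import xml.etree.ElementTree as ET
from email.mime.text import MIMEText

def build_adr_report(patient_id, drug, reaction):
    # Hypothetical element names; the real system follows an
    # ICH-M2-derived XML layout not reproduced here.
    root = ET.Element("adr_report")
    ET.SubElement(root, "patient").text = patient_id
    ET.SubElement(root, "drug").text = drug
    ET.SubElement(root, "reaction").text = reaction
    return ET.tostring(root, encoding="unicode")

def report_as_mail(xml_body, recipient):
    # Wrap the XML report in a mail message; the recipient address is
    # configurable, mirroring the system's flexible routing.
    msg = MIMEText(xml_body, "plain", "utf-8")
    msg["Subject"] = "ADR report"
    msg["To"] = recipient
    return msg

# Actual delivery would use smtplib.SMTP(host).send_message(msg).
```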

  5. Designing Efficient and Effective, Operationally Relevant, High Altitude Training Profiles

    DTIC Science & Technology

    2001-06-01

Operational Medical Issues in Hypo- and Hyperbaric Conditions [les Questions médicales à caractère opérationnel liées aux conditions hypobares ou... hyperbares] To order the complete compilation report, use: ADA395680 The component part is provided here to allow users access to individually authored...Airforce was felt to meet this need and was recommended. Paper presented at the RTO HFM Symposium on "Operational Medical Issues in Hypo- and Hyperbaric

  6. Effective energy data management for low-carbon growth planning: An analytical framework for assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bo; Evans, Meredydd; Yu, Sha

Readily available and reliable energy data is fundamental to effective analysis and policymaking for the energy sector. Energy statistics of high quality, systematically compiled and effectively disseminated, not only support governments in ensuring national security and evaluating energy policies, but also guide investment decisions in both the private and public sectors. Because of energy's close link to greenhouse gas emissions, energy data has a particularly important role in assessing emissions and strategies to reduce them. In this study, energy data management in four countries – Canada, Germany, the United Kingdom and the United States – is examined from both organizational and operational perspectives. With insights from these best practices, we present a framework for the evaluation of national energy data management systems. It can be used by national statistics compilers to assess their chosen model and to identify areas for improvement. We then use India as a test case for this framework; its government is working to enhance India's energy data management to improve sustainable growth planning.

  7. Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution

    NASA Astrophysics Data System (ADS)

    Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin

    2018-04-01

The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially if the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. AGT is implemented with an MVC architecture using open-source software: the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. AGT has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
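The compile-then-run-then-compare cycle at the heart of such a tool can be sketched as follows. The function name and verdict format are illustrative, not AGT's actual code, and the sketch assumes a `gcc`-compatible compiler is on the path:

```python
import os
import subprocess
import tempfile

def grade_submission(source_path, expected_output, stdin_data="", compiler="gcc"):
    """Compile a C submission and compare its output with the expected
    answer -- a sketch of one grading pass, not the AGT implementation."""
    exe = os.path.join(tempfile.mkdtemp(), "a.out")
    build = subprocess.run([compiler, source_path, "-o", exe],
                           capture_output=True, text=True)
    if build.returncode != 0:
        return {"status": "compile_error", "detail": build.stderr}
    run = subprocess.run([exe], input=stdin_data, capture_output=True,
                         text=True, timeout=5)
    if run.stdout.strip() == expected_output.strip():
        return {"status": "pass", "detail": run.stdout}
    return {"status": "wrong_answer", "detail": run.stdout}
```

A real deployment would add sandboxing, resource limits, and per-test-case scoring on top of this loop.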

  8. User’s Manual for the National Water Information System of the U.S. Geological Survey: Aggregate Water-Use Data System, Version 3.2

    USGS Publications Warehouse

    Nawyn, John P.; Sargent, B. Pierre; Hoopes, Barbara; Augenstein, Todd; Rowland, Kathleen M.; Barber, Nancy L.

    2017-10-06

The Aggregate Water-Use Data System (AWUDS) is the database management system used to enter, store, and analyze state aggregate water-use data. It is part of the U.S. Geological Survey National Water Information System. AWUDS has a graphical user interface that facilitates data entry, revision, review, and approval. This document provides information on the basic functions of AWUDS and the steps for carrying out common tasks that are part of compiling an aggregated dataset. Also included are explanations of terminology and descriptions of user-interface structure, procedures for using the AWUDS operations, and dataset-naming conventions. Information on water-use category definitions, data-collection methods, and data sources is found in the report “Guidelines for preparation of State water-use estimates,” available at https://pubs.er.usgs.gov/publication/ofr20171029.

  9. A Computer Program for Drip Irrigation System Design for Small Plots

    NASA Astrophysics Data System (ADS)

    Philipova, Nina; Nicheva, Olga; Kazandjiev, Valentin; Chilikova-Lubomirova, Mila

    2012-12-01

A computer program has been developed for the design of surface drip irrigation systems. It can be applied to the calculation of small-scale fields with an area of up to 10 ha. The program includes two main parts: crop water requirements and hydraulic calculations of the system. It has been developed with a Graphical User Interface in MATLAB and offers the opportunity to select some parameters from tables, such as agro-physical soil properties, characteristics of the corresponding crop, and climatic data. It also allows the user to assume and set a definite value, for example the emitter discharge or plot parameters. Eight cases of system layout, according to the water-source layout and the number of plots in system operation, are built into the hydraulic section of the program, which includes the design of laterals, manifolds, the main line, and pump calculations. The program has been compiled to work in Windows.

  10. Investing in a Surgical Outcomes Auditing System

    PubMed Central

    Bermudez, Luis; Trost, Kristen; Ayala, Ruben

    2013-01-01

    Background. Humanitarian surgical organizations consider both quantity of patients receiving care and quality of the care provided as a measure of success. However, organizational efficacy is often judged by the percent of resources spent towards direct intervention/surgery, which may discourage investment in an outcomes monitoring system. Operation Smile's established Global Standards of Care mandate minimum patient followup and quality of care. Purpose. To determine whether investment of resources in an outcomes monitoring system is necessary and effectively measures success. Methods. This paper analyzes the quantity and completeness of data collected over the past four years and compares it against changes in personnel and resources assigned to the program. Operation Smile began investing in multiple resources to obtain the missing data necessary to potentially implement a global Surgical Outcomes Auditing System. Existing personnel resources were restructured to focus on postoperative program implementation, data acquisition and compilation, and training materials used to educate local foundation and international employees. Results. An increase in the number of postoperative forms and amount of data being submitted to headquarters occurred. Conclusions. Humanitarian surgical organizations would benefit from investment in a surgical outcomes monitoring system in order to demonstrate success and to ameliorate quality of care. PMID:23401763

  11. 25 CFR 700.273 - Request for notification of existence of records: Action on.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records were compiled in reasonable anticipation of a civil action or proceeding or (ii) the system of.... (2) If the records were compiled in reasonable anticipation of a civil action or proceeding or the...

  12. Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, M. J.

    2017-07-28

Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year, in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs, but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, the version of the compiler, and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
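The byte-to-image step described above can be illustrated with a simplified transform. The fixed width and zero-padding choices here are ours, not necessarily the project's:

```python
import math

def binary_to_image(data, width=16):
    """Interpret raw bytes as rows of 0-255 grayscale pixels,
    zero-padding the final row -- a simplified version of the
    byte-to-image transforms used for malware classification."""
    height = math.ceil(len(data) / width)
    padded = data + bytes(height * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(height)]
```

The resulting 2-D array can be fed to an image-oriented network directly, which is what makes this representation attractive: no disassembly or signature extraction is required.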

  13. Principles and techniques of polarimetric mapping.

    NASA Technical Reports Server (NTRS)

    Halajian, J.; Hallock, H.

    1973-01-01

This paper introduces the concept and potential value of polarimetric maps and the techniques for generating these maps in operational remote sensing. The application-oriented polarimetric signature analyses in the literature are compiled, and several optical models are illustrated to bring out the requirements of a sensor system for polarimetric mapping. By use of the concept of Stokes parameters, the descriptive specification of one sensor system is refined. The descriptive specification for a multichannel digital photometric-polarimetric mapper is based upon our experience with the present single-channel device, which includes the generation of polarimetric maps and pictures. High photometric accuracy and stability, coupled with fast, accurate digital output, have enabled us to overcome the handicap of taking sequential data from the same terrain.
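For reference, the degree of polarization that such a mapper ultimately records follows directly from the Stokes parameters. This is the standard textbook relation, not a formula specific to this paper's instrument:

```python
import math

def degree_of_polarization(i, q, u, v=0.0):
    """DOP = sqrt(Q^2 + U^2 + V^2) / I for a Stokes vector (I, Q, U, V):
    0 for unpolarized light, 1 for fully polarized light."""
    return math.sqrt(q * q + u * u + v * v) / i
```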

  14. The 1975 Ride Quality Symposium

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of papers reported at the 1975 Ride Quality Symposium held in Williamsburg, Virginia, August 11-12, 1975. The symposium, jointly sponsored by NASA and the United States Department of Transportation, was held to provide a forum for determining the current state of the art relative to the technology base of ride quality information applicable to current and proposed transportation systems. Emphasis focused on passenger reactions to ride environment and on implications of these reactions to the design and operation of air, land, and water transportation systems acceptable to the traveling public. Papers are grouped in the following five categories: needs and uses for ride quality technology, vehicle environments and dynamics, investigative approaches and testing procedures, experimental ride quality studies, and ride quality modeling and criteria.

  15. Update on Geothermal Direct-Use Installations in the United States

    DOE Data Explorer

    Beckers, Koenraad F.; Snyder, Diana M.; Young, Katherine R.

    2017-03-02

    An updated database of geothermal direct-use systems in the U.S. has been compiled and analyzed, building upon the Oregon Institute of Technology (OIT) Geo-Heat Center direct-use database. Types of direct-use applications examined include hot springs resorts and pools, aquaculture farms, greenhouses, and district heating systems, among others; power-generating facilities and ground-source heat pumps were excluded. Where possible, the current operation status, open and close dates, well data, and other technical data were obtained for each entry. The database contains 545 installations, of which 407 are open, 108 are closed, and 30 have an unknown status. A report is also included which details and analyzes current geothermal direct-use installations and barriers to further implementation.

  16. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing the performance gains predicted by the model depends primarily on minimizing architecture-specific data-locality overhead.
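A simple model of this kind can be written in Amdahl-like form. The lumped overhead term below is our generic stand-in, since the paper's exact formulation is not reproduced in this abstract:

```python
def predicted_speedup(serial_frac, procs, overhead_frac=0.0):
    """Estimated speedup on `procs` processors for code with serial
    fraction `serial_frac`, plus a lumped parallelization-overhead
    fraction (e.g. data-locality costs on a ccNUMA machine)."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / procs + overhead_frac)
```

The model makes the abstract's conclusion concrete: even a small overhead or serial fraction caps the achievable speedup regardless of processor count.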

  17. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
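The kind of gate-level lowering such a compiler performs can be sketched with a toy rewriting pass. This illustrates the idea only; it is not ProjectQ's actual API, and the native gate set is a hypothetical backend's:

```python
# Gates assumed native to a hypothetical backend.
NATIVE = {"H", "Z", "CNOT"}

# Rewrite rules based on standard identities, e.g. X = H Z H.
RULES = {"X": ["H", "Z", "H"]}

def lower(circuit):
    """Rewrite a list of (gate, qubits) operations into the native set,
    mimicking one decomposition pass of a quantum compiler."""
    out = []
    for gate, qubits in circuit:
        if gate in NATIVE:
            out.append((gate, qubits))
        elif gate in RULES:
            out.extend((g, qubits) for g in RULES[gate])
        else:
            raise ValueError(f"no decomposition rule for {gate}")
    return out
```

A production compiler chains many such passes (decomposition, optimization, mapping to hardware connectivity) before emitting backend instructions.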

  18. Bearing tester data compilation, analysis, and reporting and bearing math modeling

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A test condition data base was developed for the Bearing and Seal Materials Tester (BSMT) program which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.

  19. Compile-Time Schedulability Analysis of Communicating Concurrent Programs

    DTIC Science & Technology

    2006-06-28

synchronize via the read and write operations on the FIFO channels. These operations have been implemented with the help of semaphores, which... 1.1.2 Synchronous Dataflow 1.1.3 Boolean Dataflow...described by concurrent programs 1.3 A synchronous dataflow model, its topology matrix, and repetition vector 1.4 Select and

  20. Handbook of Information Relevant to Manpower Agencies: A Compilation of Practice Principles and Strategies for Manpower Operations.

    ERIC Educational Resources Information Center

    Erfurt, John C.; And Others

    Concepts of internal agency structure and operations, agency-company relations, and agency-enrollee relations, with recommendations for their implementation, form the three main sections of this handbook developed for manpower agency administrators, supervisory staffs and program planners. It is designed to aid those who organize and develop…

  1. Designing Day Care: A Resource Manual for Development of Child Care Services.

    ERIC Educational Resources Information Center

    Jones, Jacquelyn O.

    Compiled to promote the development of high quality, affordable, and accessible day care programs in West Tennessee, this manual helps prospective child caregivers decide which kind of day care to operate and describes start-up steps and program operation. Section 1 focuses on five basic questions of potential caregivers: (1) Which type of child…

  2. Analysis of general aviation single-pilot IFR incident data obtained from the NASA Aviation Safety Reporting System

    NASA Technical Reports Server (NTRS)

    Bergeron, H. P.

    1983-01-01

    An analysis of incident data obtained from the NASA Aviation Safety Reporting System (ASRS) has been made to determine the problem areas in general aviation single-pilot IFR (SPIFR) operations. The Aviation Safety Reporting System data base is a compilation of voluntary reports of incidents from any person who has observed or been involved in an occurrence which was believed to have posed a threat to flight safety. This paper examines only those reported incidents specifically related to general aviation single-pilot IFR operations. The frequency of occurrence of factors related to the incidents was the criterion used to define significant problem areas and, hence, to suggest where research is needed. The data was cataloged into one of five major problem areas: (1) controller judgment and response problems, (2) pilot judgment and response problems, (3) air traffic control (ATC) intrafacility and interfacility conflicts, (4) ATC and pilot communication problems, and (5) IFR-VFR conflicts. In addition, several points common to all or most of the problems were observed and reported. These included human error, communications, procedures and rules, and work load.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizell, D.; Carter, S.

In 1987, ISI's parallel distributed computing research group implemented a prototype sequential simulation system, designed for high-level simulation of candidate Strategic Defense Initiative architectures. A main design goal was to produce a simulation system that could incorporate non-trivial, executable representations of battle-management computations on each platform, capable of controlling the actions of that platform throughout the simulation. The term BMA (battle manager abstraction) was used to refer to these simulated battle-management computations. In the authors' first version of the simulator, the BMAs were C++ programs that they wrote and manually inserted into the system. Since then, they have designed and implemented KMAC, a high-level language for writing BMAs. The KMAC preprocessor, built using the Unix tools lex and yacc, translates KMAC source programs into C++ programs and passes them on to the C++ compiler. The KMAC preprocessor was incorporated into, and operates under the control of, the simulator's interactive user interface. After the KMAC preprocessor has translated a program into C++, the user interface system invokes the C++ compiler and incorporates the resulting object code into the simulator load module for execution as part of a simulation run. This report describes the KMAC language and its preprocessor. Section 2 provides background material on the design of the simulation system that is necessary for understanding some parts of KMAC and some of the reasons it is structured the way it is. Section 3 describes the syntax and semantics of the language, and Section 4 discusses the design of the preprocessor.

  4. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating, given that the probability that any one component is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
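The core quantity CUMBIN computes, the reliability of a k-out-of-n system, reduces to a cumulative binomial sum. A modern sketch (in Python rather than the original C):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n independent components operate), each with
    reliability p: the sum of binomial terms from k to n -- the
    quantity CUMBIN computes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```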

  5. Third Congress on Information System Science and Technology

    DTIC Science & Technology

    1968-04-01

versions of the same compiler. The "fast compile-slow execute" and the "slow compile-fast execute" gimmick is the greatest hoax ever perpetrated on the... fast such natural language analysis and translation can be accomplished. If the fairly superficial syntactic analysis of a sentence which is...two kinds of computer: a fast computer with large immediate access and bulk memory for rear echelon and large installation employment, and a

  6. System Data Model (SDM) Source Code

    DTIC Science & Technology

    2012-08-23

CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi- CC=$(CROSS_COMPILE)gcc CXX=$(CROSS_COMPILE)g++ AR...and flags to pass to it LEX=flex LEXFLAGS=-B ## The parser generator to invoke and flags to pass to it YACC=bison YACCFLAGS...# Point to default PetaLinux root directory ifndef ROOTDIR ROOTDIR=$(PETALINUX)/software/petalinux-dist endif PATH:=$(PATH

  7. Integrated software health management for aerospace guidance, navigation, and control systems: A probabilistic reasoning approach

    NASA Astrophysics Data System (ADS)

    Mbaya, Timmy

Embedded aerospace systems have to perform safety- and mission-critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems rely substantially on complex software interfacing with hardware in real time; any fault in the software or hardware, or in their interaction, could have fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detecting and diagnosing software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet the memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an arithmetic circuit, which is used for on-line monitoring. This type of system monitoring provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent of the time-consuming and arduous task of foraging through a multitude of isolated, and often contradictory, diagnosis data. To demonstrate the relevance of ISWHM, modeling and reasoning are performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C system have been implemented. The ISWHM is then analyzed by injecting faults and examining its diagnoses.
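The compiled-inference idea can be illustrated with a tiny arithmetic-circuit evaluator. The nested-tuple encoding and the single-variable example circuit below are generic textbook forms, not the project's actual representation:

```python
def eval_circuit(node, inputs):
    """Evaluate an arithmetic circuit given as nested tuples:
    ('+', child, ...), ('*', child, ...), ('in', name), or a number.
    Compiling a Bayesian network yields such a circuit; evaluating it
    with evidence-indicator inputs gives a marginal probability."""
    if isinstance(node, (int, float)):
        return node
    if node[0] == "in":
        return inputs[node[1]]
    vals = [eval_circuit(child, inputs) for child in node[1:]]
    if node[0] == "+":
        return sum(vals)
    if node[0] == "*":
        result = 1.0
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown node type {node[0]}")

# One binary variable A with P(A=true) = 0.3: the indicators la/lna
# select which states are consistent with the observed evidence.
FAULT_CIRCUIT = ("+", ("*", ("in", "la"), 0.3), ("*", ("in", "lna"), 0.7))
```

Because evaluation is a fixed sequence of additions and multiplications with no search, it runs in predictable time, which is what makes the approach suitable for real-time embedded monitoring.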

  8. The NASA earth resources spectral information system: A data compilation

    NASA Technical Reports Server (NTRS)

    Leeman, V.; Earing, D.; Vincent, R. K.; Ladd, S.

    1971-01-01

The NASA Earth Resources Spectral Information System and the information contained therein are described. It contains an ordered, indexed compilation of spectra of natural targets in the optical region from 0.3 to 45.0 microns. The data compilation includes approximately 100 rock and mineral, 2600 vegetation, 1000 soil, and 60 water spectral reflectance, transmittance, and emittance curves. Most of the data have been categorized by subject, and the curves in those subject areas have been plotted on a single graph. Those categories with too few curves, and miscellaneous categories, have been plotted as single-curve graphs. Each graph, composite or single, is fully titled to indicate the curve source and is indexed by subject to facilitate user retrieval.

  9. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview is presented of the system hardware and software configurations, and implementation is discussed of subsystem functions.

  10. Store operations to maintain cache coherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  11. Compiled MPI: Cost-Effective Exascale Applications Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Quinlan, D; Lumsdaine, A

    2012-04-10

    The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application.
We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) A new set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.

  12. The Distribution of Cloud to Ground Lightning Strike Intensities and Associated Magnetic Inductance Fields Near the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan

    2005-01-01

    Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC) Cape Canaveral Air Force Station (CCAFS) area by the Cloud to Ground Lightning Surveillance System (CGLSS). The present study compiles ten years' worth of CGLSS data into a database of near strikes. Using shuttle launch platform LP39A as a convenient central point, all strikes recorded within a 20-mile radius for the period of record (POR) from January 1, 1993 to December 31, 2002 were included in the subset database. Histograms and cumulative probability curves are produced for both strike intensity (peak current, in kA) and the corresponding magnetic inductance fields (in A/m). Results for the full POR have application to launch operations lightning monitoring and post-strike test procedures.
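    The cumulative probability curves described amount to an empirical CDF over the strike database. A minimal sketch, using invented peak-current values rather than CGLSS data:

```python
# Illustrative only -- the CGLSS database is not reproduced here. This
# computes the fraction of strikes at or below each peak-current
# threshold, the shape of curve the study produces.
import bisect

def cumulative_curve(peak_currents_ka, thresholds_ka):
    """Empirical CDF: fraction of strikes <= each threshold (kA)."""
    srt = sorted(peak_currents_ka)
    n = len(srt)
    return [bisect.bisect_right(srt, t) / n for t in thresholds_ka]

sample = [8, 12, 15, 22, 30, 41, 55, 75]   # invented peak currents, kA
curve = cumulative_curve(sample, [10, 25, 50, 100])
```

    Reading the curve off at a threshold of interest gives the probability that a nearby strike does not exceed that peak current, which is how such results feed post-strike retest decisions.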

  13. An overview on integrated data system for archiving and sharing marine geology and geophysical data in Korea Institute of Ocean Science & Technology (KIOST)

    NASA Astrophysics Data System (ADS)

    Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa

    2016-04-01

    We established and have operated an integrated data system for managing, archiving and sharing marine geology and geophysical data around Korea produced from various research projects and programs in the Korea Institute of Ocean Science & Technology (KIOST). First of all, to keep the data system consistent through continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and converting, data quality control, data uploading, DB maintenance, etc. The system comprises two databases, ARCHIVE DB and GIS DB. ARCHIVE DB stores data in their original forms and formats as received from data providers, and GIS DB manages all compiled, processed and derived data and information for data services and GIS applications. Oracle 11g was adopted as the relational DBMS, and open-source GIS software was applied for GIS services: OpenLayers for the user interface, GeoServer as the application server, and PostGIS on PostgreSQL for the GIS database. To make geophysical data in SEG-Y format convenient to use, a viewer program was developed and embedded in the system. Users can search data through the GIS user interface and save the results as a report.

  14. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test and calibration circuits is compiled for the engineer and laboratory technician. Most of the circuits are inexpensive to assemble. Testing electronic devices and components, instrument and system test, calibration and reference circuits, and simple test procedures are presented.

  15. Survey on Safety Incentive Plans

    DOT National Transportation Integrated Search

    1951-10-01

    The Administrative Committee of the ATA Small Operations Division, with the assistance of the ATA Department of Personnel and Accident Prevention, is sponsoring a series of related compilations, based upon brief surveys among ATA member companies tha...

  16. Remote Monitoring, Inorganic Monitoring

    EPA Science Inventory

    This chapter provides an overview of applicability, amenability, and operating parameter ranges for various inorganic parameters; it also provides a compilation of existing and new online technologies for determining inorganic compounds in water samples. A wide vari...

  17. Risk management technique for liquefied natural gas facilities

    NASA Technical Reports Server (NTRS)

    Fedor, O. H.; Parsons, W. N.

    1975-01-01

    Checklists have been compiled for planning, design, construction, startup and debugging, and operation of liquefied natural gas facilities. Lists include references to pertinent safety regulations. Methods described are applicable to handling of other hazardous materials.

  18. Multinational Activities of Major U.S. Automotive Producers : Volume 1. Summary.

    DOT National Transportation Integrated Search

    1978-09-01

    The multinational activities of General Motors, Ford, Chrysler, and American Motors are documented and analyzed. The study contains a compilation of data related to multinational operations; specifically it addresses research, development, engineerin...

  19. FY94 CAG trip reports, CAG memos and other products: Volume 2. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-15

    The Yucca Mountain Site Characterization Project (YMP) of the US DOE is tasked with designing, constructing, and operating an Exploratory Studies Facility (ESF) at Yucca Mountain, Nevada. The purpose of the YMP is to provide detailed characterization of the Yucca Mountain site for the potential mined geologic repository for permanent disposal of high-level radioactive waste. Detailed characterization of properties of the site is to be conducted through a wide variety of short-term and long-term in-situ tests. Testing methods require the installation of a large number of test instruments and sensors with a variety of functions. These instruments produce analog and digital data that must be collected, processed, stored, and evaluated in an attempt to predict performance of the repository. The Integrated Data and Control System (IDCS) is envisioned as a distributed data acquisition system that electronically acquires and stores data from these test instruments. IDCS designers are responsible for designing and overseeing the procurement of the system, IDCS Operation and Maintenance operates and maintains the installed system, and the IDCS Data Manager is responsible for distribution of IDCS data to participants. This report is a compilation of trip reports, interoffice memos, and other memos relevant to Computer Applications Group, Inc., work on this project.

  20. Characterizing user requirements for future land observing satellites

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Cressy, P. J.; Schnetzler, C. C.; Salomonson, V. V.

    1981-01-01

    An objective procedure was developed for identifying probable sensor and mission characteristics for an operational satellite land observing system. Requirements were systematically compiled, quantified and scored by type of use, from surveys of federal, state, local and private communities. Incremental percent increases in expected value of data were estimated for critical system improvements. Comparisons with costs permitted selection of a probable sensor system, from a set of 11 options, with the following characteristics: 30-meter spatial resolution in 5 bands and 15 meters in 1 band, spectral bands nominally at Thematic Mapper (TM) bands 1 through 6 positions, and 2-day turnaround for receipt of imagery. Improvements are suggested for both the form of questions and the procedures for analysis of future surveys in order to provide a more quantitatively precise definition of sensor and mission requirements.

  1. Using XML Configuration-Driven Development to Create a Customizable Ground Data System

    NASA Technical Reports Server (NTRS)

    Nash, Brent; DeMore, Martha

    2009-01-01

    The Mission Data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse- and fine-grained control over all aspects of GDS operation.
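    The CDD pattern can be sketched briefly. The element and attribute names below are invented for illustration, not the real MPCS schema: the point is that behavior is read from XML at startup rather than compiled in.

```python
# Minimal configuration-driven sketch: a (hypothetical) GDS reads its
# operating parameters from an XML file instead of hard-coding them.
import xml.etree.ElementTree as ET

CONFIG_XML = """
<gds>
  <telemetry downlinkRateBps="2048" validateChecksums="true"/>
  <logging level="INFO"/>
</gds>
"""

def load_config(xml_text):
    root = ET.fromstring(xml_text)
    tm = root.find("telemetry")
    return {
        "downlink_rate_bps": int(tm.get("downlinkRateBps")),
        "validate_checksums": tm.get("validateChecksums") == "true",
        "log_level": root.find("logging").get("level"),
    }

cfg = load_config(CONFIG_XML)
```

    Changing a rate or a log level then means editing a configuration file and restarting, with no recompilation, which is the maintenance property CDD is after.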

  2. The VATES-Diamond as a Verifier's Best Friend

    NASA Astrophysics Data System (ADS)

    Glesner, Sabine; Bartels, Björn; Göthel, Thomas; Kleine, Moritz

    Within a model-based software engineering process it needs to be ensured that properties of abstract specifications are preserved by transformations down to executable code. This is even more important in the area of safety-critical real-time systems where additionally non-functional properties are crucial. In the VATES project, we develop formal methods for the construction and verification of embedded systems. We follow a novel approach that allows us to formally relate abstract process algebraic specifications to their implementation in a compiler intermediate representation. The idea is to extract a low-level process algebraic description from the intermediate code and to formally relate it to previously developed abstract specifications. We apply this approach to a case study from the area of real-time operating systems and show that this approach has the potential to seamlessly integrate modeling, implementation, transformation and verification stages of embedded system development.

  3. Molecular implementation of simple logic programs.

    PubMed

    Ran, Tom; Kaplan, Shai; Shapiro, Ehud

    2009-10-01

    Autonomous programmable computing devices made of biomolecules could interact with a biological environment and be used in future biological and medical applications. Biomolecular implementations of finite automata and logic gates have already been developed. Here, we report an autonomous programmable molecular system based on the manipulation of DNA strands that is capable of performing simple logical deductions. Using molecular representations of facts such as Man(Socrates) and rules such as Mortal(X) <-- Man(X) (Every Man is Mortal), the system can answer molecular queries such as Mortal(Socrates)? (Is Socrates Mortal?) and Mortal(X)? (Who is Mortal?). This biomolecular computing system compares favourably with previous approaches in terms of expressive power, performance and precision. A compiler translates facts, rules and queries into their molecular representations and subsequently operates a robotic system that assembles the logical deductions and delivers the result. This prototype is the first simple programming language with a molecular-scale implementation.
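    The deduction style described has a familiar software analogue (in code, not DNA strands). A minimal sketch with the same facts and rule, restricted for simplicity to single-antecedent rules:

```python
# In-silico analogue of the molecular deductions: facts like
# Man(Socrates) and rules like Mortal(X) <- Man(X), with a tiny query
# evaluator. Not the paper's molecular mechanism.

FACTS = {("Man", "Socrates"), ("Man", "Plato")}
RULES = [(("Mortal", "X"), ("Man", "X"))]   # (head, body), one antecedent

def query(pred, arg):
    """Answer Pred(arg)?; arg='X' asks for all bindings of X."""
    results = set()
    # Direct facts.
    for p, a in FACTS:
        if p == pred and (arg == "X" or a == arg):
            results.add(a)
    # One step of rule application: head <- body.
    for (h_pred, _), (b_pred, _) in RULES:
        if h_pred == pred:
            for p, a in FACTS:
                if p == b_pred and (arg == "X" or a == arg):
                    results.add(a)
    return results
```

    `query("Mortal", "Socrates")` returns a non-empty set, i.e. "yes", while `query("Mortal", "X")` enumerates every binding, mirroring the two query forms in the abstract.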

  4. A materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    The authors have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory West (ANL-WEST). Cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASE III and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4-5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory. Initial satisfaction is adequate, and software and hardware problems are minimal.

  5. i-SAIRAS '90; Proceedings of the International Symposium on Artificial Intelligence, Robotics and Automation in Space, Kobe, Japan, Nov. 18-20, 1990

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The present conference on artificial intelligence (AI), robotics, and automation in space encompasses robot systems, lunar and planetary robots, advanced processing, expert systems, knowledge bases, issues of operation and management, manipulator control, and on-orbit service. Specific issues addressed include fundamental research in AI at NASA, the FTS dexterous telerobot, a target-capture experiment by a free-flying robot, the NASA Planetary Rover Program, the Katydid system for compiling KEE applications to Ada, and speech recognition for robots. Also addressed are a knowledge base for real-time diagnosis, a pilot-in-the-loop simulation of an orbital docking maneuver, intelligent perturbation algorithms for space scheduling optimization, a fuzzy control method for a space manipulator system, hyperredundant manipulator applications, robotic servicing of EOS instruments, and a summary of astronaut inputs on automation and robotics for the Space Station Freedom.

  6. An expert system for prediction of aquatic toxicity of contaminants

    USGS Publications Warehouse

    Hickey, James P.; Aldridge, Andrew J.; Passino, Dora R. May; Frank, Anthony M.; Hushon, Judith M.

    1990-01-01

    The National Fisheries Research Center-Great Lakes has developed an interactive computer program in muLISP that runs on an IBM-compatible microcomputer and uses a linear solvation energy relationship (LSER) to predict acute toxicity to four representative aquatic species from the detailed structure of an organic molecule. Using the SMILES formalism for a chemical structure, the expert system identifies all structural components and uses a knowledge base of rules based on an LSER to generate four structure-related parameter values. A separate module then relates these values to toxicity. The system is designed for rapid screening of potential chemical hazards before laboratory or field investigations are conducted and can be operated by users with little toxicological background. This is the first expert system based on LSER, relying on the first comprehensive compilation of rules and values for the estimation of LSER parameters.

  7. Model compilation for real-time planning and diagnosis with feedback

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    This paper describes MEXEC, an implemented micro executive that compiles a device model that can have feedback into a structure for subsequent evaluation. This system computes both the most likely current device mode from n sets of sensor measurements and the n-1 step reconfiguration plan that is most likely to result in reaching a target mode - if such a plan exists. A user tunes the system by increasing n to improve system capability at the cost of real-time performance.

  8. Scalable and Accurate SMT-Based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-31

    CVC4 can be accessed from C, C++, Java, and OCaml, and provisions have been made to support other languages. CVC4 can be compiled and run on various flavors of Linux, Mac OS

  9. Program structure-based blocking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2017-09-26

    Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.
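    The grouping step described can be sketched abstractly. The instruction representation below is invented for illustration, not the patent's actual compiler IR: instructions inside the marked prefetching section are collected and partitioned into blocks that contain only section-local instructions.

```python
# Sketch of structure-based blocking: given a stream of (op, in_section)
# pairs produced by a hypothetical marking pass, group the in-section
# instructions into fixed-size blocks.

def block_prefetch_section(instructions, block_size=2):
    """instructions: list of (op, in_section: bool) tuples."""
    section = [op for op, inside in instructions if inside]
    # Only section-local instructions end up in the blocks.
    return [section[i:i + block_size]
            for i in range(0, len(section), block_size)]

blocks = block_prefetch_section(
    [("ld A", True), ("add", False), ("ld B", True), ("ld C", True)])
```

    The non-section instruction (`add`) is excluded from the blocks, matching the constraint that blocks of the prefetching section contain only instructions located in that section.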

  10. A Technique for Removing an Important Class of Trojan Horses from High-Order Languages

    DTIC Science & Technology

    1988-01-01

    Ken Thompson described a sophisticated Trojan horse attack on a compiler, one that is undetectable by any search of the compiler source code. The object of the compiler Trojan horse is to modify the semantics of the high-order language in a way that breaks the security of a trusted system generated...

  11. Store operations to maintain cache coherence

    DOEpatents

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.
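    The compile-time decision the patent describes can be sketched as a simple instruction-selection step. The analysis predicate and the instruction mnemonics below are invented for illustration; the point is that when the compiler can prove no coherence action is needed for a store, it emits the cheaper variant.

```python
# Hypothetical lowering pass: choose between a normal coherent store and
# a store-without-coherence-action instruction, based on a (stubbed)
# compile-time analysis result. Mnemonics "st" / "st.nc" are invented.

def select_store_instruction(no_coherence_action_needed):
    return "st.nc" if no_coherence_action_needed else "st"

def lower_stores(stores):
    """stores: list of (address, no_coherence_action_needed) pairs."""
    return [(addr, select_store_instruction(safe)) for addr, safe in stores]

lowered = lower_stores([(0x1000, True), (0x2000, False)])
```

    Only stores the analysis has proven safe get the no-action form; everything else keeps the normal coherent store, so coherence is preserved either way.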

  12. SUMMARY OF ELECTRIC SERVICE COSTS FOR TOTALLY AIR CONDITIONED SCHOOLS PREPARED FOR HOUSTON INDEPENDENT SCHOOL DISTRICT, MAY 31, 1967.

    ERIC Educational Resources Information Center

    WHITESIDES, M.M.

    THIS REPORT IS A COMPILATION OF DATA ON ELECTRIC AIR CONDITIONING COSTS, OPERATIONS AND MAINTENANCE. AIR CONDITIONING UNITS ARE COMPARED IN TERMS OF ELECTRIC VERSUS NON-ELECTRIC, AUTOMATIC VERSUS OPERATED, AIR COOLED VERSUS WATER COOLED, RECIPROCATING VERSUS CENTRIFUGAL COMPRESSORS, SPACE AND NOISE, REHEAT, MAINTENANCE AND ORIGINAL COST. DATA ARE…

  13. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital, single site) dynamical mean-field problems with arbitrary densities of states. Program summary: Program title: dmft. Catalogue identifier: AEIL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: ALPS LIBRARY LICENSE version 1.1. No. of lines in distributed program, including test data, etc.: 899 806. No. of bytes in distributed program, including test data, etc.: 32 153 916. Distribution format: tar.gz. Programming language: C++. Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher). RAM: 10 MB-1 GB. Classification: 7.3. External routines: ALPS [1], BLAS/LAPACK, HDF5. Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons.
They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.

  14. On search guide phrase compilation for recommending home medical products.

    PubMed

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
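    The semi-automatic step described (machine proposes a small candidate set, a nurse confirms) can be sketched with a simple relevance heuristic. The class names, keyword sets, and phrases below are invented examples; a plain token-overlap score stands in for the paper's information retrieval techniques.

```python
# Sketch: rank candidate classes for a new search guide phrase by token
# overlap with per-class keyword sets, so a human only reviews the top
# suggestions. Classes and keywords are invented for illustration.

CLASSES = {
    "mobility": {"walking", "gait", "wheelchair", "balance"},
    "bathing": {"bath", "shower", "washing"},
    "vision": {"sight", "reading", "glasses"},
}

def candidate_classes(phrase, top_k=2):
    tokens = set(phrase.lower().split())
    scored = [(len(tokens & keywords), name)
              for name, keywords in CLASSES.items()]
    scored = [item for item in scored if item[0] > 0]  # drop non-matches
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

hints = candidate_classes("difficulty walking and poor balance")
```

    The human-in-the-loop then assigns the phrase to classes from these hints, which is what keeps the compiled phrase set both comprehensive and correct as nursing knowledge changes.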

  15. Powertrain Test Procedure Development for EPA GHG Certification of Medium- and Heavy-Duty Engines and Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambon, Paul H.; Deter, Dean D.

    2016-07-01

    The goal of this project is to develop and evaluate powertrain test procedures that can accurately simulate real-world operating conditions, and to determine greenhouse gas (GHG) emissions of advanced medium- and heavy-duty engine and vehicle technologies. ORNL used their Vehicle System Integration Laboratory to evaluate test procedures on a stand-alone engine as well as two powertrains. Those components were subjected to various drive cycles and vehicle conditions to evaluate the validity of the results over a broad range of test conditions. Overall, more than 1000 tests were performed. The data are compiled and analyzed in this report.

  16. Analysis of wind profile measurements from an instrumented aircraft

    NASA Technical Reports Server (NTRS)

    Paige, Terry S.; Murphy, Patrick J.

    1990-01-01

    The results of an experimental program to determine the capability of measuring wind profiles in support of STS operations with an instrumented aircraft are discussed. These results are a compilation of the flight experiments and the statistical data comparing the quality of the aircraft measurements with quasi-simultaneous and quasi-spatially overlapping Jimsphere measurements. An instrumented aircraft was chosen as a potential alternative to the Jimsphere/radar system for expediting the wind profile calculation by virtue of the ability of an aircraft to traverse the altitudes of interest in roughly 10 minutes. The two aircraft which participated in the study were an F-104 and an ER-2.

  17. ART/Ada design project, phase 1. Task 3 report: Test plan

    NASA Technical Reports Server (NTRS)

    Allen, Bradley P.

    1988-01-01

    The plan for integrated testing and benchmarking of the Phase 1 Ada-based ESBT Design Research Project is described. The integration testing is divided into two phases: (1) the modules that do not rely on the Ada code generated by the Ada Generator are tested before the Ada Generator is implemented; and (2) all modules are integrated and tested with the Ada code generated by the Ada Generator. Its performance and size, as well as its functionality, are verified in this phase. The target platform is a DEC Ada compiler on VAX mini-computers and VAX stations running the VMS operating system.

  18. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
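    The access pattern described (a relation name plus up to five integer keys uniquely identifying each record) can be sketched in a few lines. This is a Python analogue for illustration only; DB90 itself is Fortran 90/95 with C file I/O.

```python
# Toy analogue of the DB90 keying scheme: each record is addressed by a
# relation name and a tuple of 1-5 integer key values.

class TinyDB90:
    def __init__(self):
        self._store = {}

    def put(self, relation, keys, record):
        assert 1 <= len(keys) <= 5, "up to 5 integer keys"
        self._store[(relation, tuple(keys))] = record

    def get(self, relation, keys):
        """Return the record for (relation, keys), or None if absent."""
        return self._store.get((relation, tuple(keys)))

db = TinyDB90()
db.put("loads", (1, 3), {"force_n": 120.5})
```

    Because keys are explicit tuples, a program can select records or walk them back in any desired order, which is the retrieval flexibility the routine advertises.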

  19. COMPILATION OF CONVERSION COEFFICIENTS FOR THE DOSE TO THE LENS OF THE EYE

    PubMed Central

    2017-01-01

    Abstract A compilation of fluence-to-absorbed dose conversion coefficients for the dose to the lens of the eye is presented. The compilation consists of both previously published data and newly calculated values: photon data (5 keV–50 MeV for both kerma approximation and full electron transport), electron data (10 keV–50 MeV), and positron data (1 keV–50 MeV) – neutron data will be published separately. Values are given for angles of incidence from 0° up to 90° in steps of 15° and for rotational irradiation. The data presented can be downloaded from this article's website and they are ready for use by Report Committee (RC) 26. This committee has been set up by the International Commission on Radiation Units and Measurements (ICRU) and is working on a ‘proposal for a redefinition of the operational quantities for external radiation exposure’. PMID:27542816

  20. Hazardous chemical tracking system (HAZ-TRAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramlette, J D; Ewart, S M; Jones, C E

    Westinghouse Idaho Nuclear Company, Inc. (WINCO) developed and implemented a computerized hazardous chemical tracking system, referred to as Haz-Trac, for use at the Idaho Chemical Processing Plant (ICPP). Haz-Trac is designed to provide a means to improve the accuracy and reliability of chemical information, which enhances the overall quality and safety of ICPP operations. The system tracks all chemicals and chemical components from the time they enter the ICPP until the chemical changes form, is used, or becomes a waste. The system runs on a Hewlett-Packard (HP) 3000 Series 70 computer. The system is written in COBOL and uses VIEW/3000, TurboIMAGE/DBMS 3000, OMNIDEX, and SPEEDWARE. The HP 3000 may be accessed throughout the ICPP, and from remote locations, using data communication lines. Haz-Trac went into production in October 1989. Currently, over 1910 chemicals and chemical components are tracked on the system. More than 2500 personnel hours were saved during the first six months of operation. Cost savings have been realized by reducing the time needed to collect and compile reporting information, identifying and disposing of unneeded chemicals, and eliminating duplicate inventories. Haz-Trac maintains information required by the Superfund Amendments and Reauthorization Act (SARA), the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA), and the Occupational Safety and Health Administration (OSHA).

  1. The 50 Constellation Priority Sites

    NASA Technical Reports Server (NTRS)

    Noble, S.; Joosten, K.; Eppler, D.; Gruener, J.; Mendell, W.; French, R.; Plescia, J.; Spudis, P.; Wargo, M.; Robinson, M.; hide

    2009-01-01

    The Constellation program (CxP) has developed a list of 50 sites of interest on the Moon which will be targeted by the LRO narrow angle camera. The list has also been provided to the M team to supplement their targeting list. This list does not represent a "site selection" process; rather, the goal was to find "representative" sites and terrains to understand the range of possible surface conditions for human lunar exploration, to aid engineering design and operational planning. The list compilers drew heavily on past site-selection work (e.g., Geoscience and a Lunar Base Workshop - 1988, Site Selection Strategy for a Lunar Outpost - 1990, Exploration Systems Architecture Study (ESAS) - 2005). Considerations included scientific, resource utilization, and operational merits, and a desire to span lunar terrain types. The targets have been organized into two "tiers" of 25 sites each to provide a relative priority ranking in the event of mutual interference. A LEAG SAT (special action team) was established to validate and recommend modifications to the list. This SAT was chaired by Dr. Paul Lucey. They provided their final results to CxP in May. Dr. Wendell Mendell will organize an on-going analysis of the data as they come down to ensure data quality and determine if and when a site has sufficient data to be retired from the list. The list was compiled using the best available data; however, it is understood that, with the flood of new lunar data, minor modifications or adjustments may be required.

  2. Ada compiler validation summary report. Certificate number: 891116W1.10191. Intel Corporation, iPSC/2 Ada, Release 1.1, iPSC/2 parallel supercomputer, system resource manager host and iPSC/2 parallel supercomputer, CX-1 nodes target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-11-16

    This VSR documents the results of the validation testing performed on an Ada compiler. Testing was carried out for the following purposes: to attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; to attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; and to determine that the implementation-dependent behavior is allowed by the Ada Standard. Testing of this compiler was conducted by SofTech, Inc. under the direction of the AVF according to procedures established by the Ada Joint Program Office and administered by the Ada Validation Organization (AVO). On-site testing was completed 16 November 1989 at Aloha, OR.

  3. Use of major surgery in south India: A retrospective audit of hospital claim data from a large, community health insurance program.

    PubMed

    Shaikh, Maaz; Woodward, Mark; Rahimi, Kazem; Patel, Anushka; Rath, Santosh; MacMahon, Stephen; Jha, Vivekanand

    2015-05-01

    Information on the use of major surgery in India is scarce. In this study we aimed to bridge this gap by auditing hospital claims from the Rajiv Aarogyasri Community Health Insurance Scheme, which provides access to free hospital care through state-funded insurance to 68 million beneficiaries, an estimated 81% of the population of the states of Telangana and Andhra Pradesh. Publicly available deidentified hospital claim data for all surgery procedures conducted between mid-2008 and mid-2012 were compiled across all 23 districts in Telangana and Andhra Pradesh. A total of 677,332 operative admissions (80% at private hospitals) were recorded at an annual rate of 259 per 100,000 beneficiaries, with male subjects accounting for 56% of admissions. Injury was the most common cause for operative admission (27%), with operative correction of long bone fractures being the most common procedure (20%) identified in the audit. Diseases of the digestive (16%), genitourinary (12%), and musculoskeletal (10%) systems were other leading causes of operative admissions. Most hospital bed-days were used by admissions for injuries (31%) and diseases of the digestive (17%) and musculoskeletal (11%) systems, costing 19%, 13%, and 11% of reimbursement. Operations on the circulatory system (8%) accounted for 21% of reimbursements. The annual per capita cost of operative claims was US$1.48. The use of surgery by an insured population in India continued to be low despite access to financing comparable with that of higher-spending countries, highlighting the need for strategies, beyond traditional health financing, that prioritize improvement in access, delivery, and use of operative care. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. 76 FR 4703 - Statement of Organization, Functions, and Delegations of Authority

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ... regarding medical loss ratio standards and the insurance premium rate review process, and issues premium... Oriented Plan program. Collects, compiles and maintains comparative pricing data for an Internet portal... benefit from the new health insurance system. Collects, compiles and maintains comparative pricing data...

  5. Computer program for the IBM personal computer which searches for approximate matches to short oligonucleotide sequences in long target DNA sequences.

    PubMed Central

    Myers, E W; Mount, D W

    1986-01-01

    We describe a program which may be used to find approximate matches to a short predefined DNA sequence in a larger target DNA sequence. The program predicts the usefulness of specific DNA probes and sequencing primers and finds nearly identical sequences that might represent the same regulatory signal. The program is written in the C programming language and will run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The program has been integrated into an existing software package for the IBM personal computer (see article by Mount and Conrad, this volume). Some examples of its use are given. PMID:3753785
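
    The core technique here, finding approximate occurrences of a short probe anywhere inside a longer target, is classic semi-global dynamic programming. The sketch below is ours, written in Python rather than the article's C, and is not the original program:

```python
def approximate_matches(probe, target, max_errors):
    """Return (end_position, distance) pairs where `probe` matches a
    substring of `target` with at most `max_errors` edits. The match may
    begin at any position in `target` (semi-global edit distance)."""
    m = len(probe)
    prev = list(range(m + 1))  # DP column for the empty target prefix
    hits = []
    for j, ch in enumerate(target, start=1):
        curr = [0]  # cost 0: a match may start at any target position
        for i in range(1, m + 1):
            cost = 0 if probe[i - 1] == ch else 1
            curr.append(min(prev[i] + 1,          # gap in target
                            curr[i - 1] + 1,      # gap in probe
                            prev[i - 1] + cost))  # match / substitution
        if curr[m] <= max_errors:
            hits.append((j, curr[m]))  # full probe consumed within budget
        prev = curr
    return hits

# Exact hit: the probe ends at 1-based position 9 of the target.
print(approximate_matches("GATTACA", "xxGATTACAxx", 0))  # [(9, 0)]
```

    Raising `max_errors` reports near-identical sites as well, which is how such a program can flag probes or primers that bind at unintended, slightly mismatched locations.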

  6. Structural integrity of wind tunnel wooden fan blades

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Wingate, Robert T.; Rooker, James R.; Mort, Kenneth W.; Zager, Harold E.

    1991-01-01

    Information is presented which was compiled by the NASA Inter-Center Committee on Structural Integrity of Wooden Fan Blades and is intended for use as a guide in design, fabrication, evaluation, and assurance of fan systems using wooden blades. A risk assessment approach for existing NASA wind tunnels with wooden fan blades is provided. Also, state of the art information is provided for wooden fan blade design, drive system considerations, inspection and monitoring methods, and fan blade repair. Proposed research and development activities are discussed, and recommendations are provided which are aimed at future wooden fan blade design activities and safely maintaining existing NASA wind tunnel fan blades. Information is presented that will be of value to wooden fan blade designers, fabricators, inspectors, and wind tunnel operations personnel.

  7. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
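
    The distribution directives mentioned above tell the compiler how array elements map onto distributed memory modules. A small sketch of the index-to-processor mappings behind HPF-style BLOCK and CYCLIC distributions, with hypothetical helper names of our own, not from the paper:

```python
def block_owner(i, n, p):
    """Processor owning element i of an n-element array distributed
    BLOCK-wise over p processors (illustrative only)."""
    block = -(-n // p)  # ceiling division gives the block size
    return i // block

def cyclic_owner(i, p):
    """Processor owning element i under a CYCLIC distribution."""
    return i % p

# An 8-element array on 4 processors:
owners_block = [block_owner(i, 8, 4) for i in range(8)]   # [0, 0, 1, 1, 2, 2, 3, 3]
owners_cyclic = [cyclic_owner(i, 4) for i in range(8)]    # [0, 1, 2, 3, 0, 1, 2, 3]
```

    BLOCK keeps neighboring elements together (good locality for stencils), while CYCLIC spreads them round-robin (good load balance for triangular workloads); the ALIGN directive then co-locates related arrays under whichever mapping is chosen.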

  8. The jABC Approach to Rigorous Collaborative Development of SCM Applications

    NASA Astrophysics Data System (ADS)

    Hörmann, Martina; Margaria, Tiziana; Mender, Thomas; Nagel, Ralf; Steffen, Bernhard; Trinh, Hong

    Our approach to the model-driven collaborative design of IKEA's P3 Delivery Management Process uses the jABC [9] for model-driven mediation and choreography to complement a RUP-based (Rational Unified Process) development process. jABC is a framework for service development based on Lightweight Process Coordination. Users (product developers and system/software designers) easily develop services and applications by composing reusable building blocks into (flow-)graph structures that can be animated, analyzed, simulated, verified, executed, and compiled. This way of handling the collaborative design of complex embedded systems has proven to be effective and adequate for the cooperation of non-programmers and non-technical people, which is the focus of this contribution, and it is now being rolled out in operative practice.

  9. Compilation of Abstracts of Theses Submitted by Candidates for Degrees.

    DTIC Science & Technology

    1984-06-01

    Management System for the TI-59 Programmable Calculator Kersh, T. B. Signal Processor Interface 65 CPT, USA Simulation of the AN/SPY-1A Radar...DESIGN AND IMPLEMENTATION OF A BASIC CROSS-COMPILER AND VIRTUAL MEMORY MANAGEMENT SYSTEM FOR THE TI-59 PROGRAMMABLE CALCULATOR Mark R. Kindl Captain...Academy, 1974 The instruction set of the TI-59 Programmable Calculator bears a close similarity to that of an assembler. Though most of the calculator

  10. Thermodynamic data for modeling acid mine drainage problems: compilation and estimation of data for selected soluble iron-sulfate minerals

    USGS Publications Warehouse

    Hemingway, Bruce S.; Seal, Robert R.; Chou, I-Ming

    2002-01-01

    Enthalpy of formation, Gibbs energy of formation, and entropy values have been compiled from the literature for the hydrated ferrous sulfate minerals melanterite, rozenite, and szomolnokite, and a variety of other hydrated sulfate compounds. On the basis of this compilation, it appears that there is no evidence for an excess enthalpy of mixing for sulfate-H2O systems, except for the first H2O molecule of crystallization. The enthalpy and Gibbs energy of formation of each H2O molecule of crystallization, except the first, in the iron(II) sulfate-H2O system are -295.15 and -238.0 kJ·mol-1, respectively. The absence of an excess enthalpy of mixing is used as the basis for estimating thermodynamic values for a variety of ferrous, ferric, and mixed-valence sulfate salts of relevance to acid-mine drainage systems.
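
    The additive estimation scheme the abstract describes can be written down directly: beyond the first H2O of crystallization, each added water contributes a fixed increment to the formation values. The monohydrate inputs in the example below are placeholders, not measured data:

```python
# Per-H2O increments from the compilation (beyond the first water):
DH_PER_H2O = -295.15  # kJ/mol, enthalpy of formation increment
DG_PER_H2O = -238.0   # kJ/mol, Gibbs energy of formation increment

def estimate_hydrate(dh_mono, dg_mono, n_waters):
    """Estimate (enthalpy, Gibbs energy) of formation for an n-hydrate
    from monohydrate values; n_waters counts H2O molecules (n >= 1)."""
    extra = n_waters - 1  # the first H2O is already in the monohydrate value
    return (dh_mono + extra * DH_PER_H2O, dg_mono + extra * DG_PER_H2O)

# Hypothetical monohydrate inputs (placeholders, not measured data):
dh4, dg4 = estimate_hydrate(-1000.0, -900.0, 4)  # a tetrahydrate estimate
```

    This is exactly the "no excess enthalpy of mixing" assumption: hydration beyond the first water is treated as linear in the number of H2O molecules.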

  11. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandemir, Mahmut Taylan; Choudhary, Alok; Thakur, Rajeev

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O-intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report, compared to the previous report, are IOGenie and SSD/NVM-specific optimizations.

  12. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran, using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.
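
    The decoupling idea, compiler translations targeting a single XOMP layer that forwards to whichever runtime (e.g. GOMP or Omni) is configured, can be caricatured in a few lines of Python. All class and method names below are hypothetical stand-ins, and the "parallel" execution is a serial stand-in:

```python
class GOMPBackend:
    """Stand-in for a GOMP-style runtime (hypothetical entry point)."""
    def parallel_start(self, fn, nthreads):
        # A real runtime spawns threads; this sketch runs the body serially.
        return [fn(tid) for tid in range(nthreads)]

class OmniBackend:
    """Stand-in for an Omni-style runtime (hypothetical entry point)."""
    def run_parallel(self, fn, nthreads):
        return [fn(tid) for tid in range(nthreads)]

class XOMP:
    """Common layer: compiler-generated code calls XOMP only, so the same
    translation works unchanged over either backend."""
    def __init__(self, backend):
        self.backend = backend

    def parallel(self, fn, nthreads=4):
        if hasattr(self.backend, "parallel_start"):  # GOMP-style interface
            return self.backend.parallel_start(fn, nthreads)
        return self.backend.run_parallel(fn, nthreads)  # Omni-style interface

# The same "translated" call runs on either runtime:
gomp_result = XOMP(GOMPBackend()).parallel(lambda tid: tid * tid, nthreads=3)
omni_result = XOMP(OmniBackend()).parallel(lambda tid: tid * tid, nthreads=3)
```

    The point is the shape of the design, not the dispatch mechanics: generated code depends only on the XOMP interface, so swapping runtime libraries requires no change to the compiler's translations.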

  13. An evaluation of SAO sites for laser operations

    NASA Technical Reports Server (NTRS)

    Thorp, J. M.; Bush, M. A.; Pearlman, M. R.

    1974-01-01

    Operational criteria are provided for the selection of laser tracking sites for the Earth and Ocean Physics Applications Program. A compilation of data is given concerning the effect of weather conditions on laser and Baker-Nunn camera operations. These data have been gathered from the Smithsonian astrophysical observing station sites occupied since the inception of the satellite tracking program. Also given is a brief description of each site, including its characteristic weather conditions, comments on communications and logistics, and a summary of the terms of agreement under which the station is or was operated.

  14. How to Build MCNP 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Jeffrey S.

    This presentation describes how to build MCNP 6.2. MCNP® 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.

  15. Open source software to control Bioflo bioreactors.

    PubMed

    Burdge, David A; Libourel, Igor G L

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design, cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW.

  16. Open Source Software to Control Bioflo Bioreactors

    PubMed Central

    Burdge, David A.; Libourel, Igor G. L.

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design, cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32- and 64-bit Windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW. PMID:24667828

  17. CLIPS/Ada: An Ada-based tool for building expert systems

    NASA Technical Reports Server (NTRS)

    White, W. A.

    1990-01-01

    CLIPS/Ada is a production system language and a development environment. It is functionally equivalent to the CLIPS tool. CLIPS/Ada was developed in order to provide a means of incorporating expert system technology into projects where the use of the Ada language had been mandated. A secondary purpose was to glean information about the Ada language and its compilers, specifically whether the language and compilers were mature enough to support AI applications. The CLIPS/Ada tool is coded entirely in Ada and is designed to be used by Ada systems that require expert reasoning.

  18. Ada (Trade Name) Compiler Validation Summary Report: IBM Corporation. IBM Development System for the Ada Language System, Version 1.1.0, IBM 4381 under VM/SP CMS, Release 3.6.

    DTIC Science & Technology

    1988-05-19

    System for the Ada Language System, Version 1.1.0, International Business Machines Corporation, Wright-Patterson AFB. IBM 4381 under VM/SP CMS... AVF Control Number: AVF-VSR-82.1087, 87-03-10-TEL. Ada® COMPILER VALIDATION SUMMARY REPORT: International Business Machines... Organization (AVO). On-site testing was conducted from 18 May 1987 through 19 May 1987 at International Business Machines Corporation, San Diego CA.

  19. 1978 Status Report on Aviation and Space Related High School Courses

    ERIC Educational Resources Information Center

    Journal of Aerospace Education, 1978

    1978-01-01

    Presents a national compilation of statistical data pertaining to secondary level aviation and aerospace education for the 1977-78 school year. Data include trends and patterns of course structure, design, and operation in table form. (SL)

  20. American transit safety award : award winning safety program

    DOT National Transportation Integrated Search

    1950-01-01

    Prepared ca. 1950. As the result of the widespread interest in safety evident among companies at meetings of the ATA Small Operations Division, the Division's Administrative Committee considered it desirable to put together a compilation of safety pr...

  1. High efficiency motor selection handbook

    NASA Astrophysics Data System (ADS)

    McCoy, Gilbert A.; Litman, Todd; Douglass, John G.

    1990-10-01

    Substantial reductions in energy and operational costs can be achieved through the use of energy-efficient electric motors. A handbook was compiled to help industry identify opportunities for cost-effective application of these motors. It covers the economic and operational factors to be considered when motor purchase decisions are being made. Its audience includes plant managers, plant engineers, and others interested in energy management or preventative maintenance programs.

  2. Comparative Unit Cost and Wage Rate Report on Maintenance and Operation of Physical Plants of Universities and Colleges.

    ERIC Educational Resources Information Center

    Association of Physical Plant Administrators of Universities and Colleges, Washington, DC.

    This report presents the results of a questionnaire from 161 members of the Association of Physical Plant Administrators of Universities and Colleges (APPA). The purpose of this report is to compile and present comparative cost data for the maintenance and operation of physical plants of universities and colleges for the fiscal year 1972-73. The…

  3. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  4. Compilation of 1989 annual reports of the Navy ELF Communications System Ecological Monitoring Program. Volume 2. tabs C-F. Annual progress report, Jan-Dec 89

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-08-01

    This is the eighth compilation of annual reports for the Navy's ELF Communications Systems Ecological Monitoring Program. The reports document the progress of eight studies performed during 1989 near the Naval Radio Transmitting Facility -- Republic, Michigan. The purpose of the monitoring is to determine whether electromagnetic fields produced by the ELF Communications System will affect resident biota or their ecological relationships. Studies covered in this volume: Soil Amoeba; Arthropoda and Earthworms; Pollinating Insects; Small Mammals and Nesting Birds.

  5. A Cost Analysis of a Community Health Worker Program in Rural Vermont

    PubMed Central

    Wang, Guijing; Ruggles, Laural; Dunet, Diane O.

    2015-01-01

    Studies have shown that community health workers (CHWs) can improve the effectiveness of health care systems; however, little has been reported about CHW program costs. We examined the costs of a program staffed by three CHWs associated with a small, rural hospital in Vermont. We used a standardized data collection tool to compile cost information from administrative data and personal interviews. We analyzed personnel and operational costs from October 2010 to September 2011. The estimated total program cost was $420,348, comprising $281,063 (67%) for personnel and $139,285 (33%) for operations. CHW salaries and office space were the major cost components. Our cost analysis approach may be adapted by others to conduct cost analyses of their CHW programs. Our cost estimates can help inform future economic studies of CHW programs and resource allocation decisions. PMID:23794072

  6. C++QEDv2 Milestone 10: A C++/Python application-programming framework for simulating open quantum dynamics

    NASA Astrophysics Data System (ADS)

    Sandner, Raimar; Vukics, András

    2014-09-01

    The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python. Catalogue identifier: AELU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 492422 No. of bytes in distributed program, including test data, etc.: 8070987 Distribution format: tar.gz Programming language: C++/Python. Computer: i386-i686, x86 64. Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2. 
External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy. Catalogue identifier of previous version: AELU_v1_0. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381. Does the new version supersede the previous version?: Yes. Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5]. Solution method: Master equation, Monte Carlo wave-function method. Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system. Summary of revisions: We give an example for a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2]. Restrictions: Total dimensionality of the system: Master equation, a few thousand; Monte Carlo wave-function trajectory, several million. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. We use several C++11 features, which limits the range of supported compilers (g++ 4.7, clang++ 3.1). Documentation: http://cppqed.sourceforge.net/. Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks. References: [1] Entry point: http://cppqed.sf.net [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comput. Phys. Comm. 183 (2012) 1381. [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. D 44 (2007) 585. [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993. [5] J. Dalibard, Y. Castin, K. Molmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.

  7. A comparative study of programming languages for next-generation astrodynamics systems

    NASA Astrophysics Data System (ADS)

    Eichhorn, Helge; Cano, Juan Luis; McLean, Frazer; Anderl, Reiner

    2018-03-01

    Due to the computationally intensive nature of astrodynamics tasks, astrodynamicists have relied on compiled programming languages such as Fortran for the development of astrodynamics software. Interpreted languages such as Python, on the other hand, offer higher flexibility and development speed, thereby increasing the productivity of the programmer. While interpreted languages are generally slower than compiled languages, recent developments such as just-in-time (JIT) compilers or transpilers have been able to close this speed gap significantly. Another important factor in the usefulness of a programming language is its wider ecosystem, which consists of the available open-source packages and development tools such as integrated development environments or debuggers. This study compares three compiled languages and three interpreted languages, which were selected based on their popularity within the scientific programming community and their technical merit. The three compiled candidate languages are Fortran, C++, and Java. Python, Matlab, and Julia were selected as the interpreted candidate languages. All six languages are assessed and compared to each other based on their features, performance, and ease of use through the implementation of idiomatic solutions to classical astrodynamics problems. We show that compiled languages still provide the best performance for astrodynamics applications, but JIT-compiled dynamic languages have reached a competitive level of speed and offer an attractive compromise between numerical performance and programmer productivity.

  8. The HACMS program: using formal methods to eliminate exploitable bugs

    PubMed Central

    Launchbury, John; Richards, Raymond

    2017-01-01

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue ‘Verified trustworthy software systems’. PMID:28871050

  9. Modified Laser and Thermos cell calculations on microcomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.

    1987-01-01

    In the course of designing and operating nuclear reactors, many fuel pin cell calculations are required to obtain homogenized cell cross sections as a function of burnup. In the interest of convenience and cost, it would be very desirable to be able to make such calculations on microcomputers. In addition, such a microcomputer code would be very helpful for educational course work in reactor computations. To establish the feasibility of making detailed cell calculations on a microcomputer, a mainframe cell code was compiled and run on a microcomputer. The computer code Laser, originally written in Fortran IV for the IBM-7090 class of mainframe computers, is a cylindrical, one-dimensional, multigroup lattice cell program that includes burnup. It is based on the MUFT code for epithermal and fast group calculations, and Thermos for the thermal calculations. There are 50 fast and epithermal groups and 35 thermal groups. Resonances are calculated assuming a homogeneous system and then corrected for self-shielding, Dancoff, and Doppler by self-shielding factors. The Laser code was converted to run on a microcomputer. In addition, the Thermos portion of Laser was extracted and compiled separately to provide a stand-alone thermal code.

  10. Engineering Amorphous Systems, Using Global-to-Local Compilation

    NASA Astrophysics Data System (ADS)

    Nagpal, Radhika

    Emerging technologies are making it possible to assemble systems that incorporate myriads of information-processing units at almost no cost: smart materials, self-assembling structures, vast sensor networks, pervasive computing. How does one engineer robust, prespecified global behavior from the local interactions of immense numbers of unreliable parts? We discuss organizing principles and programming methodologies that have emerged from Amorphous Computing research and that allow us to compile a specification of global behavior into a robust program for local behavior.

  11. Thermal surveillance of volcanoes of the Cascade Range and Iceland utilizing ERTS DCP systems and imagery

    NASA Technical Reports Server (NTRS)

    Friedman, J. D. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Significant results of the thermal surveillance of volcanoes experiment during 1972 included the design, construction, emplacement, and successful operation at volcanic sites in the Cascade Range, North America and on Surtsey, Iceland, of automated thermistor arrays which transmit ground and fumarole temperatures via the ERTS-1 data communication system to Goddard Space Flight Center. Temperature, radiance, and anomalous heat flow variations are being plotted by a U.S. Geological Survey IBM 360/65 computer program to show daily fluctuations at each of the sites. Results are being compiled in conjunction with NASA and USGS aircraft infrared survey data to provide thermal energy yield estimates during the current repose period of several Cascade Range volcanic systems. ERTS-1 MSS images have provided new information on the extent of structural elements controlling thermal emission at Lassen Volcanic National Park.

  12. HEPLIB `91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, database systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, database systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  14. Aircraft signal definition for flight safety system monitoring system

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael (Inventor); Omen, Debi Van (Inventor)

    2003-01-01

    A system and method compares combinations of vehicle variable values against known combinations of potentially dangerous vehicle input signal values. Alarms and error messages are selectively generated based on such comparisons. An aircraft signal definition is provided to enable definition and monitoring of sets of aircraft input signals to customize such signals for different aircraft. The input signals are compared against known combinations of potentially dangerous values by operational software and hardware of a monitoring function. The aircraft signal definition is created using a text editor or custom application. A compiler receives the aircraft signal definition to generate a binary file that comprises the definition of all the input signals used by the monitoring function. The binary file also contains logic that specifies how the inputs are to be interpreted. The file is then loaded into the monitor function, where it is validated and used to continuously monitor the condition of the aircraft.

  15. ESO telbib: Linking In and Reaching Out

    NASA Astrophysics Data System (ADS)

    Grothkopf, U.; Meakins, S.

    2015-04-01

    Measuring an observatory's research output is an integral part of its science operations. Like many other observatories, ESO tracks scholarly papers that use observational data from ESO facilities and uses state-of-the-art tools to create, maintain, and further develop the Telescope Bibliography database (telbib). While telbib started out as a stand-alone tool mostly used to compile lists of papers, it has by now developed into a multi-faceted, interlinked system. The core of the telbib database is links between scientific papers and observational data generated by the La Silla Paranal Observatory residing in the ESO archive. This functionality has also been deployed for ALMA data. In addition, telbib reaches out to several other systems, including ESO press releases, the NASA ADS Abstract Service, databases at the CDS Strasbourg, and impact scores at Altmetric.com. We illustrate these features to show how the interconnected telbib system enhances the content of the database as well as the user experience.

  16. Operator procedure verification with a rapidly reconfigurable simulator

    NASA Technical Reports Server (NTRS)

    Iwasaki, Yumi; Engelmore, Robert; Fehr, Gary; Fikes, Richard

    1994-01-01

    Generating and testing procedures for controlling spacecraft subsystems composed of electro-mechanical and computationally realized elements has become a very difficult task. Before a spacecraft can be flown, mission controllers must envision a great variety of situations the flight crew may encounter during a mission and carefully construct procedures for operating the spacecraft in each possible situation. If, despite extensive pre-compilation of control procedures, an unforeseen situation arises during a mission, the mission controller must generate a new procedure for the flight crew in a limited amount of time. In such situations, the mission controller cannot systematically consider and test alternative procedures against models of the system being controlled, because the available simulator is too large and complex to reconfigure, run, and analyze quickly. A rapidly reconfigurable simulation environment that can execute a control procedure and show its effects on system behavior would greatly facilitate generation and testing of control procedures both before and during a mission. The How Things Work project at Stanford University has developed a system called DME (Device Modeling Environment) for modeling and simulating the behavior of electromechanical devices. DME was designed to facilitate model formulation and behavior simulation of device behavior including both continuous and discrete phenomena. We are currently extending DME for use in testing operator procedures, and we have built a knowledge base for modeling the Reaction Control System (RCS) of the space shuttle as a testbed. We believe that DME can facilitate design of operator procedures by providing mission controllers with a simulation environment that meets all these requirements.

  17. Fiber optic video monitoring system for remote CT/MR scanners clinically accepted

    NASA Astrophysics Data System (ADS)

    Tecotzky, Raymond H.; Bazzill, Todd M.; Eldredge, Sandra L.; Tagawa, James; Sayre, James W.

    1992-07-01

    With the proliferation of CT and MR scanners, radiologists must travel to distant scanners to review images before their patients can be released. We designed a fiber-optic broadband video system to transmit images from seven scanner consoles to fourteen remote monitoring stations in real time. This system has been used clinically by radiologists for over one year. We designed and conducted a user survey to categorize the levels of system use by section (Chest, GI, GU, Bone, Neuro, Peds, etc.), to measure operational utilization and acceptance of the system in the clinical environment, to clarify the system's importance as a clinical tool for saving radiologists' travel time to distant CT scanners, and to assess the system's performance and limitations as a diagnostic tool. The study was administered directly to radiologists using a printed survey form. The survey's compiled data show a high percentage of system usage by a wide spectrum of radiologists. Clearly, this system has been accepted into the clinical environment as a highly valued diagnostic tool in terms of time savings and functional flexibility.

  18. Energy Use and Other Comparisons Between Diesel and Gasoline Trucks

    DOT National Transportation Integrated Search

    1977-02-01

    This report presents fuel consumption and other data on comparable diesel and gasoline trucks. The data was compiled from actual, operational records of the Maine Department of Transportation for trucks of about 24,000 pounds gross vehicle weight and...

  19. State Governance Action Report, 2007

    ERIC Educational Resources Information Center

    Association of Governing Boards of Universities and Colleges, 2007

    2007-01-01

    This paper presents the State Governance Action Report for 2007. Compiled in this report are state policy developments, including legislation, commissions, and studies, affecting the structure, responsibilities, and operations of public higher education governing boards and institutionally related foundations. Governance and governance-related…

  20. 47 CFR 36.375 - Published directory listing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Published directory listing. 36.375 Section 36... Customer Operations Expenses § 36.375 Published directory listing. (a) This classification includes expenses for preparing or purchasing, compiling and disseminating directory listings. (b) Published...

  1. Functional Requirements Document for HALE UAS Operations in the NAS: Step 1. Version 3

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The purpose of this Functional Requirements Document (FRD) is to compile the functional requirements needed to achieve the Access 5 Vision of "operating High Altitude, Long Endurance (HALE) Unmanned Aircraft Systems (UAS) routinely, safely, and reliably in the national airspace system (NAS)" for Step 1. These functional requirements could support the development of a minimum set of policies, procedures, and standards by the Federal Aviation Administration (FAA) and various standards organizations. It is envisioned that this comprehensive body of work will enable the FAA to establish and approve regulations to govern safe operation of UAS in the NAS on a routine or daily "file and fly" basis. The approach used to derive the functional requirements found within this FRD was to decompose the operational requirements and objectives identified within the Access 5 Concept of Operations (CONOPS) into the functions needed to routinely and safely operate a HALE UAS in the NAS. As a result, four major functional areas evolved to enable routine and safe UAS operations on an on-demand basis in the NAS. These four major functions are: Aviate, Navigate, Communicate, and Avoid Hazards. All of the functional requirements within this document are directly traceable to one of these four major functions. Some functions, however, are traceable to several, or even all, of these four major functions. These cross-cutting functional requirements support the "Command/Control" function as well as the "Manage Contingencies" function. The requirements associated with these high-level functions and all of their supporting low-level functions are addressed in subsequent sections of this document.

  2. Marine Coatings Performance for Different Ship Areas. Volume 1

    DTIC Science & Technology

    1979-07-01

    Operating Service Conditions; 2.3.3 Survey of the Major Coating Manufacturers for Coatings Criteria; 2.4 Compilation of Service Histories; 2.5 Analysis of Compiled Service Histories; 2.5.1 Background Information; 2.5.2 Analytical Objective; 2.5.3 Comparative Analysis; 2.6 Laboratory Tests; 2.6.1 Discussion; Service Histories Questionnaire.

  3. 48 CFR 2804.902 - Contract information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Contract information. 2804..., and that, to the best of such official's knowledge and belief it is compiled from bureau records... requirements of 26 U.S.C. 6050M and that it is to the best of my knowledge and belief, a compilation of bureau...

  4. 48 CFR 2804.902 - Contract information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Contract information. 2804..., and that, to the best of such official's knowledge and belief it is compiled from bureau records... requirements of 26 U.S.C. 6050M and that it is to the best of my knowledge and belief, a compilation of bureau...

  5. 48 CFR 2804.902 - Contract information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Contract information. 2804..., and that, to the best of such official's knowledge and belief it is compiled from bureau records... requirements of 26 U.S.C. 6050M and that it is to the best of my knowledge and belief, a compilation of bureau...

  6. 48 CFR 2804.902 - Contract information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Contract information. 2804..., and that, to the best of such official's knowledge and belief it is compiled from bureau records... requirements of 26 U.S.C. 6050M and that it is to the best of my knowledge and belief, a compilation of bureau...

  7. Automatic Compilation from High-Level Biologically-Oriented Programming Language to Genetic Regulatory Networks

    PubMed Central

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    Background The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. Methodology/Principal Findings To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~50%) and latency of the optimized engineered gene networks. Conclusions/Significance Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems. PMID:21850228

  8. Automatic compilation from high-level biologically-oriented programming language to genetic regulatory networks.

    PubMed

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~ 50%) and latency of the optimized engineered gene networks. Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.

  9. World commercial aircraft accidents: 1st edition, 1946--1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, C.Y.

    1992-02-01

    This report is a compilation of all accidents world-wide involving aircraft in commercial service which resulted in the loss of the airframe, one or more fatalities, or both. This information has been gathered in order to present a complete inventory of commercial aircraft accidents. Events involving military action, sabotage, terrorist bombings, hijackings, suicides, and industrial ground accidents are included within this list. This report is organized into six chapters. The first chapter is the introduction. The second chapter contains the compilation of accidents involving world commercial jet aircraft from 1952 to 1991. The third chapter presents a compilation of accidents involving world commercial turboprop aircraft from 1952 to 1991. The fourth chapter presents a compilation of accidents involving world commercial pistonprop aircraft with four or more engines from 1946 to 1991. Each accident compilation or database in chapters two, three, and four is presented in chronological order. Each accident is presented with information in the following categories: date of accident; airline or operator and its flight number (if known); type of flight; type of aircraft and model; aircraft registration number; construction number/manufacturer's serial number; aircraft damage resulting from the accident; accident flight phase; accident location; number of fatalities; number of occupants; references used to compile the information; and finally the cause, remarks, or brief description of the accident. The fifth chapter presents a list of all commercial aircraft accidents for all aircraft types with 100 or more fatalities, in order of decreasing number of fatalities. Chapter six presents the commercial aircraft accidents for all aircraft types by flight phase. Future editions of this report will have additional follow-on chapters presenting other studies still in preparation at the time this edition was prepared.

  10. Army Energy and Water Reporting System Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deprez, Peggy C.; Giardinelli, Michael J.; Burke, John S.

    There are many areas of desired improvement for the Army Energy and Water Reporting System. The purpose of the system is to serve as a data repository for collecting information from energy managers, which is then compiled into an annual energy report. This document summarizes reported shortcomings of the system and provides several alternative approaches for improving application usability and adding functionality. The U.S. Army has been using the Army Energy and Water Reporting System (AEWRS) for many years to collect and compile energy data from installations, facilitating compliance with Federal and Department of Defense energy management program reporting requirements. In this analysis, staff from Pacific Northwest National Laboratory found that substantial opportunities exist to expand AEWRS functions to better assist the Army in effectively managing energy programs. Army leadership must decide whether it wants to invest in expanding AEWRS capabilities as a web-based, enterprise-wide tool for improving the Army Energy and Water Management Program, or simply to maintain a bottom-up reporting tool. This report looks at improving system functionality from an operational perspective and increasing user-friendliness, and also considers the system as a means of increasing program effectiveness. The authors of this report recommend making it easier for energy managers to input accurate data as the top priority for improving AEWRS, with improved reporting as the next major focus. The AEWRS user interface is dated and not user-friendly, and a new system is recommended. While relatively minor improvements could make the existing system easier to use, significant improvements will be achieved with a user-friendly interface, a new architecture, and a design that permits scalability and reliability. An expanded data set would naturally require additional requirements gathering and a focus on integrating with other existing data sources, thus minimizing manually entered data.

  11. AUTO_DERIV: Tool for automatic differentiation of a Fortran code

    NASA Astrophysics Data System (ADS)

    Stamatiadis, S.; Farantos, S. C.

    2010-10-01

    AUTO_DERIV is a module comprised of a set of FORTRAN 95 procedures which can be used to calculate the first and second partial derivatives (mixed or not) of any continuous function with many independent variables. The mathematical function should be expressed as one or more FORTRAN 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the FORTRAN 95 language is extensively used to define the differentiation rules. Proper (standard complying) handling of floating-point exceptions is provided by using the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in FORTRAN 2003). New version program summaryProgram title: AUTO_DERIV Catalogue identifier: ADLS_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2963 No. of bytes in distributed program, including test data, etc.: 10 314 Distribution format: tar.gz Programming language: Fortran 95 + (optionally) TR-15580 (Floating-point exception handling) Computer: all platforms with a Fortran 95 compiler Operating system: Linux, Windows, MacOS Classification: 4.12, 6.2 Catalogue identifier of previous version: ADLS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343 Does the new version supersede the previous version?: Yes Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluate them by a computer, automatically and to machine precision, is via user-defined types and operator overloading. 
AUTO_DERIV is a Fortran 95 implementation of them, designed to evaluate the first and second derivatives of a function of many variables. Solution method: The mathematical rules for differentiation of sums, products, quotients, and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to implement the differentiation rules. Reasons for new version: The new version supports Fortran 95, handles floating-point exceptions properly, and is faster due to internal reorganization. All discovered bugs are fixed. Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, there was a major internal reorganization of the code, resulting in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed: the code did not correctly handle the overloading of ** in a**λ when a = 0. The case of division by zero and the discontinuity of the function at the requested point are indicated by standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively). If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (probably through certain flags) to detect them. Restrictions: None imposed by the program. There are certain limitations that may appear mostly due to the specific implementation chosen in the user code. 
They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The common restrictions of available memory and the capabilities of the compiler are the same as in the original version. Additional comments: The program has been tested using the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95. Running time: The running time depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than the evaluation of the analytical ('by hand') function value and derivatives (if they are available). References: S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
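The user-defined-type and operator-overloading technique AUTO_DERIV is built on can be sketched in miniature with dual numbers. The Python class below is our own illustration of the idea (first derivatives only, two operators); it is not part of the Fortran package.

```python
class Dual:
    """Minimal dual number for forward-mode automatic differentiation:
    carries a value and its first derivative through arithmetic,
    mirroring how AUTO_DERIV's overloaded Fortran type works."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):          # f(x) = x*x + 3*x, so f'(x) = 2*x + 3
    return x * x + 3 * x

x = Dual(2.0, 1.0)  # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.der)  # 10.0 7.0
```

The unchanged user function `f` is evaluated on the overloaded type, and the derivative falls out of the arithmetic; AUTO_DERIV applies the same mechanism in Fortran 95, extended to second derivatives and the full set of intrinsic functions.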

  12. A Software Defined Radio Based Airplane Communication Navigation Simulation System

    NASA Astrophysics Data System (ADS)

    He, L.; Zhong, H. T.; Song, D.

    2018-01-01

Radio communication and navigation systems play an important role in ensuring the safety of a civil airplane in flight. Function and performance should be tested before these systems are installed on board. Conventionally, a separate transmitter and receiver are needed for each system, so the full set of equipment occupies considerable space at high cost. In this paper, software defined radio technology is applied to design a common-hardware communication and navigation ground simulation system that can host multiple airplane systems with different operating frequencies, such as HF, VHF, VOR, ILS, ADF, etc. We use a broadband analog front-end hardware platform, the universal software radio peripheral (USRP), to transmit/receive signals in different frequency bands. The software is built in LabVIEW on a computer that interfaces with the USRP through Ethernet and is responsible for communication and navigation signal processing and system control. An integrated testing system is established to perform functional tests and performance verification of the simulated signals, demonstrating the feasibility of our design. The system is a low-cost, common hardware platform for multiple airplane systems and provides a helpful reference for integrated avionics design.

  13. VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS

    NASA Technical Reports Server (NTRS)

    Rizzi, S. A.

    1994-01-01

The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data are sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both the x and y directions. The time series data are sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of free space is recommended.
The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech International, Inc.; Wellesley, MA) which includes a TMS320C30 DSP processor, 256Kb zero wait state SRAM, and a daughter board with 8Mb one wait state DRAM. Please contact COSMIC for additional information on required hardware and software. In order to compile the provided VPI source code, a Microsoft C version 6.0 compiler, a Texas Instruments' TMS320C30 assembly language compiler, and the Spirit 30 run time libraries are required. A math co-processor is highly recommended. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. VPI was developed in 1991-1992.
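The per-position reduction the abstract describes, collapsing a sampled velocity time series to a single rms value, is simple enough to sketch. The following is an illustrative Python fragment, not the VPI C/DSP source; the function name `rms` is an assumption:

```python
# Sketch of the rms reduction the VPI DSP board applies at each
# scan position: a sampled velocity time series is collapsed to a
# single root-mean-square value (illustrative, not the VPI source).
import math

def rms(samples):
    """Root-mean-square of a sampled time series."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A pure sine of amplitude A sampled over whole periods has rms A/sqrt(2)
N, A = 1000, 2.0
series = [A * math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
print(rms(series))
```

The "full field" map is then just this scalar evaluated at each of the up-to-256×256 mirror positions.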

  14. Research and Technology Operating Plan Summary, Fiscal Year 1972 Research and Technology Program

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The NASA Research and Technology program for FY 1972 is presented. It is a compilation of the summary portions of each of the RTOPs (Research and Technology Operating Plan) used for management review and control of research currently in progress throughout NASA. The RTOP Summary is designed to facilitate communication and coordination among concerned technical personnel in government, in industry, and in universities.

  15. The Role of PWC in Declaring a Diver Fit

    DTIC Science & Technology

    2001-06-01

Conditions [Operational medical issues related to hypobaric and hyperbaric conditions] To order the complete compilation report...Approved for public release, distribution unlimited. This paper is part of the following report: TITLE: Operational Medical Issues in Hypo- and Hyperbaric ...on "Operational Medical Issues in Hypo- and Hyperbaric Conditions", held in Toronto, Canada, 16-19 October 2000, and published in RTO MP-062.

  16. Mountain-Plains Handbook: The Design and Operation of a Residential Family Based Education Program. Appendix. Supplement Four to Volume Three. Measurement and Evaluation: The Research Services Division.

    ERIC Educational Resources Information Center

    Coyle, David A.; And Others

    One of five supplements which accompany chapter 3 of "Mountain-Plains Handbook: The Design and Operation of a Residential, Family Oriented Career Education Model" (CE 014 630), this document contains a master listing of all Mountain-Plains curriculum, compiled by job title, course, unit, and Learning activity package (LAPS) and arranged…

  17. Research and Technology Operating Plan. Summary: Fiscal year 1976 research and technology program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A compilation of the summary portions of each of the Research and Technology Operating Plans (RTOP) used for management review and control of research currently in progress throughout NASA was presented. The document is arranged in five sections. The first one contains citations and abstracts of the RTOP. This is followed by four indexes: subject, technical monitor, responsible NASA organization, and RTOP number.

  18. Dynamic Modeling and Control of Nuclear Reactors Coupled to Closed-Loop Brayton Cycle Systems using SIMULINK{sup TM}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, Steven A.; Sanchez, Travis

    2005-02-06

The operation of space reactors for both in-space and planetary operations will require unprecedented levels of autonomy and control. Development of these autonomous control systems will require dynamic system models, effective control methodologies, and autonomous control logic. This paper briefly describes the results of reactor, power-conversion, and control models that are implemented in SIMULINK{sup TM} (Simulink, 2004). SIMULINK{sup TM} is a development environment packaged with MatLab{sup TM} (MatLab, 2004) that allows the creation of dynamic state flow models. Simulation modules for liquid metal reactors, gas cooled reactors, and electrically heated systems have been developed, as have modules for dynamic power-conversion components such as ducting, heat exchangers, turbines, compressors, permanent magnet alternators, and load resistors. Various control modules for the reactor and the power-conversion shaft speed have also been developed and simulated. The modules are compiled into libraries and can be easily connected in different ways to explore the operational space of a number of potential reactor and power-conversion system configurations and control approaches. The modularity and variability of these SIMULINK{sup TM} models provide a way to simulate a variety of complete power generation systems. To date, Liquid Metal Reactors (LMR), Gas Cooled Reactors (GCR), and electric heaters coupled to gas-dynamics systems and thermoelectric systems have been simulated and are used to understand the behavior of these systems. Current efforts are focused on improving the fidelity of the existing SIMULINK{sup TM} modules, extending them to include isotopic heaters, heat pipes, and Stirling engines, and on developing state flow logic to provide intelligent autonomy. The simulation code is called RPC-SIM (Reactor Power and Control-Simulator).

  19. A Path to Planetary Protection Requirements for Human Exploration: A Literature Review and Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Johnson, James E.; Conley, Cassie; Siegel, Bette

    2015-01-01

    As systems, technologies, and plans for the human exploration of Mars and other destinations beyond low Earth orbit begin to coalesce, it is imperative that frequent and early consideration is given to how planetary protection practices and policy will be upheld. While the development of formal planetary protection requirements for future human space systems and operations may still be a few years from fruition, guidance to appropriately influence mission and system design will be needed soon to avoid costly design and operational changes. The path to constructing such requirements is a journey that espouses key systems engineering practices of understanding shared goals, objectives and concerns, identifying key stakeholders, and iterating a draft requirement set to gain community consensus. This paper traces through each of these practices, beginning with a literature review of nearly three decades of publications addressing planetary protection concerns with respect to human exploration. Key goals, objectives and concerns, particularly with respect to notional requirements, required studies and research, and technology development needs have been compiled and categorized to provide a current 'state of knowledge'. This information, combined with the identification of key stakeholders in upholding planetary protection concerns for human missions, has yielded a draft requirement set that might feed future iteration among space system designers, exploration scientists, and the mission operations community. Combining the information collected with a proposed forward path will hopefully yield a mutually agreeable set of timely, verifiable, and practical requirements for human space exploration that will uphold international commitment to planetary protection.

  20. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

SOL is a computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of a computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.

  1. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  2. ROS Hexapod

    NASA Technical Reports Server (NTRS)

    Davis, Kirsch; Bankieris, Derek

    2016-01-01

As an intern project for NASA Johnson Space Center (JSC), my job was to familiarize myself with and operate the Robot Operating System (ROS). The project converted existing software assets into ROS nodes, enabling a robotic Hexapod to be functional and controlled by an existing PlayStation 3 (PS3) controller. When the internship started, the existing control algorithms and libraries in the Hexapod C++ source code had no ROS capabilities, but that changed over the course of my internship. Converting the C++ code made the existing software compatible with ROS, and the Hexapod is now controlled using an existing PS3 controller. Furthermore, my job was to design ROS messages and script programs that enabled assets to participate in the ROS ecosystem by subscribing to and publishing messages. The software source code is written in C++ and organized in directories. Testing of software assets included compiling the code within the Linux environment using a terminal, which ran the code from a directory. Several problems occurred while compiling, and at first the code would not compile, so the source code was modified until it compiled cleanly. Once the code compiled and ran, it was uploaded to the Hexapod, which was then controlled by a PS3 controller. As a result, the Hexapod is fully functional, compatible with ROS, and operated using the PlayStation 3 controller. In addition, an open-source Arduino board and its IDE will be integrated into the ecosystem, with circuitry designed on a breadboard to add additional behavior with push buttons, potentiometers, and other simple electrical elements. Other Arduino projects will include a GPS module and a digital clock that will run off 22 satellites, using a GPS signal and an internal patch antenna to communicate with the satellites and show accurate real time.
In addition, this internship experience has motivated me to learn to code more efficiently and effectively, and to write, subscribe to, and publish my own source code in different programming languages. This familiarity with software programming will enhance my skills in the electrical engineering field. My experience here at JSC with the Simulation and Graphics Branch (ER7) has pushed me to become more proficient at coding, to increase my knowledge of software programming, and to enhance my skills in ROS. I will take this knowledge back to my university and apply it in a school project that will use source code and ROS to work on the PR2 robot, which is controlled by ROS software. The skills learned here will be used to subscribe to and publish ROS messages on the PR2 robot, which will be controlled by an existing PS3 controller by adapting the C++ code to subscribe to and publish ROS messages. Overall, the skills obtained here will not be lost, but increased.

  3. Photovoltaic performance models - A report card

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

Models for the analysis of photovoltaic (PV) system designs, implementation policies, and economic performance have proliferated, keeping pace with rapid changes in basic PV technology and with the extensive empirical data compiled on such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem elements as well as their system elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  4. Marshall Space Flight Center 1990 annual chronology of events

    NASA Technical Reports Server (NTRS)

    Wright, Michael

    1991-01-01

    A chronological listing is provided of the major events for the Marshall Space Flight Center for the calendar year 1990. The MSFC Historian, Management Operations Office, compiled the chronology from various sources and from supplemental information provided by the major MSFC organizations.

  5. U.S. Transit Track Restraining Rail. Volume II : Guidelines.

    DOT National Transportation Integrated Search

    1981-12-01

    This report covers a study of restraining rails in transit track, which is part of the current research program of UMTA and was initiated: (1) to assist in the analysis, design, and maintenance and operation of transit track; (2) to compile guideline...

  6. 45 CFR 612.7 - Exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... discretion in the matter; or establishes particular criteria for withholding or refers to particular types of...) Trade secrets, processes, operations, style of work, or apparatus; or the confidential statistical data... compiled to evaluate or adjudicate the suitability of candidates for employment, and the eligibility of...

  7. Marshall Space Flight Center 1989 annual chronology of events

    NASA Technical Reports Server (NTRS)

    Wright, Michael

    1990-01-01

    A chronological listing of the major events for the Marshall Space Flight Center for the calendar year 1989 is provided. The MSFC Historian, Management Operations Office, compiled the chronology from various sources and from supplemental information provided by the major MSFC organizations.

  8. Student Success Center Toolkit

    ERIC Educational Resources Information Center

    Jobs For the Future, 2014

    2014-01-01

    "Student Success Center Toolkit" is a compilation of materials organized to assist Student Success Center directors as they staff, launch, operate, and sustain Centers. The toolkit features materials created and used by existing Centers, such as staffing and budgeting templates, launch materials, sample meeting agendas, and fundraising…

  9. PLATO IV Accountancy Index.

    ERIC Educational Resources Information Center

    Pondy, Dorothy, Comp.

    The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…

  10. Research and Technology Objectives and Plans (RTOP), summary

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A compilation of summary portions of each of the Research and Technology Operating Plans (RTOPS) used for management review and control of research currently in progress throughout NASA is presented. Subject, technical monitor, responsible NASA organization, and RTOP number indexes are included.

  11. TankSIM: A Cryogenic Tank Performance Prediction Program

    NASA Technical Reports Server (NTRS)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Moder, J. P.; Schnell, A. R.; Sutherlin, S. G.

    2015-01-01

Developed for predicting the behavior of cryogenic liquids inside propellant tanks under various environmental and operating conditions. Provides a multi-node analysis of pressurization, ullage venting, and thermodynamic vent system (TVS) pressure control using an axial jet or spray bar TVS. Allows the user to combine several different phases to predict the liquid behavior for the entire flight mission timeline or part of it. Is a NASA in-house code, based on Fortran 90/95 and the Intel Visual Fortran compiler, but can be used on other platforms (Unix-Linux, Compaq Visual Fortran, etc.). The latest release, Version 7, from December 2014, includes a detailed User's Manual. Uses several REFPROP subroutines for calculating fluid properties.

  12. Aeronautical engineering: A continuing bibliography with indexes (supplement 294)

    NASA Technical Reports Server (NTRS)

    1993-01-01

This issue of Aeronautical Engineering - A Continuing Bibliography with Indexes lists 590 reports, journal articles, and other documents recently announced in the NASA STI Database. The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment, and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles. The bibliographic series is compiled through the cooperative efforts of the American Institute of Aeronautics and Astronautics (AIAA) and the National Aeronautics and Space Administration (NASA). Seven indexes are included: subject, personal author, corporate source, foreign technology, contract number, report number, and accession number.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Kim, Jungwon; Vetter, Jeffrey S

This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.

  14. An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation

    DOE PAGES

    Nutaro, James

    2014-11-03

In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system built within the adevs simulation package supports models with state-events and time-events, including differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
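As a rough illustration of the scheduling a discrete event run-time must perform, the toy Python loop below pops time-stamped events from a priority queue in time order up to a horizon. It is not the adevs or OpenModelica API; the names and event labels are illustrative only:

```python
# Minimal discrete event loop with time-stamped events, sketching
# the kind of ordering an adevs-style run-time enforces
# (illustrative; not the adevs/OpenModelica interface).
import heapq

def simulate(events, t_end):
    """Process (time, label) events in time order up to t_end."""
    heapq.heapify(events)           # priority queue keyed on time
    log = []
    while events and events[0][0] <= t_end:
        t, label = heapq.heappop(events)
        log.append((t, label))
    return log

# Events scheduled out of order are still processed chronologically
log = simulate([(2.0, "time-event"), (0.5, "state-event"), (1.0, "output")], 1.5)
print(log)
```

A real back-end additionally detects state-events by root-finding on zero-crossing functions between scheduled times, which is where the high-index differential-algebraic machinery enters.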

  15. TLB for Free: In-Cache Address Translation for a Multiprocessor Workstation

    DTIC Science & Technology

    1985-05-13

LISZT: Franz LISP self-compilation, 0.6Mb, 145. VAXIMA: algebraic expert system (a derivative of MACSYMA), 1.7Mb, 414. CSZOK: two VAXIMA streams... The first four were gathered on a VAX running UNIX with an address and instruction tracer [Henr84]. LISZT is the Franz LISP compiler compiling itself... [A table of per-benchmark statistics (Collisions, PTE Misses) for LISZT, VAXIMA, and CS100K appears here in the original but is not recoverable from this excerpt.]

  16. Optimizing Water Use and Hydropower Production in Operational Reservoir System Scheduling with RiverWare

    NASA Astrophysics Data System (ADS)

    Magee, T. M.; Zagona, E. A.

    2017-12-01

Practical operational optimization of multipurpose reservoir systems is challenging for several reasons. Each purpose has its own constraints, which may conflict with those of other purposes. While hydropower generation typically provides the bulk of the revenue, it is also among the lowest-priority purposes. Each river system has important details that are specific to the location, such as hydrology, reservoir storage capacity, physical limitations, bottlenecks, and the continuing evolution of operational policy. In addition, reservoir operations models include discrete, nonlinear, and nonconvex physical processes and if-then operating policies. Typically, the forecast horizon for scheduling needs to be extended far into the future to avoid near-term (e.g., a few hours or a day) scheduling decisions that result in undesirable future states; this makes the computational effort much larger than may be expected. Put together, these challenges lead to large, customized mathematical optimization problems which must be solved efficiently to be of practical use. In addition, the solution process must be robust in an operational setting. We discuss a unique modeling approach in RiverWare that meets these challenges in an operational setting. The approach combines a Preemptive Linear Goal Programming optimization model to handle prioritized policies, complemented by preprocessing and postprocessing with Rulebased Simulation to improve the solution with regard to nonlinearities, discrete issues, and if-then logic. An interactive policy language with a graphical user interface allows modelers to customize both the optimization and the simulation based on the unique aspects of the policy for their system, while the routine physical aspects of operations are modeled automatically. The modeler is aided by a set of compiled predefined functions and functions shared by other modelers.
We illustrate the success of the approach with examples from daily use at the Tennessee Valley Authority, the Bonneville Power Administration, and public utility districts on the Mid-Columbia River. We discuss recent innovations to improve solution quality, robustness, and performance for these systems. We conclude with new modeling challenges to extend the modeling approach to other uses.
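The preemptive (lexicographic) ordering of goals described above can be sketched in a few lines. The toy Python fragment below enumerates a discrete candidate set rather than solving a linear program, so it is illustrative only; RiverWare itself uses a Preemptive Linear Goal Program, and the function and goal names here are assumptions:

```python
# Toy lexicographic (preemptive) goal selection: each goal is
# optimized only within the set of candidates that are already
# optimal for all higher-priority goals (illustrative sketch,
# not the RiverWare solver).
def preemptive_select(candidates, goals):
    """Filter candidates by goal 1, then goal 2, and so on."""
    for g in goals:
        best = min(g(c) for c in candidates)
        candidates = [c for c in candidates if g(c) == best]
    return candidates

# candidate schedule = (flood-rule violations, power shortfall in MWh)
cands = [(0, 5), (0, 2), (1, 0)]
# Priority 1: no flood violations; priority 2: minimize power shortfall
result = preemptive_select(cands, [lambda c: c[0], lambda c: c[1]])
print(result)
```

Note how the schedule with zero power shortfall is rejected because it violates the higher-priority flood goal, exactly the trade-off structure a preemptive formulation enforces.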

  17. Ada Compiler Validation Summary Report: Certificate Number: 880318W1. 09041, International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under VM/HPO, Host and Target

    DTIC Science & Technology

    1988-03-28

    International Business Machines Corporation IBM Development System for the Ada Language, Version 2.1.0 IBM 4381 under VM/HPO, host and target DTIC...necessary and identify by block number) International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM...in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record of the object code of the

  18. A Fortran 90 Hartree-Fock program for one-dimensional periodic π-conjugated systems using Pariser-Parr-Pople model

    NASA Astrophysics Data System (ADS)

    Kondayya, Gundra; Shukla, Alok

    2012-03-01

The Pariser-Parr-Pople (P-P-P) model Hamiltonian is employed frequently to study the electronic structure and optical properties of π-conjugated systems. In this paper we describe a Fortran 90 computer program which uses the P-P-P model Hamiltonian to solve the Hartree-Fock (HF) equation for infinitely long, one-dimensional, periodic, π-electron systems. The code is capable of computing the band structure, as well as the linear optical absorption spectrum, by using the tight-binding and the HF methods. Furthermore, using our program the user can solve the HF equation in the presence of a finite external electric field, thereby allowing the simulation of gated systems. We apply our code to compute various properties of polymers such as trans-polyacetylene, poly-para-phenylene, and armchair and zigzag graphene nanoribbons, in the infinite length limit. Program summary: Program title: ppp_bulk.x Catalogue identifier: AEKW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 464 No. of bytes in distributed program, including test data, etc.: 2 046 933 Distribution format: tar.gz Programming language: Fortran 90 Computer: PCs and workstations Operating system: Linux. The code was developed and tested on various recent 64-bit versions of Fedora, including Fedora 14 (kernel version 2.6.35.12-90). Classification: 7.3 External routines: This program needs to link with LAPACK/BLAS libraries compiled with the same compiler as the program. For the Intel Fortran Compiler we used the ACML library version 4.4.0, while for the gfortran compiler we used the libraries supplied with the Fedora distribution.
Nature of problem: The electronic structure of one-dimensional periodic π-conjugated systems is an intense area of research at present because of the tremendous interest in the physics of conjugated polymers and graphene nanoribbons. The computer program described in this paper provides an efficient way of solving the Hartree-Fock equations for such systems within the P-P-P model. In addition to the Bloch orbitals, band structure, and density of states, the program can also compute quantities such as the linear absorption spectrum and the electro-absorption spectrum of these systems. Solution method: For a one-dimensional periodic π-conjugated system lying in the xy-plane, the single-particle Bloch orbitals are expressed as linear combinations of the p-orbitals of individual atoms. Then, using the various parameters defining the P-P-P Hamiltonian, the Hartree-Fock equations are set up as a matrix eigenvalue problem in k-space, and their solutions are obtained in a self-consistent manner, using the iterative diagonalization technique at several k points. The band structure and the corresponding Bloch orbitals thus obtained are used to perform a variety of calculations, such as the density of states, linear optical absorption spectrum, electro-absorption spectrum, etc. Running time: Most of the examples provided take only a few seconds to run. For a large system, however, depending on the system size, the run time may be a few minutes to a few hours.
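At the tight-binding level that serves as the starting point for such k-space calculations, the band of an infinite one-dimensional chain with nearest-neighbor hopping t has the closed form E(k) = ε + 2t cos(ka). The short Python sketch below samples this dispersion over the first Brillouin zone; the parameter values are illustrative and are not taken from ppp_bulk.x:

```python
# Tight-binding band of an infinite 1D chain: E(k) = eps + 2t cos(ka),
# the zeroth-order input to a k-space Hartree-Fock scheme
# (parameter values here are illustrative, not from ppp_bulk.x).
import math

def band(k, eps=0.0, t=-2.4, a=1.0):
    """Dispersion of a 1D chain with on-site energy eps and hopping t."""
    return eps + 2.0 * t * math.cos(k * a)

# Sample the band over the first Brillouin zone (-pi/a, pi/a]
nk = 8
ks = [-math.pi + 2.0 * math.pi * (i + 1) / nk for i in range(nk)]
energies = [band(k) for k in ks]
bandwidth = max(energies) - min(energies)   # equals 4|t| for a 1D chain
print(bandwidth)
```

In the full program, the diagonal entries of the k-space Fock matrix replace this analytic form and are iterated to self-consistency, but the Brillouin-zone sampling loop has the same shape.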

  19. National health accounts data from 1996 to 2010: a systematic review

    PubMed Central

    Bui, Anthony L; Lavado, Rouselle F; Johnson, Elizabeth K; Brooks, Benjamin PC; Freeman, Michael K; Graves, Casey M; Haakenstad, Annie; Shoemaker, Benjamin; Hanlon, Michael

    2015-01-01

    Abstract Objective To collect, compile and evaluate publicly available national health accounts (NHA) reports produced worldwide between 1996 and 2010. Methods We downloaded country-generated NHA reports from the World Health Organization global health expenditure database and the Organisation for Economic Co-operation and Development (OECD) StatExtract website. We also obtained reports from Abt Associates, through contacts in individual countries and through an online search. We compiled data in the four main types used in these reports: (i) financing source; (ii) financing agent; (iii) health function; and (iv) health provider. We combined and adjusted data to conform with OECD’s first edition of A system of health accounts manual, (2000). Findings We identified 872 NHA reports from 117 countries containing a total of 2936 matrices for the four data types. Most countries did not provide complete health expenditure data: only 252 of the 872 reports contained data in all four types. Thirty-eight countries reported an average not-specified-by-kind value greater than 20% for all data types and years. Some countries reported substantial year-on-year changes in both the level and composition of health expenditure that were probably produced by data-generation processes. All study data are publicly available at http://vizhub.healthdata.org/nha/. Conclusion Data from NHA reports on health expenditure are often incomplete and, in some cases, of questionable quality. Better data would help finance ministries allocate resources to health systems, assist health ministries in allocating capital within the health sector and enable researchers to make accurate comparisons between health systems. PMID:26478614

  20. Space shuttle low cost/risk avionics study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    All work breakdown structure elements containing any avionics related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipments to MIL quality standards, basing redundancy on cost effective analysis, minimizing software complexity and reducing cross strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.

  1. Analytical solutions for one-, two-, and three-dimensional solute transport in ground-water systems with uniform flow

    USGS Publications Warehouse

    Wexler, Eliezer J.

    1992-01-01

    Analytical solutions to the advective-dispersive solute-transport equation are useful in predicting the fate of solutes in ground water. Analytical solutions compiled from available literature or derived by the author are presented for a variety of boundary condition types and solute-source configurations in one-, two-, and three-dimensional systems having uniform ground-water flow. A set of user-oriented computer programs was created to evaluate these solutions and to display the results in tabular and computer-graphics format. These programs incorporate many features that enhance their accuracy, ease of use, and versatility. Documentation for the programs describes their operation and required input data, and presents the results of sample problems. Derivations of selected solutions, source codes for the computer programs, and samples of program input and output also are included.
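One of the best-known solutions of this kind, for one-dimensional transport from a continuous source, is the Ogata-Banks solution; a hedged sketch follows (the parameter values are illustrative assumptions, not taken from the report):

```python
import math

def ogata_banks(x, t, v=1.0, D=0.5, c0=1.0):
    """Ogata-Banks solution for 1D advection-dispersion in a
    semi-infinite column with boundary condition c(0, t) = c0:
    c/c0 = 1/2 [ erfc((x - v t)/(2 sqrt(D t)))
               + exp(v x / D) erfc((x + v t)/(2 sqrt(D t))) ].
    v = seepage velocity, D = dispersion coefficient (assumed units)."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))

# Concentration near the advective front x = v t is close to c0/2.
print(ogata_banks(x=10.0, t=10.0))
```

At x = 0 the expression reduces to c0 exactly, and far downstream of the front it decays toward zero, as expected physically.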

  2. The transformation of aerodynamic stability derivatives by symbolic mathematical computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1975-01-01

    The formulation of mathematical models of aeronautical systems for simulation or other purposes, involves the transformation of aerodynamic stability derivatives. It is shown that these derivatives transform like the components of a second order tensor having one index of covariance and one index of contravariance. Moreover, due to the equivalence of covariant and contravariant transformations in orthogonal Cartesian systems of coordinates, the transformations can be treated as doubly covariant or doubly contravariant, if this simplifies the formulation. It is shown that the tensor properties of these derivatives can be used to facilitate their transformation by symbolic mathematical computation, and the use of digital computers equipped with formula manipulation compilers. When the tensor transformations are mechanised in the manner described, man-hours are saved and the errors to which human operators are prone can be avoided.
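The doubly covariant transformation rule exploited above can be sketched numerically (a hedged illustration with made-up derivative values and a 2x2 rotation; the paper itself performs the transformation symbolically):

```python
import math

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transform(C, A):
    """Doubly covariant transformation C' = A C A^T of a second-order
    tensor under an orthogonal direction-cosine matrix A."""
    At = [list(row) for row in zip(*A)]
    return matmul(matmul(A, C), At)

# Hypothetical 2x2 block of stability derivatives, rotated through
# an angle alpha between axis systems (illustrative values only).
alpha = math.radians(10.0)
A = [[math.cos(alpha), math.sin(alpha)],
     [-math.sin(alpha), math.cos(alpha)]]
C = [[-0.3, 0.1],
     [0.05, -1.2]]
Cp = transform(C, A)
```

Because A is orthogonal, invariants such as the trace of C are preserved by the transformation, which provides a quick correctness check on any mechanized (symbolic or numeric) implementation.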

  3. Life sciences payload definition and integration study. Volume 2: Requirements, design, and planning studies for the carry-on laboratories. [for Spacelab

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The task phase concerned with the requirements, design, and planning studies for the carry-on laboratory (COL) began with a definition of biomedical research areas and candidate research equipment, and then went on to develop conceptual layouts for COL which were each evaluated in order to arrive at a final conceptual design. Each step in this design/evaluation process concerned itself with man/systems integration research and hardware, and life support and protective systems research and equipment selection. COL integration studies were also conducted and include attention to electrical power and data management requirements, operational considerations, and shuttle/Spacelab interface specifications. A COL program schedule was compiled, and a cost analysis was finalized which takes into account work breakdown, annual funding, and cost reduction guidelines.

  4. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    File layout of array data is a critical factor that affects the behavior of storage caches, yet it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  5. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, yet it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  6. COMPILATION OF CONVERSION COEFFICIENTS FOR THE DOSE TO THE LENS OF THE EYE.

    PubMed

    Behrens, R

    2017-04-28

    A compilation of fluence-to-absorbed dose conversion coefficients for the dose to the lens of the eye is presented. The compilation consists of both previously published data and newly calculated values: photon data (5 keV-50 MeV for both kerma approximation and full electron transport), electron data (10 keV-50 MeV), and positron data (1 keV-50 MeV) - neutron data will be published separately. Values are given for angles of incidence from 0° up to 90° in steps of 15° and for rotational irradiation. The data presented can be downloaded from this article's website and they are ready for use by Report Committee (RC) 26. This committee has been set up by the International Commission on Radiation Units and Measurements (ICRU) and is working on a 'proposal for a redefinition of the operational quantities for external radiation exposure'. © The Author 2016. Published by Oxford University Press.

  7. An Ada programming support environment

    NASA Technical Reports Server (NTRS)

    Tyrrill, AL; Chan, A. David

    1986-01-01

    The toolset of an Ada Programming Support Environment (APSE) being developed at North American Aircraft Operations (NAAO) of Rockwell International is described. The APSE is resident on three different hosts and must support developments for the hosts and for embedded targets. Tools and developed software must be freely portable between the hosts. The toolset includes the usual editors, compilers, linkers, debuggers, configuration managers, and documentation tools. Generally, these are being supplied by the host computer vendors. Other tools, for example a pretty printer, cross referencer, compilation order tool, and management tools, were obtained from public-domain sources, are implemented in Ada, and are being ported to the hosts. Several tools being implemented in-house are of interest; these include an Ada Design Language processor based on compilable Ada. A Standalone Test Environment Generator facilitates test tool construction and partially automates unit-level testing. A Code Auditor/Static Analyzer permits Ada programs to be evaluated against measures of quality. An Ada Comment Box Generator partially automates generation of header comment boxes.

  8. From OO to FPGA :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Stephen; Palsberg, Jens; Brooks, Jeffrey

    Consumer electronics today, such as cell phones, often have one or more low-power FPGAs to assist with energy-intensive operations in order to reduce overall energy consumption and increase battery life. However, current techniques for programming FPGAs require people to be specially trained to do so. Ideally, software engineers could more readily take advantage of the benefits FPGAs offer by programming them using their existing skills, a common one being object-oriented programming. However, traditional techniques for compiling object-oriented languages are at odds with today's FPGA tools, which support neither pointers nor complex data structures. Open until now is the problem of compiling an object-oriented language to an FPGA in a way that harnesses this potential for huge energy savings. In this paper, we present a new compilation technique that feeds into an existing FPGA tool chain and produces FPGAs with up to almost an order of magnitude in energy savings compared to a low-power microprocessor while still retaining comparable performance and area usage.
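The pointer-elimination problem the paper addresses can be illustrated with a deliberately simplified sketch (an assumed, generic flattening technique, not the paper's actual algorithm): a fixed set of object instances becomes parallel scalar arrays, and methods become functions indexed by an object id, with no pointers or dynamic allocation:

```python
# Object-oriented form: instances, references, dynamic allocation.
class Counter:
    def __init__(self):
        self.total = 0
    def add(self, x):
        self.total += x

# Flattened, FPGA-friendly form (illustrative): the number of
# instances is fixed at "synthesis" time, fields live in scalar
# arrays, and methods take an object id instead of a reference.
N_COUNTERS = 4
counter_total = [0] * N_COUNTERS

def counter_add(obj_id, x):
    counter_total[obj_id] += x

counter_add(2, 5)
counter_add(2, 7)
```

Both forms compute the same result, but the flattened form maps directly onto registers and block RAM rather than requiring a heap.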

  9. Development of a microcomputer data base of manufacturing, installation, and operating experience for the NSSS designer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borchers, W.A.; Markowski, E.S.

    1986-01-01

    Future nuclear steam supply systems (NSSSs) will be designed in an environment of powerful micro hardware and software, and these systems will be linked by local area networks (LANs). With such systems, individual NSSS designers and design groups will establish and maintain local data bases to replace existing manual files and data sources. One such effort in Combustion Engineering's (C-E's) NSSS engineering organization is the establishment of a data base of historical manufacturing, installation, and operating experience to provide designers with information to improve on current designs and practices. In contrast to large mainframe or minicomputer data bases, which compile industry-wide data, the data base described here is implemented on a microcomputer, is design specific, and contains a level of detail that is of interest to system and component designers. DBASE III, a popular microcomputer data base management software package, is used. In addition to the immediate benefits provided by the data base, the development itself provided a vehicle for identifying procedural and control aspects that need to be addressed in the environment of local microcomputer data bases. This paper describes the data base and provides some observations on the development, use, and control of local microcomputer data bases in a design organization.

  10. Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems

    DTIC Science & Technology

    2017-05-24

    Functional Modeling Compiler (SCCT): FM Compiler and Key Performance Indicators (KPI), May 2018, pending. Model Management Backbone (SCCT): MMB Demonstration... implement the agent-based distributed runtime; KPIs for single/multicore controllers and temporal/spatial domains; integration of the model management... Distributed Runtime (UCI): not started. Model Management Backbone (SCCT): not started. Siemens Corporation Corporate Technology, Unrestricted.

  11. Water Quality Instructional Resources Information System (IRIS): A Compilation of Abstracts to Water Quality and Water Resources Materials.

    ERIC Educational Resources Information Center

    Office of Water Program Operations (EPA), Cincinnati, OH. National Training and Operational Technology Center.

    Presented is a compilation of over 3,000 abstracts on print and non-print materials related to water quality and water resources education. Entries are included from all levels of governmental sources, private concerns, and educational institutions. Each entry includes: title, author, cross references, descriptors, and availability. (CLS)

  12. Water Quality Instructional Resources Information System (IRIS): A Compilation of Abstracts to Water Quality and Water Resources Materials. Supplement VIII.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus, OH. Information Reference Center for Science, Mathematics, and Environmental Education.

    Compiled are abstracts and indexes to selected print and non-print materials related to wastewater treatment and water quality education and instruction, as well as materials related to pesticides, hazardous wastes, and public participation. Sources of abstracted/indexed materials include all levels of government, private concerns, and…

  13. Electronic circuits: A compilation. [for electronic equipment in telecommunication

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A compilation containing articles on newly developed electronic circuits and systems is presented. It is divided into two sections: (1) section 1 on circuits and techniques of particular interest in communications technology, and (2) section 2 on circuits designed for a variety of specific applications. The latest patent information available is also given. Circuit diagrams are shown.

  14. ETHERNET BASED EMBEDDED SYSTEM FOR FEL DIAGNOSTICS AND CONTROLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jianxun Yan; Daniel Sexton; Steven Moore

    2006-10-24

    An Ethernet-based embedded system has been developed to upgrade the Beam Viewer and Beam Position Monitor (BPM) systems within the free-electron laser (FEL) project at Jefferson Lab. The embedded microcontroller was mounted on the front-end I/O cards, with software packages such as the Experimental Physics and Industrial Control System (EPICS) and the Real Time Executive for Multiprocessor Systems (RTEMS) running as an Input/Output Controller (IOC). By cross-compiling with EPICS, the RTEMS kernel, IOC device supports, and databases can all be downloaded into the microcontroller. The first version of the BPM electronics based on the embedded controller was built and is currently running in our FEL system. The new version of the BPM, which will use a Single Board IOC (SBIOC) integrating a Field Programmable Gate Array (FPGA) with a ColdFire embedded microcontroller, is presently under development. The new system has the features of a low-cost IOC, an open-source real-time operating system, plug&play-like ease of installation and flexibility, and provides a much more localized solution.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems, including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.
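The role of NVM transactions can be sketched with a generic undo-logging model (illustrative only; the UndoLogTx class and its methods are hypothetical and are not NVL-C's actual directive syntax): before each in-place store to persistent data, the old value is logged, so a failure before commit can be rolled back and the data structure is never left half-updated:

```python
class UndoLogTx:
    """Hypothetical undo-log transaction over a dict standing in
    for NVM-resident data (a sketch of the concept, not NVL-C)."""
    def __init__(self, store):
        self.store = store      # stands in for NVM main memory
        self.log = []           # stands in for the NVM undo log

    def write(self, key, value):
        self.log.append((key, self.store.get(key)))  # log old value first
        self.store[key] = value

    def commit(self):
        self.log.clear()        # updates become durable

    def rollback(self):         # invoked on failure/recovery
        for key, old in reversed(self.log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.log.clear()

nvm = {"balance": 100}
tx = UndoLogTx(nvm)
tx.write("balance", 40)
tx.rollback()                   # simulate a crash before commit
```

After the simulated crash, the original value is intact; had the store been applied without logging, a failure mid-update could leave corrupt NVM state.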

  16. Learning the Art of Electronics

    NASA Astrophysics Data System (ADS)

    Hayes, Thomas C.; Horowitz, Paul

    2016-03-01

    1. DC circuits; 2. RC circuits; 3. Diode circuits; 4. Transistors I; 5. Transistors II; 6. Operational amplifiers I; 7. Operational amplifiers II: nice positive feedback; 8. Operational amplifiers III; 9. Operational amplifiers IV: nasty positive feedback; 10. Operational amplifiers V: PID motor control loop; 11. Voltage regulators; 12. MOSFET switches; 13. Group audio project; 14. Logic gates; 15. Logic compilers, sequential circuits, flip-flops; 16. Counters; 17. Memory: state machines; 18. Analog to digital: phase-locked loop; 19. Microcontrollers and microprocessors I: processor/controller; 20. I/O, first assembly language; 21. Bit operations; 22. Interrupt: ADC and DAC; 23. Moving pointers, serial buses; 24. Dallas Standalone Micro, SiLabs SPI RAM; 25. Toys in the attic; Appendices; Index.

  17. 46 CFR 535.704 - Filing of minutes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... traffic; discussion of revenues, losses, or earnings; or discussion or agreement on service contract... compilation, analytical study, survey, or other work distributed, discussed, or exchanged at the meeting... discussions involve minor operational matters that have little or no impact on the frequency of vessel calls...

  18. Space Mechanisms Lessons Learned Study. Volume 2: Literature Review

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur; Murray, Frank; Howarth, Roy; Fusaro, Robert

    1995-01-01

    Hundreds of satellites have been launched to date. Some have operated extremely well and others have not. In order to learn from past operating experiences, a study was conducted to determine the conditions under which space mechanisms (mechanically moving components) have previously worked or failed. The study consisted of an extensive literature review that included both government contractor reports and technical journals, communication and visits (when necessary) to the various NASA and DOD centers and their designated contractors (this included contact with project managers of current and prior NASA satellite programs as well as their industry counterparts), requests for unpublished information to NASA and industry, and a mail survey designed to acquire specific mechanism experience. The information obtained has been organized into two volumes. Volume 1 provides a summary of the lessons learned, the results of a needs analysis, responses to the mail survey, a listing of experts, a description of some available facilities, and a compilation of references. Volume 2 contains a compilation of the literature review synopsis.

  19. Space Mechanisms Lessons Learned Study. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur; Murray, Frank; Howarth, Roy; Fusaro, Robert

    1995-01-01

    Hundreds of satellites have been launched to date. Some have operated extremely well and others have not. In order to learn from past operating experiences, a study was conducted to determine the conditions under which space mechanisms (mechanically moving components) have previously worked or failed. The study consisted of: (1) an extensive literature review that included both government contractor reports and technical journals; (2) communication and visits (when necessary) to the various NASA and DOD centers and their designated contractors (this included contact with project managers of current and prior NASA satellite programs as well as their industry counterparts); (3) requests for unpublished information to NASA and industry; and (4) a mail survey designed to acquire specific mechanism experience. The information obtained has been organized into two volumes. Volume 1 provides a summary of the lessons learned, the results of a needs analysis, responses to the mail survey, a listing of experts, a description of some available facilities and a compilation of references. Volume 2 contains a compilation of the literature review synopsis.

  20. Simulated Wake Characteristics Data for Closely Spaced Parallel Runway Operations Analysis

    NASA Technical Reports Server (NTRS)

    Guerreiro, Nelson M.; Neitzke, Kurt W.

    2012-01-01

    A simulation experiment was performed to generate and compile wake characteristics data relevant to the evaluation and feasibility analysis of closely spaced parallel runway (CSPR) operational concepts. While the experiment in this work is not tailored to any particular operational concept, the generated data applies to the broader class of CSPR concepts, where a trailing aircraft on a CSPR approach is required to stay ahead of the wake vortices generated by a lead aircraft on an adjacent CSPR. Data for wake age, circulation strength, and wake altitude change, at various lateral offset distances from the wake-generating lead aircraft approach path were compiled for a set of nine aircraft spanning the full range of FAA and ICAO wake classifications. A total of 54 scenarios were simulated to generate data related to key parameters that determine wake behavior. Of particular interest are wake age characteristics that can be used to evaluate both time- and distance- based in-trail separation concepts for all aircraft wake-class combinations. A simple first-order difference model was developed to enable the computation of wake parameter estimates for aircraft models having weight, wingspan and speed characteristics similar to those of the nine aircraft modeled in this work.
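For context, a classical first-order estimate of initial wake-vortex circulation (standard wake-vortex theory, not necessarily the difference model used in this study; the aircraft numbers below are assumed for illustration) is Γ0 = Mg/(ρVb0), with vortex spacing b0 = (π/4)b for elliptical span loading:

```python
import math

def initial_circulation(mass_kg, speed_ms, span_m, rho=1.225, g=9.81):
    """Classical estimate of initial wake-vortex circulation (m^2/s):
    Gamma0 = M g / (rho * V * b0), with b0 = (pi/4) * b for
    elliptical span loading. rho = sea-level air density (assumed)."""
    b0 = (math.pi / 4.0) * span_m
    return mass_kg * g / (rho * speed_ms * b0)

# Illustrative heavy-transport numbers (assumed, not from the study):
gamma = initial_circulation(mass_kg=250_000.0, speed_ms=70.0, span_m=60.0)
print(f"Gamma0 ~ {gamma:.0f} m^2/s")
```

The formula makes the key dependencies explicit: circulation strength grows with aircraft weight and shrinks with approach speed and wingspan, which is why wake behavior must be characterized per wake class.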

  1. High-Performance Design Patterns for Modern Fortran

    DOE PAGES

    Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...

    2015-01-01

    This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran's new coarray distributed data structure, the language's class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions composed of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes' complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel's MPI library, with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.

  2. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  3. Data collection and preparation of authoritative reviews on space food and nutrition research

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The collection and classification of information for a manually operated information retrieval system on the subject of space food and nutrition research are described. The system as it currently exists is designed for retrieval of documents, either in hard copy or on microfiche, from the technical files of the MSC Food and Nutrition Section by accession number, author, and/or subject. The system could readily be extended to include retrieval by affiliation, report and contract number, and sponsoring agency should the need arise. It can also be easily converted to computerized retrieval. At present the information retrieval system contains nearly 3000 documents which consist of technical papers, contractors' reports, and reprints obtained from the food and nutrition files at MSC, Technical Library, the library at the Texas Medical Center in Houston, the BMI Technical Libraries, Dr. E. B. Truitt at MBI, and the OSU Medical Libraries. Additional work was done to compile 18 selected bibliographies on subjects of immediate interest on the MSC Food and Nutrition Section.

  4. Materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    We have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory-West (ANL-WEST) in Idaho Falls, Idaho. The cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations, between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASEIII and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4 to 5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory in Idaho Falls. Initial satisfaction is adequate, and software and hardware problems are minimal.

  5. Advanced Air Transportation Technologies Project, Final Document Collection

    NASA Technical Reports Server (NTRS)

    Mogford, Richard H.; Wold, Sheryl (Editor)

    2008-01-01

    This CD ROM contains a compilation of the final documents of the Advanced Air Transportation Technologies (AAIT) project, which was an eight-year (1996 to 2004), $400M project managed by the Airspace Systems Program office, which was part of the Aeronautics Research Mission Directorate at NASA Headquarters. AAIT focused on developing advanced automation tools and air traffic management concepts that would help improve the efficiency of the National Airspace System, while maintaining or enhancing safety. The documents contained in the CD are final reports on AAIT tasks that serve to document the project's accomplishments over its eight-year term. Documents include information on: Advanced Air Transportation Technologies, Autonomous Operations Planner, Collaborative Arrival Planner, Distributed Air/Ground Traffic Management Concept Elements 5, 6, & 11, Direct-To, Direct-To Technology Transfer, Expedite Departure Path, En Route Data Exchange, Final Approach Spacing Tool - (Active and Passive), Multi-Center Traffic Management Advisor, Multi Center Traffic Management Advisor Technology Transfer, Surface Movement Advisor, Surface Management System, Surface Management System Technology Transfer and Traffic Flow Management Research & Development.

  6. SCORE user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.A.

    SABrE is a set of tools to facilitate the development of portable scientific software and to visualize scientific data. As with most constructs, SABRE has a foundation. In this case that foundation is SCORE. SCORE (SABRE CORE) has two main functions. The first and perhaps most important is to smooth over the differences between different C implementations and define the parameters which drive most of the conditional compilations in the rest of SABRE. Secondly, it contains several groups of functionality that are used extensively throughout SABRE. Although C is highly standardized now, that has not always been the case. Roughly speaking, C compilers fall into three categories: ANSI standard; derivatives of the Portable C Compiler (Kernighan and Ritchie); and the rest. SABRE has been successfully ported to many ANSI and PCC systems. It has never been successfully ported to a system in the last category, mainly because the "standard" C library supplied with such implementations is so far from the true ANSI or PCC standard that SABRE would have to include its own version of the standard C library in order to work at all. Even with standardized compilers, life is not simple. The ANSI standard leaves several crucial points ambiguous as "implementation defined." Under these conditions one can find significant differences in going from one ANSI standard compiler to another. SCORE's job is to include the requisite standard headers and ensure that certain key standard library functions exist and function correctly (there are bugs in the standard library functions supplied with some compilers) so that, to applications which include the SCORE header(s) and load with SCORE, all C implementations look the same.

  8. A Language for Specifying Compiler Optimizations for Generic Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization because they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  9. Virtual Machine Language

    NASA Technical Reports Server (NTRS)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times on the order of seconds.

  10. Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product

    NASA Astrophysics Data System (ADS)

    Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.

    2014-05-01

    We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data was collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and is particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial PDF is a full-resolution, small-file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.

  11. Ground Operations Aerospace Language (GOAL). Volume 1: Study overview

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A series of NASA and Contractor studies sponsored by NASA/KSC resulted in a specification for the Ground Operations Aerospace Language (GOAL). The Cape Kennedy Facility of the IBM Corporation was given the responsibility, under existing contracts, to perform an analysis of the Language Specification, to design and develop a GOAL Compiler, to provide a specification for a data bank, to design and develop an interpretive code translator, and to perform associated application studies.

  12. Geographic information system (GIS) compilation of geophysical, geologic, and tectonic data for the Circum-North Pacific

    USGS Publications Warehouse

    Greninger, Mark L.; Klemperer, Simon L.; Nokleberg, Warren J.

    1999-01-01

    The accompanying directory structure contains a Geographic Information Systems (GIS) compilation of geophysical, geological, and tectonic data for the Circum-North Pacific. This area includes the Russian Far East, Alaska, the Canadian Cordillera, linking continental shelves, and adjacent oceans. This GIS compilation extends from 120°E to 115°W, and from 40°N to 80°N. This area encompasses: (1) to the south, the modern Pacific plate boundary of the Japan-Kuril and Aleutian subduction zones, the Queen Charlotte transform fault, and the Cascadia subduction zone; (2) to the north, the continent-ocean transition from the Eurasian and North American continents to the Arctic Ocean; (3) to the west, the diffuse Eurasian-North American plate boundary, including the probable Okhotsk plate; and (4) to the east, the Alaskan-Canadian Cordilleran fold belt. This compilation should be useful for: (1) studying the Mesozoic and Cenozoic collisional and accretionary tectonics that assembled the continental crust of this region; (2) studying the neotectonics of active and passive plate margins in this region; and (3) constructing and interpreting geophysical, geologic, and tectonic models of the region. Geographic Information Systems (GIS) programs provide powerful tools for managing and analyzing spatial databases. Geological applications include regional tectonics, geophysics, mineral and petroleum exploration, resource management, and land-use planning. This CD-ROM contains thematic layers of spatial data-sets for geology, gravity field, magnetic field, oceanic plates, overlap assemblages, seismology (earthquakes), tectonostratigraphic terranes, topography, and volcanoes. The GIS compilation can be viewed, manipulated, and plotted with commercial software (ArcView and ArcInfo) or through a freeware program (ArcExplorer) that can be downloaded from http://www.esri.com for both Unix and Windows computers.

  13. Design and Development of Functionally Effective Human-Machine Interfaces for Firing Room Displays

    NASA Technical Reports Server (NTRS)

    Cho, Henry

    2013-01-01

    This project involves creating software for support equipment used on the Space Launch System (SLS). The goal is to create applications and displays that will be used to remotely operate equipment from the firing room and will continue to support the SLS launch vehicle to the extent of its program. These displays include design practices that help to convey information effectively, such as minimizing distractions at normal operating state and displaying intentional distractions during a warning or alarm state. The general practice for creating an operator display is to reduce the detail of unimportant aspects of the display and promote focus on data and dynamic information. These practices include using minimalist design, using muted tones for background colors, using a standard font at a readable text size, displaying alarms prominently for immediate attention, grouping data logically, and displaying data in a form appropriate to its type. Users of these displays are more likely to stay focused on operating for longer periods by using design practices that reduce eye strain and fatigue. Effective operator displays will improve safety by reducing human errors during operation, which will help prevent catastrophic accidents. This report entails the details of my work on developing remote displays for the Hypergolic fuel servicing system. Before developing a prototype display, the design and requirements of the system are outlined and compiled into a document. Then each subsystem has schematic representations drawn that meet the specifications detailed in the document. The schematics are then used as the outline to create display representations of each subsystem. Each display is first tested individually. Then the displays are integrated with a prototype of the master system, tested in a simulated environment, and retested in the real environment. Extensive testing is important to ensure the displays function reliably as intended.

  15. A compilation of reports of the Advisory Committee on reactor safeguards. 1996 Annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-04-01

    This compilation contains 47 ACRS reports submitted to the Commission, or to the Executive Director for Operations, during calendar year 1996. It also includes a report to the Congress on the NRC Safety Research Program. All reports have been made available to the public through the NRC Public Document Room, the U.S. Library of Congress, and the Internet at http://www.nrc.gov/ACRSACNW. The reports are divided into two groups: Part 1 contains ACRS reports by project name and by chronological order within project name. Part 2 categorizes the reports by the most appropriate generic subject area and by chronological order within subject area.

  16. Compilation and Review of Supersonic Business Jet Studies from 1963 through 1995

    NASA Technical Reports Server (NTRS)

    Maglieri, Domenic J.

    2011-01-01

    This document provides a compilation of all known supersonic business jet studies/activities conducted from 1963 through 1995 by universities, industry, and NASA. First, an overview is provided which chronologically displays all known supersonic business jet studies/activities, along with the key features of the study vehicles relative to configuration, planform, operating parameters, and the source of the study. This is followed by a brief description of each study along with some comments on the study. Mention is made as to whether the studies addressed cost, market needs, and the environmental issues of airport-community noise, sonic boom, and ozone.

  17. Hydrocephalus as a rare complication of vertebrobasilar dolichoectasia: A case report and review of the literature.

    PubMed

    Ebrahimzadeh, Keveh; Bakhtevari, Mehrdad H; Shafizad, Misagh; Rezaei, Omidvar

    2017-01-01

    Vertebrobasilar dolichoectasia (VBD) is a rare disease characterized by significant expansion, elongation, and tortuosity of the vertebrobasilar arteries. Hydrocephalus is a rare complication of VBD. In this study, we report a 68-year-old male presenting with headache, progressive decreased visual acuity, memory loss, imbalance while walking, and episodes of urinary incontinence. The patient was diagnosed with dolichoectasia of the basilar artery causing compression of the third ventricular outflow and thus presenting with hydrocephalus, documented with brain computed tomography scan and brain magnetic resonance imaging. The patient underwent surgery with ventriculoperitoneal shunt placement. In the case of hydrocephalus or normal pressure hydrocephalus, VBD should be considered as a differential diagnosis.

  18. The MSRC ab initio methods benchmark suite: A measurement of hardware and software performance in the area of electronic structure methods

    NASA Astrophysics Data System (ADS)

    Feller, D. F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  19. NASA GRC UAS Project: Communications Modeling and Simulation Status

    NASA Technical Reports Server (NTRS)

    Kubat, Greg

    2013-01-01

    The integration of Unmanned Aircraft Systems (UAS) in the National Airspace represents new operational concepts required in civil aviation. These new concepts are evolving as the nation moves toward the Next Generation Air Transportation System (NextGen) under the leadership of the Joint Planning and Development Office (JPDO), and through ongoing work by the Federal Aviation Administration (FAA). The desire and ability to fly UAS in the National Air Space (NAS) in the near term has increased dramatically, and this multi-agency effort to develop and implement a national plan to successfully address the challenges of UAS access to the NAS in a safe and timely manner is well underway. As part of the effort to integrate UAS in the National Airspace, NASA Glenn Research Center is currently involved with providing research into Communications systems and Communication system operations in order to assist with developing requirements for this implementation. In order to provide data and information regarding communication systems performance that will be necessary, NASA GRC is tasked with developing and executing plans for simulations of candidate future UAS command and control communications, in line with architectures and communications technologies being developed and/or proposed by NASA and relevant aviation organizations (in particular, RTCA SC-203). The simulations and related analyses will provide insight into the ability of proposed communications technologies and system architectures to enable safe operation of UAS, meeting UAS in the NAS project goals (including performance requirements, scalability, and interoperability), and ultimately leading to a determination of the ability of NextGen communication systems to accommodate UAS. 
This presentation, compiled by the NASA GRC team, will provide a view of the overall planned simulation effort and objectives, a description of the simulation concept and status of the design and development that has occurred to date.

  20. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double- and/or single-precision arithmetic) are capable of scaling to systems as large as the capacity of the host central processing unit (CPU) main memory allows. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.

Top