Sample records for computer software configuration

  1. 78 FR 47014 - Configuration Management Plans for Digital Computer Software Used in Safety Systems of Nuclear...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION... Computer Software Used in Safety Systems of Nuclear Power Plants." This RG endorses, with clarifications... Electrical and Electronic Engineers (IEEE) Standard 828-2005, "IEEE Standard for Software Configuration...

  2. Microcomputer software development facilities

    NASA Technical Reports Server (NTRS)

    Gorman, J. S.; Mathiasen, C.

    1980-01-01

    A more efficient and cost-effective method for developing microcomputer software is to utilize a host computer with high-speed peripheral support. Application programs such as cross assemblers, loaders, and simulators are implemented in the host computer for each of the microcomputers for which software development is required. The host computer is configured to operate in a time-share mode for multiple users. The remote terminals, printers, and downloading capabilities provided are based on user requirements. With this configuration a user, either local or remote, can use the host computer for microcomputer software development. Once the software is developed (through the code and modular debug stage) it can be downloaded to the development system or emulator in a test area where hardware/software integration functions can proceed. The microcomputer software program sources reside in the host computer and can be edited, assembled, loaded, and then downloaded as required until the software development project has been completed.

  3. 78 FR 47804 - Verification, Validation, Reviews, and Audits for Digital Computer Software Used in Safety...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ..., "Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants." This...

  4. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
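
    The record above describes a script generator rather than a single transfer script. As a rough illustration of that pattern, the sketch below (in Python rather than K-shell, with invented paths, file names, and log format) walks a CD image and emits one transfer script per CSCI subdirectory; it is not the CFITSG tool itself.

      import os

      CD_ROOT = "/cdrom/flight_sw"   # assumed mount point of the flight software CD
      SCRATCH = "/pcs/scratch"       # assumed PCS scratch directory
      LOG = "transfer_log.txt"       # assumed log file name

      def generate_scripts(cd_root: str) -> None:
          # Emit one transfer script per CSCI subdirectory found on the CD.
          for csci in sorted(os.listdir(cd_root)):
              src = os.path.join(cd_root, csci)
              if not os.path.isdir(src):
                  continue
              script_name = f"transfer_{csci}.sh"
              with open(script_name, "w") as script:
                  script.write("#!/bin/ksh\n")
                  script.write(f"# Transfer CSCI '{csci}' to the PCS scratch directory\n")
                  script.write(f"mkdir -p {SCRATCH}/{csci}\n")
                  script.write(f"cp -R {src}/* {SCRATCH}/{csci}/\n")
                  script.write(f"echo \"$(date) transferred {csci}\" >> {LOG}\n")
              os.chmod(script_name, 0o755)

      if __name__ == "__main__":
          generate_scripts(CD_ROOT)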

  5. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system]

    NASA Technical Reports Server (NTRS)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis; SDL readiness for entry first article configuration inspection; and verification assessment.

  6. Informatics in Radiology (infoRAD): personal computer security: part 2. Software configuration and file protection.

    PubMed

    Caruso, Ronald D

    2004-01-01

    Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004

  7. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Ground rules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The ground rules and assumptions are analyzed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  8. Advanced Software Techniques for Data Management Systems. Volume 2: Space Shuttle Flight Executive System: Functional Design

    NASA Technical Reports Server (NTRS)

    Pepe, J. T.

    1972-01-01

    A functional design of a software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation with the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at the NASA Manned Spacecraft Center's Information Systems Division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.

  9. Communication devices for network-hopping communications and methods of network-hopping communications

    DOEpatents

    Buttles, John W [Idaho Falls, ID

    2011-12-20

    Wireless communication devices include a software-defined radio coupled to processing circuitry. The processing circuitry is configured to execute computer programming code. Storage media is coupled to the processing circuitry and includes computer programming code configured to cause the processing circuitry to configure and reconfigure the software-defined radio to operate on each of a plurality of communication networks according to a selected sequence. Methods for communicating with a wireless device and methods of wireless network-hopping are also disclosed.
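
    As a rough illustration of the sequence-driven reconfiguration the abstract describes, the Python sketch below cycles a stand-in radio object through a selected list of network settings. The class, frequencies, protocols, and dwell time are invented for the example and do not come from the patent.

      import itertools
      import time

      class SoftwareDefinedRadio:
          def configure(self, frequency_hz: float, protocol: str) -> None:
              # Stand-in for reprogramming the SDR hardware.
              print(f"Radio tuned to {frequency_hz / 1e6:.1f} MHz using {protocol}")

      # Selected sequence of communication networks (illustrative values only).
      NETWORK_SEQUENCE = [
          {"frequency_hz": 915e6, "protocol": "FHSS"},
          {"frequency_hz": 2.44e9, "protocol": "DSSS"},
          {"frequency_hz": 433e6, "protocol": "LoRa"},
      ]

      def hop(radio: SoftwareDefinedRadio, dwell_s: float = 1.0) -> None:
          # Reconfigure the radio for each network in the selected sequence, repeatedly.
          for net in itertools.cycle(NETWORK_SEQUENCE):
              radio.configure(**net)
              time.sleep(dwell_s)   # communicate on this network for the dwell time

      # hop(SoftwareDefinedRadio())  # runs indefinitely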

  10. Communication devices for network-hopping communications and methods of network-hopping communications

    DOEpatents

    Buttles, John W

    2013-04-23

    Wireless communication devices include a software-defined radio coupled to processing circuitry. The system controller is configured to execute computer programming code. Storage media is coupled to the system controller and includes computer programming code configured to cause the system controller to configure and reconfigure the software-defined radio to operate on each of a plurality of communication networks according to a selected sequence. Methods for communicating with a wireless device and methods of wireless network-hopping are also disclosed.

  11. Windows VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Save the hpcvpn-win.conf file in your My Documents folder and configure the client software using that conf file. Configure the Client Software: start the Endian Connect App. You'll configure the connection using the hpcvpn-win.conf file, uncheck the "save password" link, and add your UserID.

  12. Music Learning in Your School Computer Lab.

    ERIC Educational Resources Information Center

    Reese, Sam

    1998-01-01

    States that a growing number of schools are installing general computer labs equipped to use notation, accompaniment, and sequencing software independent of MIDI keyboards. Discusses (1) how to configure the software without MIDI keyboards or external sound modules, (2) using the actual MIDI software, (3) inexpensive enhancements, and (4) the…

  13. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.

    2011-01-01

    This software accepts the EOS MLS calibrated measurements of microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to doing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
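
    The master/slave chunking scheme described above can be pictured with a generic sketch. The chunk contents and the per-chunk "retrieval" below are placeholders, not the MLS Level 2 algorithms or its actual cluster coordination code.

      from multiprocessing import Pool

      def process_chunk(chunk):
          # Placeholder for one chunk's retrieval: here, just average the values.
          chunk_id, radiances = chunk
          return chunk_id, sum(radiances) / len(radiances)

      if __name__ == "__main__":
          # The "master" splits the data into chunks and farms them out to workers.
          chunks = [(i, [float(i + j) for j in range(10)]) for i in range(8)]
          with Pool(processes=4) as pool:
              for chunk_id, result in pool.map(process_chunk, chunks):
                  print(f"chunk {chunk_id}: mean value {result:.2f}")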

  14. Support for Diagnosis of Custom Computer Hardware

    NASA Technical Reports Server (NTRS)

    Molock, Dwaine S.

    2008-01-01

    The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and to configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects and can be customized for use with different processors and interfaces. The routines are supported regardless of the architecture of the processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. The software also supports Mongoose V, RAD750, and PPC405 processors or their equivalents.

  15. Computer software configuration description, 241-AY and 241 AZ tank farm MICON automation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelman, W.D.

    This document describes the configuration process, choices, and conventions used during the MICON DCS configuration activities, and the issues involved in making changes to the configuration. It includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 3 provides additional information on the software used to provide communications with the W-320 project and incorporates minor changes to ensure the documented alarm setpoint priorities correctly match operational expectations.

  16. Iteration and Prototyping in Creating Technical Specifications.

    ERIC Educational Resources Information Center

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  17. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
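
    A minimal sketch of the monitor/analyze/adapt loop described in this abstract might look like the following. The metric names, thresholds, and adaptation policy are assumptions made for illustration, not the authors' implementation; a real monitor would read hardware performance counters or instrumented library calls rather than random numbers.

      import random
      import time

      def collect_metrics():
          # Stand-in for performance-counter / instrumentation reads.
          return {"cpu_util": random.uniform(0.2, 1.0),
                  "cache_miss_rate": random.uniform(0.0, 0.3)}

      def adapt(config, metrics):
          # Toy policy: add workers when the CPU is saturated, remove when idle.
          if metrics["cpu_util"] > 0.9:
              config["workers"] += 1
          elif metrics["cpu_util"] < 0.4 and config["workers"] > 1:
              config["workers"] -= 1
          return config

      config = {"workers": 4}
      for _ in range(5):                 # a real monitor would loop for the run's lifetime
          metrics = collect_metrics()
          config = adapt(config, metrics)
          print(metrics, "->", config)
          time.sleep(0.1)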

  18. Rapid Airplane Parametric Input Design (RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process, which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulations. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth-order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE for the lifting components and the fuselage definition equations. User-specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool ADIFOR to the grid generation program. The output of ADIFOR is a new source code containing the old code plus expressions for derivatives of specified dependent variables (grid coordinates) with respect to specified independent variables (design parameters). The RAPID methodology and software provide a means of rapidly defining numerical prototypes, grids, and grid sensitivity of a class of airplane configurations. This technology and software are highly useful for CFD research and for preliminary design and optimization processes.
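
    The surface-generation step above is described only as the solution of a fourth-order PDE with Dirichlet and Neumann boundary conditions. A generic boundary-value problem of that kind (the exact operator, smoothing parameter, and grid-spacing terms used by RAPID are not given in this summary and are assumed here) can be written in LaTeX as:

      \[
      \left(\frac{\partial^{2}}{\partial u^{2}} + a^{2}\,\frac{\partial^{2}}{\partial v^{2}}\right)^{2}\mathbf{X}(u,v)=\mathbf{0}
      \quad\text{on }\Omega,
      \qquad
      \mathbf{X}\big|_{\partial\Omega}=\mathbf{c}(s),
      \qquad
      \left.\frac{\partial\mathbf{X}}{\partial n}\right|_{\partial\Omega}=\mathbf{d}(s),
      \]

    where the boundary curves \(\mathbf{c}\) and derivative data \(\mathbf{d}\) carry the design parameters, so the interior surface \(\mathbf{X}(u,v)\) blends smoothly between the specified boundaries.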

  19. Educational Technology: Best Practices from America's Schools.

    ERIC Educational Resources Information Center

    Bozeman, William C.; Baumbach, Donna J.

    This book begins with an overview of computer technology concepts, including computer system configurations, computer communications, and software. Instructional computer applications are then discussed; topics include computer-assisted instruction, computer-managed instruction, computer-enhanced instruction, LOGO, authoring programs, presentation…

  20. Computer software configuration description, 241-AY and 241-AZ tank farm MICON automation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelman, W.D.

    This document describes the configuration process, choices, and conventions used during the configuration activities, and the issues involved in making changes to the configuration. It includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 2 incorporates minor changes to ensure the documented setpoints accurately reflect limits (including an exhaust stack flow of 800 scfm) established in OSD-T-151-00019. The MICON DCS software controls and monitors the instrumentation and equipment associated with plant systems and processes.

  1. SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects

    NASA Technical Reports Server (NTRS)

    Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M

    1998-01-01

    SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of existing soft-computing software by supporting comprehensive multidisciplinary functionalities, from management tools to engineering systems. Furthermore, the built-in features help the user process and analyze information more efficiently through a friendly yet powerful interface, and allow the user to specify user-specific processing modules, thereby adding to the standard configuration of the software environment.

  2. Architectures, Models, Algorithms, and Software Tools for Configurable Computing

    DTIC Science & Technology

    2000-03-06

    ... The Models, Algorithms, and Architectures for Reconfigurable Computing (MAARC) project developed a sound framework for ...

  3. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811

  4. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.

  5. Apple OS X VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Configure the client software using that conf file and your UserID, then start the connection using your password plus the 6-digit OTP. Configure the Client Software: start the Endian Connect App (it should have been installed into Applications), uncheck the "save password" link, and add your UserID.

  6. Software Engineering Guidebook

    NASA Technical Reports Server (NTRS)

    Connell, John; Wenneson, Greg

    1993-01-01

    The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.

  7. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    Hypercluster computer system includes multiple digital processors, operation of which is coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.

  8. Automated airplane surface generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.E.; Cordero, Y.; Jones, W.

    1996-12-31

    An efficient methodology and software are presented for defining a class of airplane configurations. A small set of engineering design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. Wing, canard, and tail surface grids are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage is described by an algebraic function with four design parameters. The computed surface grids are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations. Both batch and interactive software are discussed for applying the methodology.

  9. Inviscid Flow Computations of Several Aeroshell Configurations for a '07 Mars Lander

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    2001-01-01

    This report documents the results of an inviscid computational study conducted on several candidate aeroshell configurations for a proposed '07 Mars lander. Eleven different configurations were considered, and the aerodynamic characteristics of each of these were computed for a Mach number of 23.7 at 10, 15, and 20 degree angles of attack. The unstructured grid software FELISA with the equilibrium Mars gas option was used for these computations. The pitching moment characteristics and the lift-to-drag ratios at trim angle of attack of each of these configurations were examined to make a selection. The criterion for selection was that the configuration should be longitudinally stable, and should trim at an angle of attack where the L/D is -0.25. Based on the present study, two configurations were selected for further study.

  10. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  11. Information management advanced development. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Gerber, C. R.

    1972-01-01

    The information management systems designed for the modular space station are discussed. Subjects presented are: (1) communications terminal breadboard configuration, (2) digital data bus breadboard configuration, (3) data processing assembly definition, and (4) computer program (software) assembly definition.

  12. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  13. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  14. CPE--A New Perspective: The Impact of the Technology Revolution. Proceedings of the Computer Performance Evaluation Users Group Meeting (19th, San Francisco, California, October 25-28, 1983). Final Report. Reports on Computer Science and Technology.

    ERIC Educational Resources Information Center

    Mobray, Deborah, Ed.

    Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…

  15. Advanced communications technology satellite high burst rate link evaluation terminal communication protocol software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal used to communicate with the ACTS for various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan are command transmission, telemetry reception, error detection, and error recovery procedures.

  16. Software Quality Metrics: A Software Management Monitoring Method for Air Force Logistics Command in Its Software Quality Assurance Program for the Quantitative Assessment of the System Development Life Cycle under Configuration Management.

    DTIC Science & Technology

    1982-03-01

    ... pilot systems. Magnitude of the mutant error is classified as: program does not compute; program computes but does not run test data; program ... While the test phase concludes the normal development cycle, one should realize that with software the development continues in the ...

  17. A Scalable Software Architecture Booting and Configuring Nodes in the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.

  18. Making automated computer program documentation a feature of total system design

    NASA Technical Reports Server (NTRS)

    Wolf, A. W.

    1970-01-01

    It is pointed out that in large-scale computer software systems, program documents are too often fraught with errors, out of date, poorly written, and sometimes nonexistent in whole or in part. The means are described by which many of these typical system documentation problems were overcome in a large and dynamic software project. A systems approach was employed which encompassed such items as: (1) configuration management; (2) standards and conventions; (3) collection of program information into central data banks; (4) interaction among executive, compiler, central data banks, and configuration management; and (5) automatic documentation. A complete description of the overall system is given.

  19. Tools for Administration of a UNIX-Based Network

    NASA Technical Reports Server (NTRS)

    LeClaire, Stephen; Farrar, Edward

    2004-01-01

    Several computer programs have been developed to enable efficient administration of a large, heterogeneous, UNIX-based computing and communication network that includes a variety of computers connected to a variety of subnetworks. One program provides secure software tools for administrators to create, modify, lock, and delete accounts of specific users. This program also provides tools for users to change their UNIX passwords and log-in shells. These tools check for errors. Another program comprises a client and a server component that, together, provide a secure mechanism to create, modify, and query quota levels on a network file system (NFS) mounted by use of the VERITAS File System software. The client software resides on an internal secure computer with a secure Web interface; one can gain access to the client software from any authorized computer capable of running web-browser software. The server software resides on a UNIX computer configured with the VERITAS software system. Directories where VERITAS quotas are applied are NFS-mounted. Another program is a Web-based, client/server Internet Protocol (IP) address tool that facilitates maintenance lookup of information about IP addresses for a network of computers.

  20. Beamforming strategy of ULA and UCA sensor configuration in multistatic passive radar

    NASA Astrophysics Data System (ADS)

    Hossa, Robert

    2009-06-01

    A Beamforming Network (BN) concept for a Uniform Linear Array (ULA) and Uniform Circular Array (UCA) dipole configuration designed for multistatic passive radar is considered in detail. In the case of the UCA configuration, a computationally efficient procedure of beamspace transformation from the UCA to a virtual ULA configuration with omnidirectional coverage is utilized. In effect, the idea of the proposed solution is equivalent to the techniques of antenna array factor shaping dedicated to the ULA structure. Finally, exemplary results from computer software simulations of the elaborated spatial filtering solutions for the reference and surveillance channels are provided and discussed.
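
    For the ULA part of the beamforming network described above, array factor shaping amounts to choosing the weights in the standard narrowband beamformer output (a textbook relation assumed here for illustration, not taken from the paper):

      \[
      y(\theta) = \mathbf{w}^{H}\mathbf{a}(\theta),
      \qquad
      \mathbf{a}(\theta) = \left[\,1,\; e^{\,j k d \sin\theta},\; \dots,\; e^{\,j k (N-1) d \sin\theta}\,\right]^{T},
      \qquad
      k = \frac{2\pi}{\lambda},
      \]

    where \(d\) is the element spacing and \(N\) the number of dipoles. The UCA-to-virtual-ULA beamspace transformation mentioned in the abstract is what lets the same weight-design techniques be reused for the circular array.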

  1. Project W-211, initial tank retrieval systems, retrieval control system software configuration management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RIECK, C.A.

    1999-02-23

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.

  2. Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds

    NASA Astrophysics Data System (ADS)

    Li, Rui; Chen, Lei; Li, Wen-Syan

    Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many other frameworks) requires users to configure the cloud infrastructure via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for a workload consisting of multiple jobs running concurrently, and aims at maximum throughput using a minimum set of processors.
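
    A data-driven manager of the kind described might, in bare outline, keep giving the next free worker to the operator with the worst per-worker throughput. The sketch below is a generic illustration with invented operator names and numbers, not CloudWeaver's actual algorithm.

      # Toy data-driven reallocation: assign each free worker to the operator
      # with the lowest measured throughput per assigned worker.
      measured_throughput = {"scan": 1200.0, "join": 300.0, "aggregate": 800.0}  # tuples/s (invented)
      allocation = {op: 1 for op in measured_throughput}                          # workers per operator
      free_workers = 5

      for _ in range(free_workers):
          # Per-worker throughput identifies the current bottleneck operator.
          bottleneck = min(measured_throughput,
                           key=lambda op: measured_throughput[op] / allocation[op])
          allocation[bottleneck] += 1

      print(allocation)   # the slow join operator ends up with the most workers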

  3. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  4. YAMM - Yet Another Menu Manager

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Weidner, Richard J.

    1991-01-01

    Yet Another Menu Manager (YAMM) computer program is an application-independent menuing software package designed to remove much of the difficulty and save much of the time inherent in implementing the front ends of large software packages. Provides complete menuing front end for wide variety of applications, with provisions for independence from specific types of terminals, configurations that meet specific needs of users, and dynamic creation of menu trees. Consists of two parts: description of menu configuration and body of application code. Written in C.

  5. Software Description for the O’Hare Runway Configuration Management System. Volume I. Technical Description,

    DTIC Science & Technology

    1982-10-01

    EXECUTIVE SUMMARY: The O'Hare Runway Configuration Management System (CMS) is an interactive multi-user computer ... MITRE Washington's Computer Center. Currently, CMS is housed in an IBM 4341 computer with the VM/SP operating system. CMS employs IBM's Display ... At O'Hare, it will operate on a dedicated minicomputer which permits multi-tasking (that is, multiple users ...

  6. The Macintosh Based Design Studio.

    ERIC Educational Resources Information Center

    Earle, Daniel W., Jr.

    1988-01-01

    Describes the configuration of a workstation for a college design studio based on the Macintosh Plus microcomputer. Highlights include cost estimates, computer hardware peripherals, computer aided design software, networked studios, and potentials for new approaches to design activity in the computer based studio of the future. (Author/LRW)

  7. Improvements to information management systems simulator

    NASA Technical Reports Server (NTRS)

    Bilek, R. W.

    1972-01-01

    The performance of personnel in the augmentation and improvement of the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, data processing loads imposed on these configurations, and executive software to control system operations. Through these simulations, NASA has an extremely cost effective capability for the design and analysis of computer-based data management systems.

  8. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
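
    The rule-based integration idea summarized above (choose translators and adapters from the types of the components involved) can be pictured as a small graph search over available converters; the formats and adapter table below are invented for illustration and are not the paper's actual rule set.

      from collections import deque

      # Available adapters as (source format -> target formats) edges (invented).
      ADAPTERS = {
          "fortran_record": ["xdr"],
          "xdr": ["c_struct"],
          "c_struct": ["json"],
      }

      def adapter_chain(src: str, dst: str):
          # Breadth-first search returns the shortest conversion path, if any.
          queue = deque([[src]])
          seen = {src}
          while queue:
              path = queue.popleft()
              if path[-1] == dst:
                  return path
              for nxt in ADAPTERS.get(path[-1], []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None

      print(adapter_chain("fortran_record", "json"))  # ['fortran_record', 'xdr', 'c_struct', 'json']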

  9. Sending Learning Pills to Mobile Devices in Class to Enhance Student Performance and Motivation in Network Services Configuration Courses

    ERIC Educational Resources Information Center

    Munoz-Organero, M.; Munoz-Merino, P. J.; Kloos, C. D.

    2012-01-01

    Teaching electrical and computer software engineers how to configure network services normally requires the detailed presentation of many configuration commands and their numerous parameters. Students tend to find it difficult to maintain acceptable levels of motivation. In many cases, this results in their not attending classes and not dedicating…

  10. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. In order to carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following software: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for the fluid-dynamic force prediction. The process integration gives the possibility to compute, for each geometrical configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally an automatic optimization procedure is started and the lap time minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.
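
    The loop described above (geometry, then aerodynamic coefficients, then lap-time simulation, then the optimizer) can be caricatured in a few lines. The coefficient model, lap-time formula, and single design variable below are invented stand-ins for the CAD/CFD/multibody chain, not the authors' models.

      # Toy lap-time minimization over one geometric parameter (rear-wing angle),
      # standing in for the CAD -> CFD -> multibody chain driven by the optimizer.
      def aero_coefficients(wing_angle_deg: float):
          # Invented surrogate: more wing gives more downforce and more drag.
          cl = 0.08 * wing_angle_deg
          cd = 0.60 + 0.002 * wing_angle_deg ** 2
          return cl, cd

      def lap_time(wing_angle_deg: float) -> float:
          # Invented surrogate lap-time model: drag slows the straights,
          # downforce (with diminishing returns) speeds up the corners.
          cl, cd = aero_coefficients(wing_angle_deg)
          return 60.0 + 8.0 * cd - 3.0 * cl / (1.0 + 0.1 * cl)

      best = min(range(0, 31), key=lap_time)   # crude 1-D search over 0..30 degrees
      print(f"best wing angle: {best} deg, lap time: {lap_time(best):.2f} s")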

  11. A digital controller for variable thrust liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Feng, X.; Zhang, Y. L.; Chen, Q. Z.

    1993-06-01

    The paper describes the design and development of a built-in digital controller (BDC) for the variable thrust liquid rocket engine (VTLRE). Particular attention is given to the functional requirements of the BDC, the hardware and software configuration, and the testing process, as well as to the VTLRE real-time computer simulation system used for the development of the BDC. A diagram of the VTLRE control system is presented, as well as block diagrams illustrating the hardware and software configuration of the BDC.

  12. Assessment of the Unstructured Grid Software TetrUSS for Drag Prediction of the DLR-F4 Configuration

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Frink, Neal T.

    2002-01-01

    An application of the NASA unstructured grid software system TetrUSS is presented for the prediction of aerodynamic drag on a transport configuration. The paper briefly describes the underlying methodology and summarizes the results obtained on the DLR-F4 transport configuration recently presented in the first AIAA computational fluid dynamics (CFD) Drag Prediction Workshop. TetrUSS is a suite of loosely coupled unstructured grid CFD codes developed at the NASA Langley Research Center. The meshing approach is based on the advancing-front and the advancing-layers procedures. The flow solver employs a cell-centered, finite volume scheme for solving the Reynolds Averaged Navier-Stokes equations on tetrahedral grids. For the present computations, flow in the viscous sublayer has been modeled with an analytical wall function. The emphasis of the paper is placed on the practicality of the methodology for accurately predicting aerodynamic drag data.

  13. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has wing, fuselage, vertical tail, and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software is developed which dynamically changes the surface of the airplane configuration as the input design variables change. The software is made user-friendly and is targeted towards the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the Automatic Differentiation precompiler software tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.

  14. Real-Time Monitoring of Scada Based Control System for Filling Process

    NASA Astrophysics Data System (ADS)

    Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi

    2008-10-01

    This paper presents a design of real-time monitoring for a filling system using Supervisory Control and Data Acquisition (SCADA). The monitoring of the production process is implemented in real time using Visual Basic .NET programming under Visual Studio 2005, without commercial SCADA software. The software integrators are programmed to get the required information for the configuration screens. Simulation of components is shown on the computer screen using a parallel port between the computer and the filling devices. Programs for real-time simulation of the filling process from the pure drinking water industry are provided.

  15. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  16. An Inviscid Computational Study of the Space Shuttle Orbiter and Several Damaged Configurations

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Merski, N. Ronald (Technical Monitor)

    2004-01-01

    Inviscid aerodynamic characteristics of the Space Shuttle Orbiter were computed in support of the Columbia Accident Investigation. The unstructured grid software FELISA was used, and computations were done using freestream conditions corresponding to those in the NASA Langley 20-Inch Mach 6 CF4 tunnel test section. The angle of attack was held constant at 40 degrees. The baseline (undamaged) configuration and a large number of damaged configurations of the Orbiter were studied. Most of the computations were done on a half model. However, one set of computations was done using the full model to study the effect of sideslip. The differences in the aerodynamic coefficients for the damaged and the baseline configurations were computed. Simultaneously with the computation reported here, tests were being done on a scale model of the Orbiter in the 20-Inch Mach 6 CF4 tunnel to measure the deltas. The present computations complemented the CF4 tunnel test, and provided aerodynamic coefficients of the Orbiter as well as its components. Further, they also provided details of the flow field.

  17. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers, one located in each of four division laboratories, and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost-effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  18. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault- and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design, and detailed specifications for all the local system services are documented.

  19. Eric Bonnema | NREL

    Science.gov Websites

    Contributes to the research efforts for commercial buildings. This effort is dedicated to studying commercial-sector whole-building energy simulation, scientific computing, and software configuration and…

  20. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    NASA Technical Reports Server (NTRS)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unmanned automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.
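
    The framework described above is essentially a test harness that sweeps starting configurations through the boot simulation and records pass/fail results. The sketch below illustrates that pattern only; the configuration fields and the run_boot_simulation stub are hypothetical stand-ins, not names from the MSL project.

```python
"""Minimal sketch of an automated boot-test harness built around a callable
software simulation. All names (BootConfig fields, run_boot_simulation) are
hypothetical stand-ins, not taken from the MSL flight software project."""

import itertools
from dataclasses import dataclass

@dataclass
class BootConfig:
    battery_charge: float      # fraction of full charge at wake-up
    prime_computer: str        # which flight computer is prime ("A" or "B")
    nvram_corrupted: bool      # simulate a corrupted non-volatile memory bank

def run_boot_simulation(cfg: BootConfig) -> bool:
    """Stand-in for the real software simulation: returns True if the
    simulated boot reaches a nominal 'awake' state."""
    # Placeholder logic only -- the real simulation models hardware state.
    return cfg.battery_charge > 0.2 and not cfg.nvram_corrupted

def sweep_nominal_cases():
    charges = [0.3, 0.6, 1.0]
    primes = ["A", "B"]
    corrupt = [False]          # off-nominal corruption cases would be added later
    results = []
    for charge, prime, bad_nvram in itertools.product(charges, primes, corrupt):
        cfg = BootConfig(charge, prime, bad_nvram)
        ok = run_boot_simulation(cfg)
        results.append((cfg, ok))
        print(f"{cfg} -> {'PASS' if ok else 'FAIL'}")
    return results

if __name__ == "__main__":
    sweep_nominal_cases()
```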

  1. Configurable software for satellite graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartzman, P D

    An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during realtime operation. The design of the system was done at a very high level using set theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.

  2. Cost-effective and business-beneficial computer validation for bioanalytical laboratories.

    PubMed

    McDowall, Rd

    2011-07-01

    Computerized system validation is often viewed as a burden and a waste of time to meet regulatory requirements. This article presents a different approach by looking at validation in a bioanalytical laboratory from the business benefits that computer validation can bring. Ask yourself the question, have you ever bought a computerized system that did not meet your initial expectations? This article will look at understanding the process to be automated, the paper to be eliminated and the records to be signed to meet the requirements of the GLP or GCP and Part 11 regulations. This paper will only consider commercial nonconfigurable and configurable software such as plate readers and LC-MS/MS data systems rather than LIMS or custom applications. Two streamlined life cycle models are presented. The first one consists of a single document for validation of nonconfigurable software. The second is for configurable software and is a five-stage model that avoids the need to write functional and design specifications. Both models are aimed at managing the risk each type of software poses whilst reducing the amount of documented evidence required for validation.

  3. A Procedure for Measuring Latencies in Brain-Computer Interfaces

    PubMed Central

    Wilson, J. Adam; Mellinger, Jürgen; Schalk, Gerwin; Williams, Justin

    2011-01-01

    Brain-computer interface (BCI) systems must process neural signals with consistent timing in order to support adequate system performance. Thus, it is important to have the capability to determine whether a particular BCI configuration (i.e., hardware, software) provides adequate timing performance for a particular experiment. This report presents a method of measuring and quantifying different aspects of system timing in several typical BCI experiments across a range of settings, and presents comprehensive measures of expected overall system latency for each experimental configuration. PMID:20403781
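
    The kind of timing measurement described can be illustrated with a simple loop that timestamps a processing stage and summarizes the resulting latency distribution. The sketch below is a generic illustration; process_block is a hypothetical stand-in for a BCI signal-processing stage, not part of the authors' measurement procedure.

```python
"""Sketch of latency measurement for a block-based processing loop.
process_block() is a hypothetical stand-in for a BCI processing stage."""

import time
import statistics

def process_block(block):
    # Stand-in workload; a real system would filter/classify neural data here.
    return sum(block) / len(block)

def measure_latencies(n_blocks=1000, block_len=256):
    latencies_ms = []
    block = [0.0] * block_len
    for _ in range(n_blocks):
        t0 = time.perf_counter()
        process_block(block)
        latencies_ms.append((time.perf_counter() - t0) * 1e3)
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "stdev_ms": statistics.pstdev(latencies_ms),
        "max_ms": max(latencies_ms),
    }

if __name__ == "__main__":
    print(measure_latencies())
```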

  4. TH-A-12A-01: Medical Physicist's Role in Digital Information Security: Threats, Vulnerabilities and Best Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, K; Curran, B

    I. Information Security Background (Speaker: Kevin McDonald): evolution of medical devices; living and working in a hostile environment; attack motivations; attack vectors; simple safety strategies; medical device security in the news; medical devices and vendors; summary. II. Keeping Radiation Oncology IT Systems Secure (Speaker: Bruce Curran): hardware security (double-lock requirements, "foreign" computer systems, portable device encryption, patient data storage system requirements); network configuration (isolating critical devices, isolating clinical networks, remote access considerations); software applications and configuration (passwords and screen savers, restricted services and access, software configuration restriction, use of DNS to restrict access, patches and upgrades); awareness (intrusion prevention, intrusion detection, threat risk analysis); conclusion. Learning Objectives: Understand how hospital IT requirements affect radiation oncology IT systems. Illustrate sample practices for hardware, network, and software security. Discuss implementation of good IT security practices in radiation oncology. Understand the overall risk and threat scenario in a networked environment.

  5. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    USDA-ARS?s Scientific Manuscript database

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  6. Cost-Effectiveness of Alternative Approaches to Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Levin, Henry M.; And Others

    Operating on the premise that different approaches to computer-assisted instruction (CAI) may use different configurations of hardware and software, different curricula, and different organizational and personnel arrangements, this study explored the feasibility of collecting evaluations of CAI to evaluate the comparative cost-effectiveness of…

  7. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  8. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
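
    The reported speed penalty is simply the relative increase in runtime when a benchmark is executed inside a virtual machine. The snippet below shows that calculation on made-up timings; the package names and numbers are placeholders, not the measurements from the paper.

```python
"""Computes the virtualization speed penalty from paired benchmark runtimes.
The package names and timings below are illustrative placeholders, not the
measurements reported in the paper."""

benchmarks = {
    # package: (native_runtime_s, vm_runtime_s)
    "package_A": (120.0, 127.5),
    "package_B": (300.0, 318.0),
    "package_C": (45.0, 49.1),
}

for name, (native, vm) in benchmarks.items():
    penalty_pct = 100.0 * (vm - native) / native
    print(f"{name}: virtualization penalty = {penalty_pct:.1f}%")
```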

  9. How Elected Officials Can Control Computer Costs.

    ERIC Educational Resources Information Center

    Grady, Daniel B.

    Elected officials have a special obligation to monitor and make informed decisions about computer expenditures. In doing so, officials should insist that a needs assessment be carried out; review all cost and configuration data; draw up a master plan specifying user needs as well as hardware, software, and personnel requirements; and subject…

  10. Plotit-method of interactively plotting input data for the vorlax computer program. [computerized aircraft configuration design

    NASA Technical Reports Server (NTRS)

    Denn, F. M.

    1978-01-01

    Geometric input plotting to the VORLAX computer program by means of an interactive remote terminal is reported. The software consists of a procedure file and two programs. The programs and procedure file are described and a sample execution is presented.

  11. An Inviscid Computational Study of Three '07 Mars Lander Aeroshell Configurations Over a Mach Number Range of 2.3 to 4.5

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Sutton, Kenneth (Technical Monitor)

    2001-01-01

    This report documents the results of a study conducted to compute the inviscid longitudinal aerodynamic characteristics of three aeroshell configurations of the proposed '07 Mars lander. This was done in support of the activity to design a smart lander for the proposed '07 Mars mission. In addition to the three configurations with tabs designated as the shelf, the canted, and the Ames, the baseline configuration (without tab) was also studied. The unstructured grid inviscid CFD software FELISA was used, and the longitudinal aerodynamic characteristics of the four configurations were computed for Mach numbers of 2.3, 2.7, 3.5, and 4.5, and for an angle of attack range of -4 to 20 degrees. Wind tunnel tests had been conducted on scale models of these four configurations in the Unitary Plan Wind Tunnel, NASA Langley Research Center. Present computational results are compared with the data from these tests. Some differences are noticed between the two results, particularly at the lower Mach numbers. These differences are attributed to the pressures acting on the aft body. Most of the present computations were done on the forebody only. Additional computations were done on the full body (forebody and afterbody) for the baseline and the shelf configurations. Results of some computations done (to simulate flight conditions) with the Mars gas option and with an effective gamma are also included.

  12. A Software Implementation of a Satellite Interface Message Processor.

    ERIC Educational Resources Information Center

    Eastwood, Margaret A.; Eastwood, Lester F., Jr.

    A design for network control software for a computer network is described in which some nodes are linked by a communications satellite channel. It is assumed that the network has an ARPANET-like configuration; that is, that specialized processors at each node are responsible for message switching and network control. The purpose of the control…

  13. The Company Approach to Software Engineering Project Courses

    ERIC Educational Resources Information Center

    Broman, D.; Sandahl, K.; Abu Baker, M.

    2012-01-01

    Teaching larger software engineering project courses at the end of a computing curriculum is a way for students to learn some aspects of real-world jobs in industry. Such courses, often referred to as capstone courses, are effective for learning how to apply the skills they have acquired in, for example, design, test, and configuration management.…

  14. Common Database Interface for Heterogeneous Software Engineering Tools.

    DTIC Science & Technology

    1987-12-01

    Subject terms: database management systems; programming (computers); computer files; information transfer; interfaces. Air Force Institute of Technology, Air University; in partial fulfillment of the requirements for the degree of Master of Science in Information Systems. Table-of-contents fragments: System 690 Configuration; Database Functions; Software Engineering Environments; Data Manager.

  15. XpressWare Installation User guide

    NASA Astrophysics Data System (ADS)

    Duffey, K. P.

    XpressWare is a set of X terminal software, released by Tektronix Inc, that accommodates the X Window system on a range of host computers. The software comprises boot files (the X server image), configuration files, fonts, and font tools to support the X terminal. The files can be installed on one host or distributed across multiple hosts. The purpose of this guide is to present the system or network administrator with a step-by-step account of how to install XpressWare, and how subsequently to configure the X terminals appropriately for the environment in which they operate.

  16. Inviscid Flow Computations of Two '07 Mars Lander Aeroshell Configurations Over a Mach Number Range of 2 to 24

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    2001-01-01

    This report documents the results of an inviscid computational study conducted on two aeroshell configurations for a proposed '07 Mars Lander. The aeroshell configurations are asymmetric due to the presence of tabs at the maximum diameter location. The purpose of these tabs was to change the pitching moment characteristics so that the aeroshell will trim at a non-zero angle of attack and produce a lift-to-drag ratio of approximately -0.25, which is required for the guidance of the vehicle on its trajectory. One of the two configurations is called the shelf and the other is called the tab. The unstructured grid software FELISA with the equilibrium Mars gas option was used for these computations. The computations were done for six points on a preliminary trajectory of the '07 Mars Lander at nominal Mach numbers of 2, 3, 5, 10, 15, and 24. Longitudinal aerodynamic characteristics, namely lift, drag, and pitching moment, were computed for 10, 15, and 20 degrees angles of attack. The results indicated that the two configurations have very similar aerodynamic characteristics and provide the desired trim L/D of approximately -0.25.

  17. Apollo LM guidance computer software for the final lunar descent.

    NASA Technical Reports Server (NTRS)

    Eyles, D.

    1973-01-01

    In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.

  18. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers a powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  19. Configuration Analysis Tool (CAT). System Description and users guide (revision 1)

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Mcgarry, F. E.; Merwarth, P.

    1982-01-01

    A system description of, and user's guide for, the Configuration Analysis Tool (CAT) are presented. As a configuration management tool, CAT enhances the control of large software systems by providing a repository for information describing the current status of a project. CAT provides an editing capability to update the information and a reporting capability to present the information. CAT is an interactive program available in versions for the PDP-11/70 and VAX-11/780 computers.

  20. Using the U.S. Bureau of the Census CD-ROM Test Disc 2: A Note.

    ERIC Educational Resources Information Center

    Staninger, Steven W.

    1991-01-01

    This evaluation of means of accessing data on the U.S. Census Bureau CD-ROM Test Disc 2 describes computer hardware configurations considered in determining the most efficient method of manipulating the data, and mentions software options. Two dBASE configurations are compared, and it is concluded that the 80386 is essential for efficient…

  1. HSCT4.0 Application: Software Requirements Specification

    NASA Technical Reports Server (NTRS)

    Salas, A. O.; Walsh, J. L.; Mason, B. H.; Weston, R. P.; Townsend, J. C.; Samareh, J. A.; Green, L. L.

    2001-01-01

    The software requirements for the High Performance Computing and Communication Program High Speed Civil Transport application project, referred to as HSCT4.0, are described. The objective of the HSCT4.0 application project is to demonstrate the application of high-performance computing techniques to the problem of multidisciplinary design optimization of a supersonic transport configuration, using high-fidelity analysis simulations. Descriptions of the various functions (and the relationships among them) that make up the multidisciplinary application as well as the constraints on the software design are provided. This document serves to establish an agreement between the suppliers and the customer as to what the HSCT4.0 application should do and provides to the software developers the information necessary to design and implement the system.

  2. Design and implementation of a Windows NT network to support CNC activities

    NASA Technical Reports Server (NTRS)

    Shearrow, C. A.

    1996-01-01

    The Manufacturing, Materials, & Processes Technology Division is undergoing dramatic changes to bring its manufacturing practices current with today's technological revolution. The Division is developing Computer Automated Design and Computer Automated Manufacturing (CAD/CAM) abilities. The development of resource tracking is underway in the form of an accounting software package called Infisy. These two efforts will bring the division into the 1980's in relationship to manufacturing processes. Computer Integrated Manufacturing (CIM) is the final phase of change to be implemented. This document is a qualitative study and application of a CIM application capable of finishing the changes necessary to bring the manufacturing practices into the 1990's. The documentation provided in this qualitative research effort includes discovery of the current status of manufacturing in the Manufacturing, Materials, & Processes Technology Division, including the software, hardware, network, and mode of operation. The proposed direction of research included a network design, computers to be used, software to be used, machine-to-computer connections, an estimated timeline for implementation, and a cost estimate. Recommendations for the division's improvement include actions to be taken, software to utilize, and computer configurations.

  3. Identification of delaminations in composite: structural health monitoring software based on spectral estimation and hierarchical genetic algorithm

    NASA Astrophysics Data System (ADS)

    Nag, A.; Mahapatra, D. Roy; Gopalakrishnan, S.

    2003-10-01

    A hierarchical Genetic Algorithm (GA) is implemented in a high-performance spectral finite element software for identification of delaminations in laminated composite beams. In smart structural health monitoring, the number of delaminations (or any other modes of damage) as well as their locations and sizes are not completely known. Only known are the healthy structural configuration (mass, stiffness and damping matrices updated from previous phases of monitoring), sensor measurements and some information about the load environment. To handle such enormous complexity, a hierarchical GA is used to represent a heterogeneous population consisting of damaged structures with different numbers of delaminations, and their evolution process is used to identify the correct damage configuration in the structures under monitoring. We consider this similarity with the evolution process in heterogeneous populations of species in nature to develop an automated procedure to decide on what possible damaged configuration might have produced the deviation in the measured signals. Computational efficiency of the identification task is demonstrated by considering a single delamination. The behavior of the fitness function in the GA, which is an important factor for fast convergence, is studied for single and multiple delaminations. Several advantages of the approach in terms of computational cost are discussed. Besides tackling different other types of damage configurations, further scope of research for development of hybrid soft-computing modules is highlighted.
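
    The core idea, a population of candidate damage configurations whose fitness is the mismatch between simulated and measured signals, can be sketched with a toy genetic algorithm over variable-length lists of (location, size) delaminations. The forward model and all parameters below are hypothetical stand-ins for the spectral finite element solver, not the authors' implementation.

```python
"""Toy sketch of a GA over variable-length damage configurations.
simulate_response() stands in for the spectral finite element model; the
'measured' signal is synthetic. Not the authors' algorithm, just the idea."""

import random
import numpy as np

N_POINTS = 64

def simulate_response(delams):
    """Hypothetical forward model: each delamination (location, size) in [0,1]
    perturbs a baseline response."""
    x = np.linspace(0.0, 1.0, N_POINTS)
    resp = np.sin(2 * np.pi * x)
    for loc, size in delams:
        resp += size * np.exp(-((x - loc) ** 2) / 0.01)
    return resp

TRUE_DAMAGE = [(0.35, 0.4)]
MEASURED = simulate_response(TRUE_DAMAGE)

def fitness(delams):
    # Negative mean-square mismatch between simulated and measured signals.
    return -float(np.mean((simulate_response(delams) - MEASURED) ** 2))

def random_individual():
    n = random.randint(1, 3)  # the number of delaminations is itself unknown
    return [(random.random(), random.random()) for _ in range(n)]

def mutate(ind):
    ind = [(min(max(loc + random.gauss(0, 0.05), 0), 1),
            min(max(size + random.gauss(0, 0.05), 0), 1)) for loc, size in ind]
    if random.random() < 0.1:  # occasionally add or remove a delamination
        if len(ind) > 1 and random.random() < 0.5:
            ind.pop(random.randrange(len(ind)))
        else:
            ind.append((random.random(), random.random()))
    return ind

def evolve(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best damage estimate:", [(round(l, 2), round(s, 2)) for l, s in best])
```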

  4. The SOFIA Mission Control System Software

    NASA Astrophysics Data System (ADS)

    Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.

    1999-05-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: * distributed computing over several UNIX and VxWorks computers * fast throughput of time-critical data * use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA) * extensive configurability via stored, editable configuration files * use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.

  5. Compiling for Application Specific Computational Acceleration in Reconfigurable Architectures Final Report CRADA No. TSB-2033-01

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Supinski, B.; Caliga, D.

    2017-09-28

    The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA") based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures that will be a cost-effective solution for large-scale scientific computing.

  6. Virtual pools for interactive analysis and software development through an integrated Cloud environment

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.

    2011-12-01

    WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.

  7. Ground Collision Avoidance System (Igcas)

    NASA Technical Reports Server (NTRS)

    Prosser, Kevin (Inventor); Hook, Loyd (Inventor); Skoog, Mark A (Inventor)

    2017-01-01

    The present invention is a system and method for aircraft ground collision avoidance (iGCAS) comprising a modular array of software, including a sense own state module configured to gather data to compute trajectory, a sense terrain module including a digital terrain map (DTM) and map manager routine to store and retrieve terrain elevations, a predict collision threat module configured to generate an elevation profile corresponding to the terrain under the trajectory computed by said sense own state module, a predict avoidance trajectory module configured to simulate avoidance maneuvers ahead of the aircraft, a determine need to avoid module configured to determine which avoidance maneuver should be used, when it should be initiated, and when it should be terminated, a notify module configured to display each maneuver's viability to the pilot by a colored GUI, a pilot controls module configured to turn the system on and off, and an avoid module configured to define how an aircraft will perform avoidance maneuvers through 3-dimensional space.
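
    The modular structure listed in the abstract can be pictured as a per-cycle pipeline in which each module feeds the next. The skeleton below is purely illustrative; every function is a hypothetical stub and none of the logic is taken from the patented system.

```python
"""Illustrative skeleton of a ground-collision-avoidance cycle built from the
modules named in the abstract. Every function here is a hypothetical stub,
not the patented implementation."""

def sense_own_state():
    # Would gather position/velocity to compute the predicted trajectory.
    return {"alt_ft": 500.0, "descent_rate_fps": 120.0}

def sense_terrain(state):
    # Would query a digital terrain map under the predicted trajectory.
    return {"terrain_elev_ft": 200.0}

def predict_collision_threat(state, terrain):
    # Time until the predicted trajectory intersects the terrain profile.
    clearance = state["alt_ft"] - terrain["terrain_elev_ft"]
    return clearance / max(state["descent_rate_fps"], 1e-6)

def predict_avoidance_trajectories(state):
    # Would simulate candidate maneuvers; here just assumed recovery times (s).
    return {"pull_up": 2.0, "climbing_turn": 3.0}

def determine_need_to_avoid(time_to_impact, maneuvers):
    # A maneuver is viable if it completes before impact; act when time to
    # impact drops to the fastest maneuver's recovery time.
    viable = {m: t for m, t in maneuvers.items() if t < time_to_impact}
    must_act = time_to_impact <= min(maneuvers.values())
    return must_act, viable

def gcas_cycle():
    state = sense_own_state()
    terrain = sense_terrain(state)
    tti = predict_collision_threat(state, terrain)
    must_act, viable = determine_need_to_avoid(
        tti, predict_avoidance_trajectories(state))
    print(f"time to impact {tti:.1f}s, viable maneuvers {list(viable)}, act now: {must_act}")

if __name__ == "__main__":
    gcas_cycle()
```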

  8. Avoiding Defect Nucleation during Equilibration in Molecular Dynamics Simulations with ReaxFF

    DTIC Science & Technology

    2015-04-01

    respectively. All simulations are performed using the LAMMPS computer code (Plimpton, S., "Fast parallel algorithms for short-range molecular dynamics," J. Comput. Phys. 117, 1–19, 1995; software available at http://lammps.sandia.gov). Fig. 1: (a) initial and (b) final configurations of the molecular centers…

  9. UFMulti: A new parallel processing software system for HEP

    NASA Astrophysics Data System (ADS)

    Avery, Paul; White, Andrew

    1989-12-01

    UFMulti is a multiprocessing software package designed for general purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.

  10. Space shuttle configuration accounting functional design specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.

  11. Enhanced Data Authentication System v. 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Maikael A.; Tolsch, Brandon Jeffrey; Schwartz, Steven Robert

    EDAS is a system, comprised of hardware and software, that plugs in to an existing data stream and branches all data for transmission to a secondary observer computer. The EDAS Junction Box, which inserts into the data stream, has Java software that forms these data into packets, digitally signs, encrypts, and sends these packets to a safeguards inspector computer. Further, there is a second Java program running on the secondary observer computer that receives data from the EDAS Junction Box to decrypt, authenticate, and store incoming packets. Also, there is a stand-alone Java program that is used to configure the EDAS Junction Box.
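
    The sign-and-transmit/verify-and-store pattern described here can be sketched with standard-library primitives. The example below uses an HMAC tag as a stand-in for EDAS's digital signature and omits the encryption step entirely; it is a conceptual illustration, not EDAS code.

```python
"""Conceptual sketch of authenticated packet framing between a junction box
and an observer computer. Uses HMAC from the standard library as a stand-in
for EDAS's digital signature; the real system uses digital signatures and
also encrypts the payload, which is omitted here."""

import hmac
import hashlib
import json
import time

SHARED_KEY = b"demo-key-not-for-real-use"

def make_packet(payload: bytes) -> bytes:
    # Frame the data with a timestamp, then attach an authentication tag.
    body = {"timestamp": time.time(), "data": payload.hex()}
    blob = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "tag": tag}).encode()

def verify_packet(packet: bytes) -> bytes:
    # Receiver side: recompute the tag and reject tampered packets.
    msg = json.loads(packet)
    blob = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        raise ValueError("authentication failed; discard packet")
    return bytes.fromhex(msg["body"]["data"])

if __name__ == "__main__":
    pkt = make_packet(b"detector counts: 42")
    print("recovered:", verify_packet(pkt))
```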

  12. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  13. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

    The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software is described that is common to all of these computers, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described is file management, validation, SNS configuration, documentation, and customer services.

  14. Security Risks of Cloud Computing and Its Emergence as 5th Utility Service

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    Cloud Computing is being projected by the major cloud services provider IT companies such as IBM, Google, Yahoo, Amazon and others as a fifth utility, where clients will have access to processing for applications and software projects that need very high processing speed and huge data capacity for compute-intensive scientific and engineering research problems, as well as for e-business and data content network applications. These services for different types of clients are provided under DASM (Direct Access Service Management), based on virtualization of hardware, software and very high bandwidth Internet (Web 2.0) communication. The paper reviews these developments for Cloud Computing and the hardware/software configuration of the cloud paradigm. The paper also examines the vital aspects of the security risks projected by IT industry experts and cloud clients, and highlights the cloud providers' response to cloud security risks.

  15. Multiplexer/Demultiplexer Loading Tool (MDMLT)

    NASA Technical Reports Server (NTRS)

    Brewer, Lenox Allen; Hale, Elizabeth; Martella, Robert; Gyorfi, Ryan

    2012-01-01

    The purpose of the MDMLT is to improve the reliability and speed of loading multiplexers/demultiplexers (MDMs) in the Software Development and Integration Laboratory (SDIL) by automating the configuration management (CM) of the loads in the MDMs, automating the loading procedure, and providing the capability to load multiple or all MDMs concurrently. Loading may be accomplished in parallel or for single MDMs (remotely). The MDMLT is a Web-based tool that is capable of loading the entire International Space Station (ISS) MDM configuration in parallel. It is able to load Flight Equivalent Units (FEUs), enhanced, standard, and prototype MDMs as well as both EEPROM (Electrically Erasable Programmable Read-Only Memory) and SSMMU (Solid State Mass Memory Unit) (mass memory). This software has extensive configuration management to track loading history, and offers a major performance improvement: the entire ISS MDM configuration of 49 MDMs can be loaded in approximately 30 minutes, as opposed to the 36 hours previously required using the flight method of S-band uplink. The laptop version recently added to the MDMLT suite allows remote lab loading, with the CM information entered into a common database when it is reconnected to the network. This allows the program to reconfigure the test rigs quickly between shifts, allowing the lab to support a variety of onboard configurations during a single day, based on upcoming or current missions. The MDMLT Computer Software Configuration Item (CSCI) supports a Web-based command and control interface to the user. An interface to the SDIL File Transfer Protocol (FTP) server is supported to import Integrated Flight Loads (IFLs) and Internal Product Release Notes (IPRNs) into the database. An interface to the Monitor and Control System (MCS) is supported to control the power state, and to enable or disable the debug port of the MDMs to be loaded. Two direct interfaces to the MDM are supported: a serial interface (debug port) to receive MDM memory dump data and the calculated checksum, and the Small Computer System Interface (SCSI) to transfer load files to MDMs with hard disks. File transfer from the MDM Loading Tool to EEPROM within the MDM is performed via the MIL-STD-1553 bus, making use of the Real-Time Input/Output Processors (RTIOPs) when using the rig-based MDMLT, and via a bus box when using the laptop MDMLT. The bus box is a cost-effective alternative to PC-1553 cards for the laptop. It is noted that this system can be modified and adapted to any avionics laboratory for spacecraft computer loading, ship avionics, or aircraft avionics where multiple configurations and strong configuration management of software/firmware loads are required.

  16. Modular Analytical Multicomponent Analysis in Gas Sensor Arrays

    PubMed Central

    Chaiyboun, Ali; Traute, Rüdiger; Kiesewetter, Olaf; Ahlers, Simon; Müller, Gerhard; Doll, Theodor

    2006-01-01

    A multi-sensor system is a chemical sensor system which quantitatively and qualitatively records gases with a combination of cross-sensitive gas sensor arrays and pattern recognition software. This paper addresses the issue of data analysis for identification of gases in a gas sensor array. We introduce a software tool for gas sensor array configuration and simulation, a modular software package for the acquisition of data from different sensors. A signal evaluation algorithm referred to as the matrix method was used specifically for the software tool. This matrix method computes the gas concentrations from the signals of a sensor array. The software tool was used for the simulation of an array of five sensors to determine gas concentrations of CH4, NH3, H2, CO and C2H5OH. The results of the present simulated sensor array indicate that the software tool is capable of the following: (a) identifying a gas independently of its concentration; (b) estimating the concentration of the gas, even if the system was not previously exposed to this concentration; (c) telling when a gas concentration exceeds a certain value. A gas sensor database was built for the configuration of the software. With the database one can create, generate and manage scenarios and source files for the simulation. With the gas sensor database and the simulation software, an on-line Web-based version was developed with which the user can configure and simulate sensor arrays on-line.
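
    If each sensor responds approximately linearly, the matrix method amounts to inverting a sensitivity matrix: the signal vector s relates to the concentration vector c by s ≈ M·c, and c follows from a least-squares solve. The sketch below uses made-up sensitivity values and concentrations, not calibration data from the paper.

```python
"""Least-squares sketch of the 'matrix method': recover gas concentrations c
from sensor signals s assuming a linear response s = M @ c. The sensitivity
matrix and concentrations are made-up numbers, not data from the paper."""

import numpy as np

gases = ["CH4", "NH3", "H2", "CO", "C2H5OH"]

# Rows: sensors, columns: gases (signal per unit concentration). Illustrative only.
M = np.array([
    [0.9, 0.1, 0.2, 0.1, 0.0],
    [0.1, 0.8, 0.1, 0.0, 0.1],
    [0.2, 0.1, 0.7, 0.2, 0.1],
    [0.1, 0.0, 0.2, 0.9, 0.1],
    [0.0, 0.1, 0.1, 0.1, 0.8],
])

c_true = np.array([2.0, 0.0, 1.0, 0.5, 0.0])   # hypothetical concentrations
s = M @ c_true + np.random.normal(0, 0.01, 5)  # measured signals with noise

c_est, *_ = np.linalg.lstsq(M, s, rcond=None)  # least-squares inversion
for gas, c in zip(gases, c_est):
    print(f"{gas}: {c:.2f}")
```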

  17. Using Open Source Software in Visual Simulation Development

    DTIC Science & Technology

    2005-09-01

    increased the use of the technology in training activities. Using open source/free software tools in the process can expand these possibilities, resulting in even greater cost reduction and allowing the flexibility needed in a training environment. This thesis presents a configuration and architecture to be used when developing training visual simulations using both personal computers and open source tools. Aspects of the requirements needed in a…

  18. Flexible and Secure Computer-Based Assessment Using a Single Zip Disk

    ERIC Educational Resources Information Center

    Ko, C. C.; Cheng, C. D.

    2008-01-01

    Electronic examination systems, which include Internet-based systems, require extremely complicated installation, configuration and maintenance of software as well as hardware. In this paper, we present the design and development of a flexible, easy-to-use and secure examination system (e-Test), in which any commonly used computer can be used as a…

  19. Lessons learned in creating spacecraft computer systems: Implications for using Ada (R) for the space station

    NASA Technical Reports Server (NTRS)

    Tomayko, James E.

    1986-01-01

    Twenty-five years of spacecraft onboard computer development have resulted in a better understanding of the requirements for effective, efficient, and fault tolerant flight computer systems. Lessons from eight flight programs (Gemini, Apollo, Skylab, Shuttle, Mariner, Voyager, and Galileo) and three research programs (digital fly-by-wire, STAR, and the Unified Data System) are useful in projecting the computer hardware configuration of the Space Station and the ways in which the Ada programming language will enhance the development of the necessary software. The evolution of hardware technology, fault protection methods, and software architectures used in space flight is reviewed to provide insight into the pending development of such items for the Space Station.

  20. Inspection design using 2D phased array, TFM and cueMAP software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGilp, Ailidh; Dziewierz, Jerzy; Lardner, Tim

    2014-02-18

    A simulation suite, cueMAP, has been developed to facilitate the design of inspection processes and sparse 2D array configurations. At the core of cueMAP is a Total Focusing Method (TFM) imaging algorithm that enables computer assisted design of ultrasonic inspection scenarios, including the design of bespoke array configurations to match the inspection criteria. This in-house developed TFM code allows for interactive evaluation of image quality indicators of ultrasonic imaging performance when utilizing a 2D phased array working in FMC/TFM mode. The cueMAP software uses a series of TFM images to build a map of resolution, contrast and sensitivity of imaging performance of a simulated reflector, swept across the inspection volume. The software takes into account probe properties, wedge or water standoff, and effects of specimen curvature. In the validation process of this new software package, two 2D arrays have been evaluated on 304n stainless steel samples, typical of the primary circuit in nuclear plants. Thick-section samples have been inspected using a 1MHz 2D matrix array. Due to the processing efficiency of the software, the data collected from these array configurations has been used to investigate the influence of sub-aperture operation on inspection performance.
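
    A Total Focusing Method image is formed by summing, for every pixel, the full-matrix-capture signal of each transmit-receive pair at the delay given by the transmit and receive travel times to that pixel. The sketch below implements that delay-and-sum idea on synthetic data under simplifying assumptions (single sound speed, linear array, no interpolation or apodization); it is not the cueMAP implementation.

```python
"""Minimal Total Focusing Method (delay-and-sum) sketch on synthetic full
matrix capture data: one sound speed, a linear array, no windowing or
interpolation. Illustrative only, not the cueMAP implementation."""

import numpy as np

c = 5900.0                               # longitudinal sound speed in steel, m/s
fs = 50e6                                # sample rate, Hz
elems = np.linspace(-0.015, 0.015, 16)   # 16-element linear array positions, m

def tfm_image(fmc, xs, zs):
    """fmc[tx, rx, t] -> image over grid xs (lateral) x zs (depth)."""
    img = np.zeros((len(zs), len(xs)))
    n_t = fmc.shape[-1]
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # One-way travel time from each element to the pixel.
            t_el = np.sqrt((elems - x) ** 2 + z ** 2) / c
            for tx in range(len(elems)):
                idx = np.round((t_el[tx] + t_el) * fs).astype(int)
                valid = idx < n_t
                img[iz, ix] += fmc[tx, valid, idx[valid]].sum()
    return np.abs(img)

# Synthetic FMC data: a single point reflector at (x=0, z=20 mm).
n_samples = 2048
fmc = np.zeros((16, 16, n_samples))
t_ref = np.sqrt(elems ** 2 + 0.02 ** 2) / c
for tx in range(16):
    idx = np.round((t_ref[tx] + t_ref) * fs).astype(int)
    fmc[tx, np.arange(16), idx] = 1.0

xs = np.linspace(-0.01, 0.01, 41)
zs = np.linspace(0.01, 0.03, 41)
image = tfm_image(fmc, xs, zs)
peak = np.unravel_index(np.argmax(image), image.shape)
print("peak at x = %.1f mm, z = %.1f mm" % (xs[peak[1]] * 1e3, zs[peak[0]] * 1e3))
```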

  1. A Validation Framework for the Long Term Preservation of High Energy Physics Data

    NASA Astrophysics Data System (ADS)

    Ozerov, Dmitri; South, David M.

    2014-06-01

    The study group on data preservation in high energy physics, DPHEP, is moving to a new collaboration structure, which will focus on the implementation of preservation projects, such as those described in the group's large-scale report published in 2012. One such project is the development of a validation framework, which checks the compatibility of evolving computing environments and technologies with the experiments' software for as long as possible, with the aim of substantially extending the lifetime of the analysis software, and hence of the usability of the data. The framework is designed to automatically test and validate the software and data of an experiment against changes and upgrades to the computing environment, as well as changes to the experiment software itself. Technically, this is realised using a framework capable of hosting a number of virtual machine images, built with different configurations of operating systems and the relevant software, including any necessary external dependencies.

  2. Design and reliability analysis of DP-3 dynamic positioning control architecture

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru

    2011-12-01

    As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single dynamic positioning controller. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot standby configuration including three identical operator stations and three real-time control computers which connect to each other through dual networks. The motion control and redundancy management functions of the control computers were implemented by software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
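
    A central element of such a triple-redundant architecture is majority voting on the outputs of the three control computers. The sketch below shows a generic 2-out-of-3 vote with a tolerance band and flagging of a disagreeing channel; the tolerance and the voting policy are illustrative assumptions, not the DP-3 design.

```python
"""Minimal 2-out-of-3 majority voting sketch for triple-redundant controller
outputs: channels agreeing within a tolerance form the majority, the odd one
out is flagged. Illustrative only, not the DP-3 implementation."""

TOLERANCE = 0.05   # hypothetical agreement tolerance on the commanded output

def vote(a, b, c):
    pairs = {("A", "B"): abs(a - b), ("A", "C"): abs(a - c), ("B", "C"): abs(b - c)}
    agreeing = [pair for pair, diff in pairs.items() if diff <= TOLERANCE]
    if len(agreeing) == 3:
        return (a + b + c) / 3.0, None            # all channels healthy
    if agreeing:
        names = set(agreeing[0])
        # With exactly one agreeing pair, the remaining channel is suspect.
        faulty = ({"A", "B", "C"} - names).pop() if len(agreeing) == 1 else None
        values = {"A": a, "B": b, "C": c}
        voted = sum(values[n] for n in names) / len(names)
        return voted, faulty
    return None, "no majority"                    # escalate: channels disagree

if __name__ == "__main__":
    print(vote(1.00, 1.01, 1.00))    # healthy
    print(vote(1.00, 1.01, 1.40))    # channel C flagged
```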

  3. SIRU development. Volume 3: Software description and program documentation

    NASA Technical Reports Server (NTRS)

    Oehrle, J.

    1973-01-01

    The development and initial evaluation of a strapdown inertial reference unit (SIRU) system are discussed. The SIRU configuration is a modular inertial subsystem with hardware and software features that achieve fault tolerant operational capabilities. The SIRU redundant hardware design is formulated about a six gyro and six accelerometer instrument module package. The six-axis array provides redundant independent sensing, and the symmetry enables the formulation of an optimal software redundant data processing structure with self-contained fault detection and isolation (FDI) capabilities. The basic SIRU software coding system used in the DDP-516 computer is documented.
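
    A common way to exploit this kind of sensor redundancy is a least-squares estimate of the three-axis quantity from the six measurements, with the residual used for fault detection. The sketch below illustrates that generic idea; the measurement geometry, noise levels, and single-residual fault test are arbitrary assumptions and not the SIRU FDI algorithm.

```python
"""Sketch of redundant-sensor processing: least-squares estimate of the
three-axis rate from six single-axis measurements, plus a residual check for
fault detection. Generic illustration; the axes below are arbitrary and are
not the actual SIRU geometry or its FDI algorithm."""

import numpy as np

# Rows: unit vectors of the six sensing axes (illustrative skewed set).
H = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.577, 0.577, 0.577],
    [-0.577, 0.577, 0.577],
    [0.577, -0.577, 0.577],
])

omega_true = np.array([0.10, -0.05, 0.02])          # true body rate, rad/s
m = H @ omega_true + np.random.normal(0, 1e-4, 6)   # six noisy measurements
m[4] += 0.05                                         # inject a fault on sensor 5

omega_hat, *_ = np.linalg.lstsq(H, m, rcond=None)    # least-squares rate estimate
residual = m - H @ omega_hat                         # redundancy residual
suspect = int(np.argmax(np.abs(residual)))           # largest residual (simple case)

print("estimated rate:", np.round(omega_hat, 4))
print("largest residual on sensor", suspect + 1)
```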

  4. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1976-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program assembly control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools are described, as well as the program test plans and their implementation on the various simulators. Failure effects analysis and the creation of special failure generating software for testing purposes are described.

  5. Partitioning Strategy Using Static Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Seo, Yongjin; Soo Kim, Hyeon

    2016-08-01

    Flight software is the software used in satellites' on-board computers. It has requirements such as real-time operation and reliability, and the IMA architecture is used to satisfy them. The IMA architecture introduces the concept of partitions, and this affected the configuration of flight software: software that had previously been loaded on one system is now divided into many partitions when being loaded. To address this issue, existing studies have used experience-based partitioning methods; however, these methods cannot be reused. In this respect, this paper proposes a partitioning method that is reusable and consistent.

  6. Application research of Ganglia in Hadoop monitoring and management

    NASA Astrophysics Data System (ADS)

    Li, Gang; Ding, Jing; Zhou, Lixia; Yang, Yi; Liu, Lei; Wang, Xiaolei

    2017-03-01

    There are many applications of the Hadoop system in the fields of big data and cloud computing. The storage and application test bench of the seismic network at the Earthquake Administration of Tianjin uses a Hadoop system, which is operated and monitored with the open source software Ganglia. This paper reviews the functions, installation and configuration process, and application of the Ganglia system for operating and monitoring the Hadoop system. It also briefly introduces the idea and effect of monitoring the Hadoop system with the Nagios software. This is valuable to the industry for building monitoring systems for cloud computing platforms.
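
    Ganglia's gmond daemon publishes cluster metrics as an XML dump, by default on TCP port 8649. The sketch below polls that port and prints one metric per host; the host name is a placeholder and the default-port assumption should be checked against the actual gmond configuration.

```python
"""Sketch of polling a Ganglia gmond daemon for cluster metrics. Assumes a
default setup where gmond serves its XML dump on TCP port 8649; the host
name below is a placeholder."""

import socket
import xml.etree.ElementTree as ET

GMOND_HOST = "hadoop-master.example.org"   # placeholder host name
GMOND_PORT = 8649                          # gmond default tcp_accept_channel port

def fetch_gmond_xml(host, port, timeout=5.0):
    # gmond sends its full XML dump and closes the connection.
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def print_load(xml_bytes):
    root = ET.fromstring(xml_bytes)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == "load_one":
                print(host.get("NAME"), "load_one =", metric.get("VAL"))

if __name__ == "__main__":
    print_load(fetch_gmond_xml(GMOND_HOST, GMOND_PORT))
```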

  7. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    NASA Astrophysics Data System (ADS)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  8. Software for Testing Electroactive Structural Components

    NASA Technical Reports Server (NTRS)

    Moses, Robert W.; Fox, Robert L.; Dimery, Archie D.; Bryant, Robert G.; Shams, Qamar

    2003-01-01

    A computer program generates a graphical user interface that, in combination with its other features, facilitates the acquisition and preprocessing of experimental data on the strain response, hysteresis, and power consumption of a multilayer composite-material structural component containing one or more built-in sensor(s) and/or actuator(s) based on piezoelectric materials. This program runs in conjunction with LabVIEW software in a computer-controlled instrumentation system. For a test, a specimen is instrumented with applied-voltage and current sensors and with strain gauges. Once the computational connection to the test setup has been made via the LabVIEW software, this program causes the test instrumentation to step through specified configurations. If the user is satisfied with the test results as displayed by the software, the user activates an icon on a front-panel display, causing the raw current, voltage, and strain data to be digitized and saved. The data are also put into a spreadsheet and can be plotted on a graph. Graphical displays are saved in an image file for future reference. The program also computes and displays the power and the phase angle between voltage and current.
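
    The power and phase-angle computation mentioned above can be illustrated on synthetic voltage and current waveforms: average power is the mean of v·i, and the phase angle can be read from the fundamental FFT bins of the two channels. The numbers below are synthetic and the method is a generic illustration, not the program's actual processing.

```python
"""Sketch of computing average power and voltage-current phase angle from
digitized waveforms, using synthetic sinusoids. Illustrative only; the real
data would come from the applied-voltage and current sensors."""

import numpy as np

fs = 10_000.0                 # sample rate, Hz
f0 = 50.0                     # drive frequency, Hz
t = np.arange(2000) / fs      # 0.2 s of data (an integer number of cycles)

v = 10.0 * np.sin(2 * np.pi * f0 * t)                     # volts
i = 0.02 * np.sin(2 * np.pi * f0 * t - np.deg2rad(30))    # amps, lagging by 30 deg

p_avg = np.mean(v * i)                                    # average (real) power

# Phase of the fundamental for each channel from the FFT bin at f0.
k = int(round(f0 * len(t) / fs))
phase = np.angle(np.fft.rfft(v)[k]) - np.angle(np.fft.rfft(i)[k])
phase_deg = np.rad2deg((phase + np.pi) % (2 * np.pi) - np.pi)

print(f"average power: {p_avg*1e3:.2f} mW, phase angle: {phase_deg:.1f} deg")
```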

  9. How to Create, Modify, and Interface Aspen In-House and User Databanks for System Configuration 1:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camp, D W

    2000-10-27

    The goal of this document is to provide detailed instructions to create, modify, interface, and test Aspen User and In-House databanks with minimal frustration. The instructions are aimed at a novice Aspen Plus simulation user who is neither a programming nor computer-system expert. The instructions are tailored to Version 10.1 of Aspen Plus and the specific computing configuration summarized in the title of this document and detailed in Section 2. Many details of setting up databanks depend on the specifics of the computing environment, such as the machines, operating systems, command languages, directory structures, inter-computer communications software, the version of the Aspen Engine and Graphical User Interface (GUI), and the directory structure of how these were installed.

  10. Advanced Communications Technology Satellite high burst rate link evaluation terminal experiment control and monitor software maintenance manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.

  11. The Computational Infrastructure for Geodynamics as a Community of Practice

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2016-12-01

    Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual researchers or small groups to develop scientifically sound software are impossible to sustain, duplicate effort, and make it difficult for scientists to adopt state-of-the-art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and improve their practices, formally through webinar series, workshops, and tutorials, and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.

  12. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  13. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows the installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).
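
    As a conceptual sketch of the "chain of tasks" idea described above (not the WorDeL language or the BIGEARTH implementation), the snippet below runs a sequence of steps where each step wraps a different external tool; the tool commands are placeholders echoed to the console.

      import subprocess

      # Each task names a step and wraps an external command-line tool.
      # The commands are placeholders, not actual SNAP or GRASS GIS invocations.
      workflow = [
          {"name": "resample", "cmd": ["echo", "snap-resample", "{inp}", "{out}"]},
          {"name": "ndvi",     "cmd": ["echo", "grass-ndvi", "{inp}", "{out}"]},
          {"name": "classify", "cmd": ["echo", "classify", "{inp}", "{out}"]},
      ]

      def run_chain(tasks, first_input):
          """Run tasks in order, feeding each task's output file to the next one."""
          current = first_input
          for step, task in enumerate(tasks):
              output = f"step{step}_{task['name']}.tif"
              cmd = [arg.format(inp=current, out=output) for arg in task["cmd"]]
              subprocess.run(cmd, check=True)   # each step may use a different tool
              current = output
          return current

      print(run_chain(workflow, "S2_product.zip"))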

  14. A learning apprentice for software parts composition

    NASA Technical Reports Server (NTRS)

    Allen, Bradley P.; Holtzman, Peter L.

    1987-01-01

    An overview of the knowledge acquisition component of the Bauhaus, a prototype computer aided software engineering (CASE) workstation for the development of domain-specific automatic programming systems (D-SAPS) is given. D-SAPS use domain knowledge in the refinement of a description of an application program into a compilable implementation. The approach to the construction of D-SAPS was to automate the process of refining a description of a program, expressed in an object-oriented domain language, into a configuration of software parts that implement the behavior of the domain objects.

  15. Development of Labview based data acquisition and multichannel analyzer software for radioactive particle tracking system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd

    2015-04-29

    DAQ (data acquisition) software called RPTv2.0 has been developed for the Radioactive Particle Tracking System at the Malaysian Nuclear Agency. RPTv2.0 features a scanning control GUI, data acquisition from a 12-channel counter via an RS-232 interface, and a multichannel analyzer (MCA). The software is fully developed on the National Instruments LabVIEW 8.6 platform. A Ludlum Model 4612 counter is used to count the signals from the scintillation detectors, while a host computer is used to send control parameters, acquire and display data, and compute results. Each detector channel has independent high-voltage control, threshold or sensitivity, and window settings. The counter is configured with a host board and twelve slave boards. The host board collects the counts from each slave board and communicates with the computer via the RS-232 data interface.
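
    The sketch below illustrates the general pattern of polling a multi-channel counter over RS-232 from a host computer. It is a Python/pySerial illustration only: the actual RPTv2.0 software is written in LabVIEW, and the command and reply strings shown are hypothetical placeholders, not the Ludlum Model 4612 protocol.

      import serial  # pySerial

      def read_counts(port="/dev/ttyUSB0", channels=12, timeout_s=1.0):
          """Poll each counter channel once and return the counts as a list."""
          with serial.Serial(port, baudrate=9600, timeout=timeout_s) as link:
              counts = []
              for ch in range(1, channels + 1):
                  link.write(f"READ {ch}\r".encode())   # hypothetical query string
                  reply = link.readline().decode(errors="ignore").strip()
                  counts.append(int(reply) if reply.isdigit() else 0)
          return counts

      if __name__ == "__main__":
          print(read_counts())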

  16. Processing EOS MLS Level-2 Data

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van; Wu, Dong; Read, William; Jiang, Jonathan; Wagner, Paul; Livesey, Nathaniel; Schwartz, Michael; Filipiak, Mark; Pumphrey, Hugh; Shippony, Zvi

    2006-01-01

    A computer program performs level-2 processing of thermal-microwave-radiance data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS). The purpose of the processing is to estimate the composition and temperature of the atmosphere versus altitude from approximately 8 to 90 km. "Level-2" as used here is a specialist's term signifying both vertical profiles of geophysical parameters along the measurement track of the instrument and the processing performed by this or other software to generate such profiles. Designed to be flexible, the program is controlled via a configuration file that defines all aspects of processing, including the contents of state and measurement vectors, configurations of forward models, measurement and calibration data to be read, and the manner of inverting the models to obtain the desired estimates. The program can operate in a parallel form in which one instance of the program acts as a master, coordinating the work of multiple slave instances on a cluster of computers, each slave operating on a portion of the data. Optionally, the configuration file can instruct the software to produce files of simulated radiances based on state vectors formed from sets of geophysical data-product files taken as input.
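
    A minimal sketch of the master/worker arrangement described above, using Python's multiprocessing pool in place of the production scheduler; the chunking granularity and the stand-in retrieval function are illustrative assumptions, not the MLS code.

      from multiprocessing import Pool

      def process_chunk(chunk_id):
          """Stand-in for retrieving profiles from one portion of the measurement track."""
          return {"chunk": chunk_id, "status": f"profiles retrieved for chunk {chunk_id}"}

      def run_level2(num_chunks=8, workers=4):
          # The master hands out chunks of the data; each worker processes its
          # portion independently and the results are gathered for output.
          with Pool(processes=workers) as pool:
              return pool.map(process_chunk, range(num_chunks))

      if __name__ == "__main__":
          for result in run_level2():
              print(result)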

  17. Detection of endoscopic looping during colonoscopy procedure by using embedded bending sensors

    PubMed Central

    Bruce, Michael; Choi, JungHun

    2018-01-01

    Background Looping of the colonoscope shaft during the procedure is one of the most common obstacles encountered by colonoscopists. It occurs in 91% of cases, with the N-sigmoid loop being the most common, occurring in 79% of cases. Purpose Herein, a novel system is developed that gives a complete three-dimensional (3D) vector image of the shaft as it passes through the colon, to aid the colonoscopist in detecting loops before they form. Patients and methods A series of connected links spans the middle 50% of the shaft, where loops are likely to form. Two potentiometers are attached at each joint to measure angular deflection in two directions to allow for 3D positioning. This 3D positioning is converted into a 3D vector image using computer software. MATLAB software has been used to display the image on a computer monitor. For different configurations of the colon model, the system determined the looping status. Results Different configurations of the loops (N loop, reverse gamma loop, and reverse splenic flexure) were well defined using the 3D vector image. Conclusion The novel sensory system can accurately define the various configurations of the colon during the colonoscopy procedure. PMID:29849469
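
    A minimal sketch of how two angular deflections per joint can be accumulated into 3D joint positions for display; the link length, angle conventions, and use of Python/NumPy are illustrative assumptions (the published system used MATLAB for display).

      import numpy as np

      def rot_y(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

      def rot_z(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

      def joint_positions(bend_angles, link_length=0.05):
          """Accumulate per-joint (yaw, pitch) deflections into 3D coordinates.

          bend_angles: sequence of (yaw_rad, pitch_rad) pairs, one per joint.
          Returns an (N+1, 3) array of joint positions starting at the origin.
          """
          positions = [np.zeros(3)]
          orientation = np.eye(3)            # running orientation of the shaft
          for yaw, pitch in bend_angles:
              orientation = orientation @ rot_z(yaw) @ rot_y(pitch)
              step = orientation @ np.array([link_length, 0.0, 0.0])
              positions.append(positions[-1] + step)
          return np.array(positions)

      # Example: a steady bend that begins to trace a loop in the x-y plane.
      angles = [(np.radians(20), 0.0)] * 12
      print(joint_positions(angles).round(3))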

  18. SEPAC software configuration control plan and procedures, revision 1

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The SEPAC Software Configuration Control Plan and Procedures are presented. The objective of software configuration control is to establish the process for maintaining configuration control of the SEPAC software, beginning with the baselining of SEPAC Flight Software Version 1 and encompassing the integration and verification tests through Spacelab Level IV Integration. The procedures are designed to provide a simplified but complete configuration control process. The intent is to require a minimum amount of paperwork while providing total traceability of the SEPAC software.

  19. Computer Program Development Specification for IDAMST Operational Flight Programs. Addendum 1. Executive Software.

    DTIC Science & Technology

    1976-11-01

    system. b. Read different program configurations to reconfigure the software during flight. c. Write Digital Integrated Test System (DITS) results... associated with a Minor Cycle Event must be Unlatched. The sole difference between a Latched and an Unlatched Condition is that upon the Scheduling... Table. Furthermore, the block of pointers for one Minor Cycle may be wholly contained within the block of pointers for a different Minor Cycle. For

  20. Design requirements for SRB production control system. Volume 1: Study background and overview

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The solid rocket booster (SRB) assembly environment is described in terms of the constraints it places upon an automated production control system. The business system generated for the SRB assembly and the computer system which meets the business system requirements are described. The software selection process and the modifications required to the recommended software are addressed, as well as the hardware and configuration requirements necessary to support the system.

  1. (Quickly) Testing the Tester via Path Coverage

    NASA Technical Reports Server (NTRS)

    Groce, Alex

    2009-01-01

    The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.

  2. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing complex mechanical systems, and in developing software for their control. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.

  3. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, D; Churby, A; Krieger, E

    2011-07-25

    The National Ignition Facility (NIF) is the world's largest laser composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State of the art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized then tracked along their entire lifecycle providing visibility to the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.

  4. Numerical systems on a minicomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Jr., Roy Leonard

    1973-02-01

    This thesis defines the concept of a numerical system for a minicomputer and provides a description of the software and computer system configuration necessary to implement such a system. A procedure for creating a numerical system from a FORTRAN program is developed and an example is presented.

  5. A qualitative analysis of bus simulator training on transit incidents : a case study in Florida. [Summary].

    DOT National Transportation Integrated Search

    2013-01-01

    The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...

  6. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  7. Software Defined Networking challenges and future direction: A case study of implementing SDN features on OpenStack private cloud

    NASA Astrophysics Data System (ADS)

    Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.

    2016-03-01

    Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases and applications. Network usage and demands are growing at a very fast rate, and to meet the current requirements there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because the decision making for switching or routing is distributed and collocated on the same devices. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration and network configuration. To mitigate these challenges, network operations require efficient, flexible, agile and scalable software defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate the network management related issues. A private cloud prototype test bed was set up to implement SDN on the OpenStack platform to test and evaluate network performance under various configurations.

  8. A test matrix sequencer for research test facility automation

    NASA Technical Reports Server (NTRS)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.

  9. Real-time autocorrelator for fluorescence correlation spectroscopy based on graphical-processor-unit architecture: method, implementation, and comparative studies

    NASA Astrophysics Data System (ADS)

    Laracuente, Nicholas; Grossman, Carl

    2013-03-01

    We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general purpose computing with inexpensive GPU hardware. These devices are better suited for emulating hardware autocorrelators than traditional CPU-based software applications because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
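
    A simplified CPU-only sketch of the multi-tau binning scheme mentioned above (not the GPU implementation): each level evaluates a fixed number of lags at the current bin width, then the trace is coarsened by averaging adjacent bins and the bin width doubles. The level count and points-per-bin values are illustrative.

      import numpy as np

      def multi_tau_autocorr(signal, points_per_bin=8, levels=6):
          """Normalized autocorrelation g(tau) on a quasi-logarithmic lag grid."""
          x = np.asarray(signal, dtype=float)
          lags, values = [], []
          bin_width = 1
          for level in range(levels):
              mean_sq = x.mean() ** 2
              start = 1 if level == 0 else points_per_bin // 2 + 1
              for k in range(start, points_per_bin + 1):
                  if k >= len(x) or mean_sq == 0:
                      break
                  lags.append(k * bin_width)
                  values.append(np.mean(x[:-k] * x[k:]) / mean_sq)
              # Coarsen: average adjacent pairs of bins; the bin width doubles.
              n = len(x) // 2 * 2
              x = x[:n].reshape(-1, 2).mean(axis=1)
              bin_width *= 2
          return np.array(lags), np.array(values)

      # Example on synthetic correlated data (an AR(1) process with an offset).
      rng = np.random.default_rng(0)
      noise = rng.normal(size=50_000)
      sig = np.empty_like(noise)
      sig[0] = noise[0]
      for i in range(1, len(noise)):
          sig[i] = 0.95 * sig[i - 1] + noise[i]
      sig += 10.0
      print(multi_tau_autocorr(sig)[1][:5])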

  10. The NASA computer aided design and test system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.; Juergensen, K.

    1973-01-01

    A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.

  11. Software design to calculate and simulate the mechanical response of electromechanical lifts

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Romero, E.

    2016-05-01

    Lift engineers and lift companies that are involved in the design process of new products or in the research and development of improved components demand a predictive tool for the response of the slender lift system before testing expensive prototypes. A method for solving the movement of any specified lift system by means of a computer program is presented. The mechanical response of the lift operating in a user-defined installation and configuration is derived for a given excitation and other configuration parameters of real electric motors and their control systems. A mechanical model with 6 degrees of freedom is used. The governing equations are integrated step by step with a Runge-Kutta algorithm on the MATLAB platform. Input data consist of the set-point speed for a standard trip and the control parameters of a number of controllers and lift drive machines. The computer program computes and plots very accurately the vertical displacement, velocity, instantaneous acceleration and jerk time histories of the car, counterweight, frame, passengers/loads and lift drive in a standard trip between any two floors of the desired installation. The resulting torque, rope tension and deviation of the velocity plot with respect to the set-point speed are shown. The software design is implemented in a demo release of the computer program called ElevaCAD. Furthermore, the program offers the possibility to select the configuration of the lift system and the performance parameters of each component. In addition to the overall system response, detailed information on transients, vibrations of the lift components, ride quality levels, modal analysis and frequency spectrum (FFT) is plotted.
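
    As a generic illustration of the kind of step-by-step Runge-Kutta integration described above, the sketch below advances a simple driven mass model with the classical fourth-order scheme; the single-degree-of-freedom model and its parameter values are assumptions for illustration, not the 6-degree-of-freedom ElevaCAD model.

      import numpy as np

      def rk4_step(f, t, y, dt):
          """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
          k1 = f(t, y)
          k2 = f(t + dt / 2, y + dt / 2 * k1)
          k3 = f(t + dt / 2, y + dt / 2 * k2)
          k4 = f(t + dt, y + dt * k3)
          return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      # Illustrative one-mass stand-in for the car, pulled through a stiff rope
      # toward a ramped set-point speed (assumed values, not ElevaCAD data).
      M, K, C = 1000.0, 2.0e5, 400.0          # kg, N/m, N*s/m

      def setpoint_speed(t):
          return min(1.0, t)                   # ramp to 1 m/s over the first second

      def rhs(t, y):
          pos, vel = y
          drive_pos = 0.5 * min(t, 1.0) ** 2 + max(0.0, t - 1.0)  # integral of ramp
          force = K * (drive_pos - pos) + C * (setpoint_speed(t) - vel)
          return np.array([vel, force / M])

      t, dt, y = 0.0, 1e-3, np.array([0.0, 0.0])
      while t < 2.0:
          y = rk4_step(rhs, t, y, dt)
          t += dt
      print(f"car position {y[0]:.3f} m, velocity {y[1]:.3f} m/s at t = 2 s")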

  12. Computer control of a scanning electron microscope for digital image processing of thermal-wave images

    NASA Technical Reports Server (NTRS)

    Gilbert, Percy; Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.

    1987-01-01

    Using a recently developed technology called thermal-wave microscopy, NASA Lewis Research Center has developed a computer controlled submicron thermal-wave microscope for the purpose of investigating III-V compound semiconductor devices and materials. This paper describes the system's design and configuration and discusses the hardware and software capabilities. Knowledge of the Concurrent 3200 series computers is needed for a complete understanding of the material presented. However, concepts and procedures are of general interest.

  13. Rapid Airplane Parametric Input Design (RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.

    2004-01-01

    An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.

  14. Evaluation of a CFD Method for Aerodynamic Database Development using the Hyper-X Stack Configuration

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert

    2004-01-01

    A computational fluid dynamic (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data is available as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.

  15. Managing computer-controlled operations

    NASA Technical Reports Server (NTRS)

    Plowden, J. B.

    1985-01-01

    A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.

  16. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  17. Development of digital interactive processing system for NOAA satellites AVHRR data

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Murthy, N. N.

    The paper discusses a digital image processing system for NOAA/AVHRR data, including land applications, configured around a VAX 11/750 host computer supported with an FPS 100 Array Processor, a Comtal graphic display and HP plotting devices. The system software includes a relational data base together with query and editing facilities; a man-machine interface using form, menu and prompt inputs, including validation of user entries for data type and range; and preprocessing software for data calibration, Sun-angle correction, geometric corrections for the Earth curvature effect and Earth rotation offsets, and Earth location of the AVHRR image. The implemented image enhancement techniques, such as grey-level stretching, histogram equalization and convolution, are discussed. The software implementation details for the computation of the vegetative index and normalized vegetative index using NOAA/AVHRR channels 1 and 2 data are presented together with output; the scientific background for such computations and the obtainability of similar indices from Landsat/MSS data are also included. The paper concludes by specifying the further software developments planned and the progress envisaged in the field of vegetation index studies.
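
    For reference, the sketch below applies the standard definitions of the vegetation index (channel difference) and normalized difference vegetation index to calibrated AVHRR channel 1 (visible red) and channel 2 (near-infrared) data; the array inputs and use of NumPy are assumptions for illustration.

      import numpy as np

      def vegetation_indices(ch1_red, ch2_nir):
          """Return (VI, NDVI): VI = NIR - RED, NDVI = (NIR - RED) / (NIR + RED)."""
          red = np.asarray(ch1_red, dtype=float)
          nir = np.asarray(ch2_nir, dtype=float)
          vi = nir - red
          denom = nir + red
          ndvi = np.where(denom != 0, vi / np.where(denom == 0, 1.0, denom), 0.0)
          return vi, ndvi

      # Tiny example: a vegetated pixel shows a high NDVI, bare soil a low one.
      red = np.array([0.05, 0.20])
      nir = np.array([0.45, 0.25])
      print(vegetation_indices(red, nir)[1])   # approx. [0.8, 0.111]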

  18. Experience with Ada on the F-18 High Alpha Research Vehicle Flight Test Program

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael

    1992-01-01

    Considerable experience was acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft was highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used Ada for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability as well as documentation requirements.

  19. Experience with Ada on the F-18 High Alpha Research Vehicle flight test program

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael

    1994-01-01

    Considerable experience has been acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft has been highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used Ada for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability as well as documentation requirements.

  20. Computer aided system engineering for space construction

    NASA Technical Reports Server (NTRS)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  1. "One-Stop Shopping" for Ocean Remote-Sensing and Model Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook

    2006-01-01

    OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an Ocean Model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as a MySQL database, a Java Web Server (Apache/Tomcat), the Live Access Server (LAS), interactive graphics with a Java Applet on the client side and MatLab/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D Ocean Model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable Web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by the graphic software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an Ocean Model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for

  2. General aviation design synthesis utilizing interactive computer graphics

    NASA Technical Reports Server (NTRS)

    Galloway, T. L.; Smith, M. R.

    1976-01-01

    Interactive computer graphics is a fast growing area of computer application, due to such factors as substantial cost reductions in hardware, general availability of software, and expanded data communication networks. In addition to allowing faster and more meaningful input/output, computer graphics permits the use of data in graphic form to carry out parametric studies for configuration selection and for assessing the impact of advanced technologies on general aviation designs. The incorporation of interactive computer graphics into a NASA developed general aviation synthesis program is described, and the potential uses of the synthesis program in preliminary design are demonstrated.

  3. CARMA: Software for continuous affect rating and media annotation

    PubMed Central

    Girard, Jeffrey M

    2017-01-01

    CARMA is a media annotation program that collects continuous ratings while displaying audio and video files. It is designed to be highly user-friendly and easily customizable. Based on Gottman and Levenson's affect rating dial, CARMA enables researchers and study participants to provide moment-by-moment ratings of multimedia files using a computer mouse or keyboard. The rating scale can be configured on a number of parameters including the labels for its upper and lower bounds, its numerical range, and its visual representation. Annotations can be displayed alongside the multimedia file and saved for easy import into statistical analysis software. CARMA provides a tool for researchers in affective computing, human-computer interaction, and the social sciences who need to capture the unfolding of subjective experience and observable behavior over time. PMID:29308198

  4. Shuttle avionics and the goal language including the impact of error detection and redundancy management

    NASA Technical Reports Server (NTRS)

    Flanders, J. H.; Helmers, C. T.; Stanten, S. F.

    1973-01-01

    The relationship is examined between the space shuttle onboard avionics and the ground test computer language GOAL when used in the onboard computers. The study is aimed at providing system analysis support to the feasibility analysis of a GOAL to HAL translator, where HAL is the language used to program the onboard computers for flight. The subject is dealt with in three aspects. First, the system configuration at checkout, the general checkout and launch sequences, and the inventory of subsystems are described. Secondly, the hierarchic organization of onboard software and different ways of introducing GOAL-derived software onboard are described. Also the flow of commands and test data during checkout is diagrammed. Finally, possible impact of error detection and redundancy management on the GOAL language is discussed.

  5. Predictive Software Cost Model Study. Volume II. Software Package Detailed Data.

    DTIC Science & Technology

    1980-06-01

    will not be limited to: a. ASN-91 NWDS Computer b. Armament System Control Unit (ASCU) c. AN/ASN-90 IMS 6. CONFIGURATION CONTROL. OFP/OTP... planned approach. 3. Detailed analysis and study; impacts on hardware, manuals, data, AGE, etc.; alternatives with pros and cons; cost estimates; ECP... WAIT UNTIL RESOURCE REQUEST FOR MAG TAPE HAS BEEN FULFILLED... RESOURCE MAG TAPE (SHORT FORM)... TEST MAG TAPE RESOURCE

  6. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared with the performance using benchmark software and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
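
    As a minimal illustration of the FLOPS metric discussed above, the sketch below times a dense matrix multiplication and reports the implied floating-point rate; it only illustrates the metric and is unrelated to the HOBBIES benchmarks.

      import time
      import numpy as np

      def measure_gflops(n=2048, repeats=3):
          """Estimate sustained GFLOPS from an n x n matrix multiplication.

          A dense matrix multiply performs roughly 2*n**3 floating-point operations.
          """
          a = np.random.rand(n, n)
          b = np.random.rand(n, n)
          best = float("inf")
          for _ in range(repeats):
              start = time.perf_counter()
              np.dot(a, b)
              best = min(best, time.perf_counter() - start)
          return 2.0 * n**3 / best / 1e9

      if __name__ == "__main__":
          print(f"~{measure_gflops():.1f} GFLOPS (dense matrix multiply)")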

  7. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  8. Super Cooled Large Droplet Analysis of Several Geometries Using LEWICE3D Version 3

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.

    2011-01-01

    Super Cooled Large Droplet (SLD) collection efficiency calculations were performed for several geometries using the LEWICE3D Version 3 software. The computations were performed using the NASA Glenn Research Center SLD splashing model which has been incorporated into the LEWICE3D Version 3 software. Comparisons to experiment were made where available. The geometries included two straight wings, a swept 64A008 wing tip, two high lift geometries, and the generic commercial transport DLR-F4 wing body configuration. In general the LEWICE3D Version 3 computations compared well with the 2D LEWICE 3.2.2 results and with experimental data where available.

  9. Computing Systems Configuration for Highly Integrated Guidance and Control Systems

    DTIC Science & Technology

    1988-06-01

    communication between the industrial partners involved in a project. This is made possible, among other things, by the adoption of a common working methodology, by... computed graph results to data processors for post-processing, or communicating with system I/O modules. The ESU PI-Bus interface logic includes extra... the extra constraint checking helps to find more problems at compile time), and it is especially well-suited for large software systems written by a

  10. Contributing opportunistic resources to the grid with HTCondor-CE-Bosco

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian

    2017-10-01

    The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller, WLCG Tier-3 sites or opportunistic clusters, it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to setup SSH public key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized, model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.

  11. Software Management for the NOνA Experiment

    NASA Astrophysics Data System (ADS)

    Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.

    2015-12-01

    The NOvA software (NOνASoft) is written in C++, and built on the Fermilab Computing Division's art framework that uses ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers and is used by more than 100 physicists from over 30 universities and laboratories in 3 continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on the code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We will also describe our recent work to use a CMake build system and Jenkins, the open source continuous integration system, for NOνASoft.

  12. Telescience Support Center Data System Software

    NASA Technical Reports Server (NTRS)

    Rahman, Hasan

    2010-01-01

    The Telescience Support Center (TSC) team has developed a database-driven, increment-specific Data Requirement Document (DRD) generation tool that automates much of the work required for generating and formatting the DRD. It creates a database to load the required changes to configure the TSC data system, thus eliminating a substantial amount of labor in database entry and formatting. The TSC database contains the TSC systems configuration, along with the experimental data, in which human physiological data must be de-commutated in real time. The data for each experiment also must be cataloged and archived for future retrieval. TSC software provides tools and resources for ground operation and data distribution to remote users consisting of PIs (principal investigators), bio-medical engineers, scientists, engineers, payload specialists, and computer scientists. Operations support is provided for computer systems access, detailed networking, and mathematical and computational problems of the International Space Station telemetry data. User training is provided for on-site staff and biomedical researchers and other remote personnel in the usage of the space-bound services via the Internet, which enables significant resource savings for the physical facility along with the time savings versus traveling to NASA sites. The software used in support of the TSC could easily be adapted to other Control Center applications. This would include not only other NASA payload monitoring facilities, but also other types of control activities, such as monitoring and control of the electric grid, chemical, or nuclear plant processes, air traffic control, and the like.

  13. Software Configuration Management Guidebook

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.

  14. Computational fluid dynamics evaluation of the cross-limb stent graft configuration for endovascular aneurysm repair.

    PubMed

    Shek, Tina L T; Tse, Leonard W; Nabovati, Aydin; Amon, Cristina H

    2012-12-01

    The technique of crossing the limbs of bifurcated modular stent grafts for endovascular aneurysm repair (EVAR) is often employed in the face of splayed aortic bifurcations to facilitate cannulation and prevent device kinking. However, little has been reported about the implications of cross-limb EVAR, especially in comparison to conventional EVAR. Previous computational fluid dynamics studies of conventional EVAR grafts have mostly utilized simplified planar stent graft geometries. We herein examined the differences between conventional and cross-limb EVAR by comparing their hemodynamic flow fields (i.e., in the "direct" and "cross" configurations, respectively). We also added a "planar" configuration, which is commonly found in the literature, to identify how well this configuration compares to out-of-plane stent graft configurations from a hemodynamic perspective. A representative patient's cross-limb stent graft geometry was segmented using computed tomography imaging in Mimics software. The cross-limb graft geometry was used to build its direct and planar counterparts in SolidWorks. Physiologic velocity and mass flow boundary conditions and blood properties were implemented for steady-state and pulsatile transient simulations in ANSYS CFX. Displacement forces, wall shear stress (WSS), and oscillatory shear index (OSI) were all comparable between the direct and cross configurations, whereas the planar geometry yielded very different predictions of hemodynamics compared to the out-of-plane stent graft configurations, particularly for displacement forces. This single-patient study suggests that the short-term hemodynamics involved in crossing the limbs is as safe as conventional EVAR. Higher helicity and improved WSS distribution of the cross-limb configuration suggest improved flow-related thrombosis resistance in the short term. However, there may be long-term fatigue implications to stent graft use in the cross configuration when compared to the direct configuration.
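
    For readers unfamiliar with the oscillatory shear index (OSI) reported above, it is commonly defined from the time-varying wall shear stress vector over a cycle of period T as follows (the study's exact convention is not reproduced in this record):

        \[
        \mathrm{OSI} \;=\; \frac{1}{2}\left(1 \;-\; \frac{\left|\int_{0}^{T} \vec{\tau}_w \, dt\right|}{\int_{0}^{T} \left|\vec{\tau}_w\right| \, dt}\right),
        \]

    so that OSI ranges from 0 for purely unidirectional shear to 0.5 for purely oscillatory shear.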

  15. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base comprises over 6 million lines of code organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., the builds of independent packages are parallelised. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems such as AFS or CernVM-FS. The use of parallelism, caching, and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
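
    The package-level parallelism described above amounts to scheduling independent package builds concurrently while respecting inter-package dependencies. The sketch below is not CMT; it is a generic illustration of that scheduling idea, building each package as soon as all of its dependencies have finished. The package names and per-package build command are purely hypothetical.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

        # Hypothetical dependency graph: package -> set of packages it depends on.
        DEPS = {
            "Core":     set(),
            "Event":    {"Core"},
            "Tracking": {"Core", "Event"},
            "Calo":     {"Core", "Event"},
            "Reco":     {"Tracking", "Calo"},
        }

        def build(pkg: str) -> str:
            # Placeholder for the real per-package build command.
            subprocess.run(["echo", f"building {pkg}"], check=True)
            return pkg

        def parallel_build(deps: dict, workers: int = 4) -> None:
            remaining = {p: set(d) for p, d in deps.items()}
            futures = {}
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while remaining or futures:
                    # Launch every package whose dependencies are all satisfied.
                    ready = [p for p, d in remaining.items() if not d]
                    for pkg in ready:
                        futures[pool.submit(build, pkg)] = pkg
                        del remaining[pkg]
                    # Wait for at least one build to finish, then unblock dependents.
                    done, _ = wait(futures, return_when=FIRST_COMPLETED)
                    for fut in done:
                        finished = futures.pop(fut)
                        fut.result()  # re-raise build failures
                        for d in remaining.values():
                            d.discard(finished)

        if __name__ == "__main__":
            parallel_build(DEPS)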

  16. Reusable Rack Interface Controller Common Software for Various Science Research Racks on the International Space Station

    NASA Technical Reports Server (NTRS)

    Lu, George C.

    2003-01-01

    The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on the International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide needed flexibility for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society of Motion Picture and Television Engineers)-170M video, and audio interfaces to payloads and the ISS. As a cost saving and software reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS-232, RS-422, and Ethernet buses, HRDL (High Rate Data Link), video switch functionality, telemetry processing, and executive software hosted on the RIC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) Significant reduction in development flow time; 2) Minimal rework and maintenance; 3) Improved reliability; and 4) Overall reduction in software life cycle cost. Due to the limited number of crew hours available on ISS for science research, operational efficiency is a critical customer concern. The current method of upgrading RIC software is a time-consuming process; thus, an improved methodology for uploading RIC software is currently under evaluation.

  17. Titan/Centaur D-1T TC-2, Helios A flight data report. [of space missions

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Background data of spacecraft launching and flight are presented. A system analysis of the space vehicles is included, specifically on: (1) electronic equipment, (2) hydraulic equipment, (3) telemetry, (4) propulsion systems, (5) software (computers), and (6) guidance. Spacecraft and launch vehicle configurations are shown and described.

  18. A Comparative Analysis Between the Navy Standard Workweek and the Work/Rest Patterns of Sailors Aboard U.S. Navy Cruisers

    DTIC Science & Technology

    2009-09-01

    CTM) personnel install, configure, diagnose, and repair state-of-the-art electronic, computer, and network hardware/software systems ashore and...participants consisted of a Culinary Specialist and a Fire Control Technician (not actively involved with Combat System operations).

  19. Toolpack mathematical software development environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osterweil, L.

    1982-07-21

    The purpose of this research project was to produce a well integrated set of tools for the support of numerical computation. The project entailed the specification, design and implementation of both a diversity of tools and an innovative tool integration mechanism. This large configuration of tightly integrated tools comprises an environment for numerical software development, and has been named Toolpack/IST (Integrated System of Tools). Following the creation of this environment in prototype form, the environment software was readied for widespread distribution by transitioning it to a development organization for systematization, documentation and distribution. It is expected that public release of Toolpack/IST will begin imminently and will provide a basis for evaluation of the innovative software approaches taken as well as a uniform set of development tools for the numerical software community.

  20. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OS X, and Linux, and in the Cloud. In addition to the downloadable BioNode images, we provide online tutorials that empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
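
    The "poor man's parallelization" mentioned above simply means launching many independent whole-program runs side by side and letting a scheduler keep the cores busy. The sketch below illustrates that idea on a single machine with Python's standard library; the command and input file names are placeholders, and on a real cluster or BioNode deployment a job scheduler would take the place of the process pool.

        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        # Placeholder inputs: one independent program run per input file.
        INPUTS = ["sample_01.fasta", "sample_02.fasta", "sample_03.fasta"]

        def run_job(input_file: str) -> int:
            # Each job is a whole external program run as a separate process.
            # "blastn" and its flags stand in for whatever legacy tool is being used.
            cmd = ["blastn", "-query", input_file, "-db", "nt",
                   "-out", input_file + ".hits"]
            return subprocess.run(cmd).returncode

        if __name__ == "__main__":
            # Run the independent jobs in parallel, as many at a time as there are cores.
            with ProcessPoolExecutor() as pool:
                for input_file, rc in zip(INPUTS, pool.map(run_job, INPUTS)):
                    print(f"{input_file}: exit code {rc}")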

  1. Linear programming computational experience with onyx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atrek, E.

    1994-12-31

    ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
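
    As background for the gradient projection approach mentioned above (the record does not give ONYX's specific variation), a typical iteration for a linear program min c^T x subject to Ax = b, x >= 0 moves along the negative gradient projected onto the null space of the active constraint matrix A_a:

        \[
        P = I - A_a^{\mathsf T}\left(A_a A_a^{\mathsf T}\right)^{-1} A_a, \qquad
        d = -P\,c, \qquad
        x_{k+1} = x_k + \alpha_k\, d,
        \]

    where the step length \alpha_k is chosen to keep x_{k+1} feasible and the active set is updated as bounds become binding.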

  2. Care 3, Phase 1, volume 1

    NASA Technical Reports Server (NTRS)

    Stiffler, J. J.; Bryant, L. A.; Guccione, L.

    1979-01-01

    A computer program to aid in assessing the reliability of fault-tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual-mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics and the subsequent isolation and recovery delay statistics.
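
    The record does not reproduce the expression itself, but a standard example of the kind of closed-form reliability formula used for redundant configurations is the k-out-of-n model with independent, identical elements of reliability R(t):

        \[
        R_{\mathrm{sys}}(t) \;=\; \sum_{i=k}^{n} \binom{n}{i}\, R(t)^{i}\,\bigl(1 - R(t)\bigr)^{\,n-i},
        \]

    which gives, for a triple-modular-redundant (2-out-of-3) configuration, R_sys = 3R^2 - 2R^3.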

  3. Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing

    DTIC Science & Technology

    2014-05-01

    Hat Enterprise Linux SaaS software as a service VM virtual machine vNUMA virtual non-uniform memory access WRF weather research and forecasting...previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments...against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while

  4. Computational Fluid Dynamics (CFD) investigation onto passenger car disk brake design

    NASA Astrophysics Data System (ADS)

    Munisamy, Kannan M.; Kanasan Moorthy, Shangkari K.

    2013-06-01

    The aim of this study is to investigate the flow and heat transfer in ventilated disc brakes using Computational Fluid Dynamics (CFD). A NACA-series blade is designed for the ventilated disc brake and its cooling characteristics are compared to the baseline design. The ventilated disc brakes are simulated using the commercial CFD software FLUENT, with a simulation configuration obtained from experimental data. The NACA-series blade design shows improvements in Nusselt number compared to the baseline design.
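
    For context, the Nusselt number used above as the cooling figure of merit is the ratio of convective to conductive heat transfer across the boundary layer:

        \[
        \mathrm{Nu} \;=\; \frac{h\,L}{k},
        \]

    where h is the convective heat transfer coefficient, L a characteristic length, and k the thermal conductivity of the fluid; a higher Nu at the rotor surface indicates more effective convective cooling.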

  5. IO Management Controller for Time and Space Partitioning Architectures

    NASA Astrophysics Data System (ADS)

    Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien

    2015-09-01

    The Integrated Modular Avionics (IMA) concept has been industrialized in the aeronautical domain to enable the independent qualification of different application software from different suppliers on the same generic computer, this computer being a single terminal in a deterministic network. The concept makes it possible to distribute the different applications efficiently and transparently across the network and to size accurately the HW equipment to be embedded on the aircraft, through the configuration of the virtual computers and the virtual network. The concept has been studied for the space domain and requirements have been issued [D04],[D05]. Experiments in the space domain have been carried out at the computer level through ESA and CNES initiatives [D02] [D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as the CPU and memories, and studies on hardware/software interface standardization [D01], showed that for space domain technologies where I/O components (or IP) do not cover advanced features such as buffering, descriptors or virtualization, the CPU performance overhead is mainly due to shared interface management in the execution platform and to the high frequency of I/O accesses, the latter leading to a large number of context switches. This paper will present a solution to reduce this execution overhead with an open, modular and configurable controller.

  6. An Embedded Reconfigurable Logic Module

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.; Klenke, Robert H.; Shams, Qamar A. (Technical Monitor)

    2002-01-01

    A Miniature Embedded Reconfigurable Computer and Logic (MERCAL) module has been developed and verified. MERCAL was designed to be a general-purpose, universal module that can provide significant hardware and software resources to meet the requirements of many of today's complex embedded applications. This is accomplished in the MERCAL module by combining a sub-credit-card-size PC in a DIMM form factor with a Xilinx Spartan II FPGA. The PC has the ability to download program files to the FPGA to configure it for different hardware functions and to transfer data to and from the FPGA via the PC's ISA bus during run time. The MERCAL module combines, in a compact package, the computational power of a 133 MHz PC with up to 150,000 gate equivalents of digital logic that can be reconfigured by software. The general architecture and functionality of the MERCAL hardware and system software are described.

  7. Weaves as an Interconnection Fabric for ASIM's and Nanosatellites

    NASA Technical Reports Server (NTRS)

    Gorlick, Michael M.

    1995-01-01

    Many of the micromachines under consideration require computer support, indeed, one of the appeals of this technology is the ability to intermix mechanical, optical, analog, and digital devices on the same substrate. The amount of computer power is rarely an issue, the sticking point is the complexity of the software required to make effective use of these devices. Micromachines are the nano-technologist's equivalent of 'golden screws'. In other words, they will be piece parts in larger assemblages. For example, a nano-satellite may be composed of stacked silicon wafers where each wafer contains hundreds to thousands of micromachines, digital controllers, general purpose computers, memories, and high-speed bus interconnects. Comparatively few of these devices will be custom designed, most will be stock parts selected from libraries and catalogs. The novelty will lie in the interconnections. For example, a digital accelerometer may be a component part in an adaptive suspension, a monitoring element embedded in the wrapper of a package, or a portion of the smart skin of a launch vehicle. In each case, this device must inter-operate with other devices and probes for the purposes of command, control, and communication. We propose a software technology called 'weaves' that will permit large collections of micromachines and their attendant computers to freely intercommunicate while preserving modularity, transparency, and flexibility. Weaves are composed of networks of communicating software components. The network, and the components comprising it, may be changed even while the software, and the devices it controls, are executing. This unusual degree of software plasticity permits micromachines to dynamically adapt the software to changing conditions and allows system engineers to rapidly and inexpensively develop special purpose software by assembling stock software components in custom configurations.
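
    The central technical claim above is that a weave is a network of communicating software components whose wiring can change while the system runs. The sketch below is only a toy illustration of that property, not the weaves implementation: components exchange messages through named ports, and a connection can be re-routed at run time without stopping producers or consumers. All class, port, and message names are invented for the example.

        from collections import defaultdict
        from typing import Callable

        class Weave:
            """Toy message fabric: components publish to output ports, and
            port-to-port links can be added or removed while traffic is flowing."""
            def __init__(self) -> None:
                self.links = defaultdict(set)    # output port -> set of input ports
                self.handlers = {}               # input port -> callable

            def attach(self, in_port: str, handler: Callable[[object], None]) -> None:
                self.handlers[in_port] = handler

            def connect(self, out_port: str, in_port: str) -> None:
                self.links[out_port].add(in_port)

            def disconnect(self, out_port: str, in_port: str) -> None:
                self.links[out_port].discard(in_port)

            def publish(self, out_port: str, message: object) -> None:
                for in_port in self.links[out_port]:
                    self.handlers[in_port](message)

        if __name__ == "__main__":
            weave = Weave()
            weave.attach("controller.in", lambda m: print("controller got", m))
            weave.attach("logger.in", lambda m: print("logger got", m))

            weave.connect("accelerometer.out", "controller.in")
            weave.publish("accelerometer.out", {"accel_g": 0.02})

            # Re-route the same sensor to a different consumer without restarting anything.
            weave.disconnect("accelerometer.out", "controller.in")
            weave.connect("accelerometer.out", "logger.in")
            weave.publish("accelerometer.out", {"accel_g": 0.05})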

  8. QuBiLS-MIDAS: a parallel free-software for molecular descriptors computation based on multilinear algebraic maps.

    PubMed

    García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto

    2014-07-05

    The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on the two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms. Thus, it is unique software that computes these tensor-based indices. These descriptors establish relations for two, three, and four atoms by using several (dis-)similarity metrics or multimetrics, matrix transformations, cutoffs, local calculations and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of the chemical structures and the calculation of the atomic properties. The software is composed of a user-friendly desktop interface and an Application Programming Interface library. The former was created to simplify the configuration of the different options of the MDs, whereas the library was designed to allow its easy integration into other software for chemoinformatics applications. This program provides functionalities for data cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs through the use of all available processors in current computers. Complexity studies of the main algorithms demonstrate that they were efficiently implemented with respect to their trivial implementation. Lastly, the performance tests reveal that this software behaves well when the number of processors is increased. Therefore, the QuBiLS-MIDAS software constitutes a useful application for the computation of the molecular indices based on N-linear algebraic maps and can be used freely to perform chemoinformatics studies. Copyright © 2014 Wiley Periodicals, Inc.

  9. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinda, Peter August

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3), although a prototype connecting Palacios with the GEM5 architectural simulator was demonstrated, our conclusion was that such a platform was less useful for design space exploration than anticipated due to inherent complexity of the connection between the instruction set architecture level and the microarchitectural level. For effort (4), we found that a code injection approach proved to be more fruitful. The results of our efforts are publicly available in the open source Palacios codebase and published papers, all of which are available from the project web site, v3vee.org. Palacios is currently one of the two codebases (the other being Sandia’s Kitten lightweight kernel) that underlies the node operating system for the DOE Hobbes Project, one of two projects tasked with building a systems software prototype for the national exascale computing effort.

  10. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.

  11. Aircraft Design Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Successful commercialization of the AirCraft SYNThesis (ACSYNT) tool has resulted in the creation of Phoenix Integration, Inc. ACSYNT has been exclusively licensed to the company, an outcome of a seven year, $3 million effort to provide unique software technology to a focused design engineering market. Ames Research Center formulated ACSYNT and in working with the Virginia Polytechnic Institute CAD Laboratory, began to design and code a computer-aided design for ACSYNT. Using a Joint Sponsored Research Agreement, Ames formed an industry-government-university alliance to improve and foster research and development for the software. As a result of the ACSYNT Institute, the software is becoming a predominant tool for aircraft conceptual design. ACSYNT has been successfully applied to high-speed civil transport configuration, subsonic transports, and supersonic fighters.

  12. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  13. SU(2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved excellent performance, a speedup of 200× over one CPU in single precision, at around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
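
    As a reminder of the observable mentioned above, the mean plaquette for SU(2) is the lattice average of the trace of the ordered product of link variables around the elementary squares of the lattice; with a common normalization it reads (conventions vary between codes):

        \[
        \langle P \rangle \;=\; \frac{1}{N_p}\sum_{p} \tfrac{1}{2}\,\mathrm{Re}\,\mathrm{Tr}\,U_p,
        \qquad
        U_p \;=\; U_\mu(x)\,U_\nu(x+\hat{\mu})\,U_\mu^{\dagger}(x+\hat{\nu})\,U_\nu^{\dagger}(x),
        \]

    where the sum runs over all N_p plaquettes and the U_\mu(x) are the SU(2) link variables.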

  14. e-Stars Template Builder

    NASA Technical Reports Server (NTRS)

    Cox, Brian

    2003-01-01

    e-Stars Template Builder is a computer program that implements a concept of enabling users to rapidly gain access to information on projects of NASA's Jet Propulsion Laboratory. The information about a given project is not stored in a database but, rather, in a network that follows the project as it develops. e-Stars Template Builder resides on a server computer, using Practical Extraction and Reporting Language (PERL) scripts to create what are called "e-STARS node templates," which are software constructs that allow for project-specific configurations. The software resides on the server and does not require specific software on the user's machine other than an Internet browser. e-Stars Template Builder is compatible with Windows, Macintosh, and UNIX operating systems. A user invokes e-Stars Template Builder from a browser window. Operations that can be performed by the user include the creation of child processes and the addition of links and descriptions of documentation to existing pages or nodes. By means of this addition of "child processes" of nodes, a network that reflects the development of a project is generated.

  15. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    PubMed

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  16. Secure Sensor Platform Software Utilities v.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hymel, Ross

    The SSP Software package allows a user to connect to a CoCIM via a Personality Programmer and: reset the firmware of the CoCIM using the SSP Personality Programmer, where the changes that can be made include recovering from a tamper event and resetting the initialization date and message counter; and change configuration values of the CoCIM using the SSP Seal Configuration or the RMSA Configuration File Editor programs. The configuration values that can be set depend on the version of the CoCIM firmware being used, but can include: the IP address of the translator with which the CoCIM (or RMSA) communicates; the number of attempts the CoCIM (or RMSA) will make to contact the translator; the primary CoCIM (or RMSA) channel; the secondary CoCIM (or RMSA) channel; and the locations of files containing CoCIM (or RMSA) encryption keys. SSPSerialDataDumper downloads a CoCIM’s stored messages to a computer connected to the CoCIM via a serial cable; SSPLogAnalyzer decrypts and authenticates messages that have been downloaded using the Serial Data Dumper program and then displays the message values.

  17. Generic robot architecture

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
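
    The patent abstract above describes two layers: a hardware abstraction level that wraps concrete devices, and a robot abstraction level that builds behaviours from attributes while staying isolated from the hardware details. The sketch below is not the patented architecture, just a minimal illustration of that layering; the interfaces, attribute names, and drive logic are invented for the example.

        from abc import ABC, abstractmethod

        # --- Hardware abstraction level: wraps a concrete device behind a stable API.
        class RangeSensorAbstraction(ABC):
            @abstractmethod
            def distance_m(self) -> float: ...

        class ExampleLaserDriver(RangeSensorAbstraction):
            """Stand-in for a platform-specific driver."""
            def distance_m(self) -> float:
                return 1.7  # would read real hardware here

        # --- Robot abstraction level: attributes built from hardware abstractions.
        class ObstacleProximityAttribute:
            def __init__(self, sensor: RangeSensorAbstraction) -> None:
                self._sensor = sensor   # the only place hardware details enter

            def nearest_obstacle_m(self) -> float:
                return self._sensor.distance_m()

        # --- Behaviours use attributes only, never the hardware abstraction directly.
        class GuardedMotionBehavior:
            def __init__(self, proximity: ObstacleProximityAttribute) -> None:
                self._proximity = proximity

            def speed_command(self, requested: float) -> float:
                # Slow down as obstacles get closer, regardless of the sensor brand.
                return min(requested, 0.5 * self._proximity.nearest_obstacle_m())

        if __name__ == "__main__":
            behavior = GuardedMotionBehavior(
                ObstacleProximityAttribute(ExampleLaserDriver()))
            print("commanded speed:", behavior.speed_command(2.0), "m/s")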

  18. Utilization of Virtual Server Technology in Mission Operations

    NASA Technical Reports Server (NTRS)

    Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  19. Virtualization in the Operations Environments

    NASA Technical Reports Server (NTRS)

    Pitts, Lee; Lankford, Kim; Felton, Larry; Pruitt, Robert

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  20. A High-Availability, Distributed Hardware Control System Using Java

    NASA Technical Reports Server (NTRS)

    Niessner, Albert F.

    2011-01-01

    Two independent coronagraph experiments that require 24/7 availability, with different optical layouts and different motion control requirements, are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message format) a natural choice. XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third-party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments, with XML files defining the differences.

  1. IPv6 testing and deployment at Prague Tier 2

    NASA Astrophysics Data System (ADS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek; Fiala, Lukáš

    2012-12-01

    The Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, Atlas, Alice, Auger) and currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one class C-sized block of IPv4 addresses, and hence we had to move most of our worker nodes behind NAT. However, this solution demands a more complicated routing setup. We see IPv6 deployment as a solution that provides less routing and more switching, and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or via the telnet/ssh protocol provided by the box itself. All our services are monitored with several tools, e.g., Nagios, Munin and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to the IPv6 network stack. In this contribution we present the tests we have made, the limitations we have faced, and the configuration decisions we have made during IPv6 testing. We also present a testbed built on virtual machines that was used for all the testing and evaluation.
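
    A first step in the kind of service-by-service IPv6 testing described above is simply checking whether each service host resolves and accepts connections over IPv6 as well as IPv4. The sketch below is a generic check of that sort, not one of the site's actual tools; the host names and ports are placeholders.

        import socket

        # Placeholder (hostname, port) pairs for services to be checked.
        SERVICES = [("se.example-site.cz", 443), ("ce.example-site.cz", 9619)]

        def can_connect(host: str, port: int, family: int) -> bool:
            """Try to open a TCP connection restricted to one address family."""
            try:
                for af, socktype, proto, _, addr in socket.getaddrinfo(
                        host, port, family, socket.SOCK_STREAM):
                    with socket.socket(af, socktype, proto) as s:
                        s.settimeout(5)
                        s.connect(addr)
                        return True
            except OSError:
                pass
            return False

        if __name__ == "__main__":
            for host, port in SERVICES:
                v4 = can_connect(host, port, socket.AF_INET)
                v6 = can_connect(host, port, socket.AF_INET6)
                print(f"{host}:{port}  IPv4={'ok' if v4 else 'FAIL'}  "
                      f"IPv6={'ok' if v6 else 'FAIL'}")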

  2. Gas flow calculation method of a ramjet engine

    NASA Astrophysics Data System (ADS)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. To implement the calculation algorithm, a data storage scheme is proposed that does not depend on the mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
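
    For orientation, the basic first-order Godunov finite-volume update that such a scheme builds on can be written, in one dimension for a vector of conserved variables U with fluxes F, as:

        \[
        U_i^{\,n+1} \;=\; U_i^{\,n} \;-\; \frac{\Delta t}{\Delta x}\left(F_{i+1/2} - F_{i-1/2}\right),
        \]

    where the interface fluxes F_{i±1/2} are obtained from the (exact or approximate) solution of the Riemann problem between neighbouring cell averages; for a multidimensional cell with an arbitrary number of faces, the corresponding face fluxes are summed over all faces of the cell.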

  3. Automated Test for NASA CFS

    NASA Technical Reports Server (NTRS)

    McComas, David C.; Strege, Susanne L.; Carpenter, Paul B.; Hartman, Randy

    2015-01-01

    The core Flight System (cFS) is a flight software (FSW) product line developed by the Flight Software Systems Branch (FSSB) at NASA's Goddard Space Flight Center (GSFC). The cFS uses compile-time configuration parameters to implement variable requirements, enabling portability across embedded computing platforms and supporting different end-user functional needs. The verification and validation of these requirements is proving to be a significant challenge. This paper describes the challenges facing the cFS and the results of a pilot effort to apply EXB Solution's testing approach to the cFS applications.

  4. Software Tools for Emittance Measurement and Matching for 12 GeV CEBAF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dennis L.

    2016-05-01

    This paper discusses model-driven setup of the Continuous Electron Beam Accelerator Facility (CEBAF) for the 12 GeV era, focusing on qsUtility. qsUtility is a set of software tools created to perform emittance measurements, analyze those measurements, and compute optics corrections based upon the measurements. qsUtility was developed as a toolset to reduce machine configuration time and improve reproducibility by way of an accurate accelerator model, and to provide Operations staff with tools to measure and correct machine optics with little or no assistance from optics experts.

  5. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments

    NASA Astrophysics Data System (ADS)

    Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.

    Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  6. Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation

    NASA Astrophysics Data System (ADS)

    Kempe, René; Huang, Yu; Parra, Lucas C.

    2014-04-01

    Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
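
    The key computational trick in the record above is linearity: once the field produced by each small high-definition electrode has been solved for individually, the field of any pad montage can be approximated as a weighted sum of those precomputed solutions, so no new FEM run is needed per montage. The sketch below illustrates that superposition step with NumPy; the array shapes, electrode indices, and current values are placeholders, not data from the study.

        import numpy as np

        # Precomputed FEM solutions: electric field of each HD electrode at unit
        # current (n_electrodes x n_mesh_nodes x 3). Placeholder random data here.
        n_electrodes, n_nodes = 336, 10_000
        unit_fields = np.random.rand(n_electrodes, n_nodes, 3)

        def montage_field(currents: dict) -> np.ndarray:
            """Field of an arbitrary montage as a weighted sum of unit solutions.

            `currents` maps electrode index -> injected current in mA; a sponge pad
            is represented by spreading its total current over the electrodes it covers.
            """
            field = np.zeros((n_nodes, 3))
            for electrode, current_ma in currents.items():
                field += current_ma * unit_fields[electrode]
            return field

        if __name__ == "__main__":
            # Hypothetical 2 mA pad pair approximated by 20 electrodes under the
            # anode pad and 20 under the cathode pad (0.1 mA each, opposite signs).
            anode = {i: 0.1 for i in range(0, 20)}
            cathode = {i: -0.1 for i in range(100, 120)}
            E = montage_field({**anode, **cathode})
            intensity = np.linalg.norm(E, axis=1)
            print("peak field magnitude (arb. units):", intensity.max())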

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through the application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to a central database server are carried out over a secured Internet connection. The information stored in the central database server is shown on the web page, which users can view over the Internet. A dedicated and secured web and database server (https) is used to provide information security.

  8. A Study of the Relationship of Communication Technology Configurations in Virtual Research Environments and Effectiveness of Collaborative Research

    ERIC Educational Resources Information Center

    Ahmed, Iftekhar

    2009-01-01

    Virtual Research Environments (VRE) are electronic meeting places for interaction among scientists created by combining software tools and computer networking. Virtual teams are enjoying increased importance in the conduct of scientific research because of the rising cost of traditional scientific scholarly communication, the growing importance of…

  9. 77 FR 50727 - Configuration Management Plans for Digital Computer Software Used in Safety Systems of Nuclear...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... comments received on or before this date. Although a time limit is given, comments and suggestions in... guides are encouraged at any time. ADDRESSES: You may access information and comment submissions related... Research, U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001; telephone: 301-251-7494 or email...

  10. Use of software tools in the development of real time software systems

    NASA Technical Reports Server (NTRS)

    Garvey, R. C.

    1981-01-01

    The transformation of a preexisting software system into a larger and more versatile system with different mission requirements is discussed. The history of this transformation is used to illustrate the use of structured real-time programming techniques and tools to produce maintainable and somewhat transportable systems. The predecessor system is a single ground diagnostic system; its purpose is to exercise a computer-controlled hardware set prior to its deployment in its functional environment, as well as to test the equipment set by supplying certain well-known stimuli. The successor system (FTF) is required to perform certain testing and control functions while this hardware set is in its functional environment. Both systems must deal with heavy user input/output loads, and a new I/O requirement is included in the design of the FTF system. Human factors are enhanced by adding an improved console interface and a special function keyboard handler. The additional features require the addition of much new software to the original set from which FTF was developed. As a result, it is necessary to split the system into a dual programming configuration with high rates of interground communications. A generalized information routing mechanism is used to support this configuration.

  11. OASYS (OrAnge SYnchrotron Suite): an open-source graphical environment for x-ray virtual experiments

    NASA Astrophysics Data System (ADS)

    Rebuffi, Luca; Sanchez del Rio, Manuel

    2017-08-01

    The evolution of hardware platforms, the modernization of software tools, the access of a large number of young people to the codes, and the popularization of open source software for scientific applications drove us to design OASYS (OrAnge SYnchrotron Suite), a completely new graphical environment for modelling X-ray experiments. The implemented software architecture provides not only an intuitive and very easy-to-use graphical interface, but also high flexibility and rapidity for interactive simulations, making it possible to change configurations and quickly compare multiple beamline configurations. Its purpose is to integrate in a synergetic way the most powerful calculation engines available. OASYS integrates different simulation strategies via the implementation of adequate simulation tools for X-ray optics (e.g., ray tracing and wave optics packages). It provides a language that makes them communicate by sending and receiving encapsulated data. Python has been chosen as the main programming language because of its universality and popularity in scientific computing. The software Orange, developed at the University of Ljubljana (SLO), is the high-level workflow engine that provides the interaction with the user and the communication mechanisms.

  12. "Cloud" functions and templates of engineering calculations for nuclear power plants

    NASA Astrophysics Data System (ADS)

    Ochkov, V. F.; Orlov, K. A.; Ko, Chzho Ko

    2014-10-01

    The article deals with an important problem of setting up computer-aided design calculations of various circuit configurations and power equipment carried out using the templates and standard computer programs available in the Internet. Information about the developed Internet-based technology for carrying out such calculations using the templates accessible in the Mathcad Prime software package is given. The technology is considered taking as an example the solution of two problems relating to the field of nuclear power engineering.

  13. [Porting Radiotherapy Software of Varian to Cloud Platform].

    PubMed

    Zou, Lian; Zhang, Weisha; Liu, Xiangxiang; Xie, Zhao; Xie, Yaoqin

    2017-09-30

    To develop a low-cost private cloud platform for radiotherapy software. First, a private cloud platform based on OpenStack and virtual GPU hardware was built. Then, on the private cloud platform, all the Varian radiotherapy software modules were installed on the virtual machine and the corresponding function configuration was completed. Finally, the software on the cloud could be accessed through a virtual desktop client. The function test results of the cloud workstation show that a cloud workstation is equivalent to an isolated physical workstation, and any client on the LAN can use the cloud workstation smoothly. The cloud platform migration described in this study is economical and practical. The project not only improves the utilization rate of the radiotherapy software, but also makes it possible to extend cloud computing technology to the field of radiation oncology.

  14. Observation-Driven Configuration of Complex Software Systems

    NASA Astrophysics Data System (ADS)

    Sage, Aled

    2010-06-01

    The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
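
    The appeal of Taguchi-style Design of Experiments in this setting is that a small orthogonal array covers the configuration space far more cheaply than a full factorial sweep. The sketch below is only an illustration of that idea, not ACT: it uses the standard L4(2^3) orthogonal array to measure four configurations of three two-level parameters instead of all eight. The parameter names and the benchmark function are invented.

        import itertools
        import random

        # Three hypothetical two-level configuration parameters.
        PARAMS = ["cache_enabled", "async_writes", "large_buffers"]

        # Standard L4(2^3) orthogonal array: 4 runs instead of the 8 of a full factorial.
        L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

        def benchmark(config: dict) -> float:
            """Placeholder for a real measurement (e.g., mean response time in ms)."""
            base = 120.0
            base -= 30.0 * config["cache_enabled"]
            base -= 10.0 * config["async_writes"]
            base += 5.0 * config["large_buffers"]
            return base + random.uniform(-2, 2)   # measurement noise

        def run_design(design) -> None:
            results = []
            for row in design:
                config = dict(zip(PARAMS, row))
                results.append((benchmark(config), config))
            for time_ms, config in sorted(results, key=lambda r: r[0]):
                print(f"{time_ms:7.1f} ms  {config}")

        if __name__ == "__main__":
            print("L4 orthogonal array (4 runs):")
            run_design(L4)
            print("\nFull factorial (8 runs), for comparison:")
            run_design(itertools.product([0, 1], repeat=3))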

  15. Root Canal Morphology and Configuration of 118 Mandibular First Molars by Means of Micro-Computed Tomography: An Ex Vivo Study.

    PubMed

    Wolf, Thomas Gerhard; Paqué, Frank; Zeller, Maximilian; Willershausen, Brita; Briseño-Marroquín, Benjamín

    2016-04-01

    The aim of this study was to investigate the root canal system morphology of the mandibular first molar by means of micro-computed tomography. The root canal configuration, foramina, and accessory canals frequency of 118 mandibular first molars were investigated by means of micro-computed tomography and 3-dimensional software imaging. A 4-digit system describes the root canal configuration from the coronal to apical thirds and the main foramina number. The most frequent root canal configurations in mesial root were 2-2-2/2 (31.4%), 2-2-1/1 (15.3%), and 2-2-2/3 (11.9%); another 24 different root canal configurations were observed in this root. A 1-1-1/1 (58.5%), 1-1-1/2 (10.2%), and 16 other root canal configurations were observed in the distal root. The mesiobuccal root canal showed 1-4 foramina in 24.6%, and the mesiolingual showed 1-3 foramina in 28.0%. One connecting canal between the mesial root canals was observed in 30.5% and 2 in 3.4%. The distolingual root canal showed 1-4 foramina in 23.7%, whereas a foramen in the distobuccal root canal was rarely detected (3.4%). The mesiobuccal, mesiolingual, and distolingual root canals showed at least 1 accessory canal (14.3, 10.2, and 4.2%, respectively), but the distobuccal had none. The root canal configuration of mandibular first molars varies strongly. According to our expectations, both the mesial and distal roots showed a high number of morphologic diversifications. The root canal system of the mesial root showed more root canal configuration variations, connecting and accessory canals than the distal root. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  16. Wireless Sensor Networks Approach

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.

    2003-01-01

    This viewgraph presentation provides information on hardware and software configurations for a network architecture for sensors. The hardware configuration uses a central station and remote stations. The software configuration uses the 'lost station' software algorithm. The presentation profiles a couple of current examples of this network architecture in use.

  17. A Scheduling Algorithm for Computational Grids that Minimizes Centralized Processing in Genome Assembly of Next-Generation Sequencing Data

    PubMed Central

    Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes

    2012-01-01

    Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines. However, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance, due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes these peculiarities into account. To achieve this purpose, we developed an algorithm aimed at optimizing the operation of the de novo assembly software ABySS in grids. We ran ABySS with and without the developed algorithm in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, and that it reduced genome assembly time in computational grids without changing assembly quality. PMID:22461785
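
    The fragment below is a minimal Python sketch of the general idea of keeping assembly work local on heterogeneous grid nodes: reads are partitioned in proportion to each node's benchmarked speed so that only the merge step is centralized. It is an illustrative heuristic under assumed benchmark values, not the scheduling algorithm evaluated in the study.

        def partition_reads(num_reads, node_speeds):
            # Split a read set across heterogeneous grid nodes in proportion to
            # their benchmarked speed, so most assembly work stays local and only
            # the merge step is centralized (illustrative heuristic).
            total = sum(node_speeds.values())
            shares = {node: int(num_reads * speed / total)
                      for node, speed in node_speeds.items()}
            # Hand any rounding remainder to the fastest node.
            fastest = max(node_speeds, key=node_speeds.get)
            shares[fastest] += num_reads - sum(shares.values())
            return shares

        print(partition_reads(1_000_000, {"nodeA": 3.0, "nodeB": 1.5, "nodeC": 0.5}))
        # {'nodeA': 600000, 'nodeB': 300000, 'nodeC': 100000}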

  18. User's Manual for FOMOCO Utilities-Force and Moment Computation Tools for Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Buning, Pieter G.

    1996-01-01

    In the numerical computations of flows around complex configurations, accurate calculations of force and moment coefficients for aerodynamic surfaces are required. When overset grid methods are used, the surfaces on which force and moment coefficients are sought typically consist of a collection of overlapping surface grids. Direct integration of flow quantities on the overlapping grids would result in the overlapped regions being counted more than once. The FOMOCO Utilities is a software package for computing flow coefficients (force, moment, and mass flow rate) on a collection of overset surfaces with accurate accounting of the overlapped zones. FOMOCO Utilities can be used in stand-alone mode or in conjunction with the Chimera overset grid compressible Navier-Stokes flow solver OVERFLOW. The software package consists of two modules corresponding to a two-step procedure: (1) hybrid surface grid generation (MIXSUR module), and (2) flow quantities integration (OVERINT module). Instructions on how to use this software package are described in this user's manual. Equations used in the flow coefficients calculation are given in Appendix A.

  19. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  20. The economics of data acquisition computers for ST and MST radars

    NASA Technical Reports Server (NTRS)

    Watkins, B. J.

    1983-01-01

    Some low cost options for data acquisition computers for ST (stratosphere, troposphere) and MST (mesosphere, stratosphere, troposphere) radars are presented. The particular equipment discussed reflects choices made by the University of Alaska group, but of course many other options exist. The low cost microprocessor and array processor approach presented here has several advantages because of its modularity. An inexpensive system may be configured for a minimum performance ST radar, whereas a multiprocessor and/or a multiarray processor system may be used for a higher performance MST radar. This modularity is important for a network of radars because the initial cost is minimized while future upgrades will still be possible at minimal expense. This modularity also aids in lowering the cost of software development because system expansions should require few software changes. The functions of the radar computer will be to obtain Doppler spectra in near real time with some minor analysis such as vector wind determination.
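
    To make the final processing step concrete, the short Python sketch below estimates a Doppler power spectrum and peak radial velocity from complex (I/Q) samples of a single range gate. The radar parameters and the simulated echo are invented for the example and do not describe the University of Alaska hardware.

        import numpy as np

        def doppler_spectrum(iq, prf, wavelength):
            # Windowed periodogram of the coherent sample series for one range gate.
            n = len(iq)
            power = np.abs(np.fft.fftshift(np.fft.fft(iq * np.hanning(n)))) ** 2
            freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / prf))
            return freqs * wavelength / 2.0, power   # Doppler shift -> radial velocity

        prf, wavelength, n = 1000.0, 6.0, 200        # 1 kHz PRF, ~50 MHz radar (6 m wavelength)
        t = np.arange(n) / prf
        rng = np.random.default_rng(0)
        iq = np.exp(2j * np.pi * 10.0 * t)           # echo with a 10 Hz Doppler shift
        iq += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        velocities, power = doppler_spectrum(iq, prf, wavelength)
        print("peak radial velocity [m/s]:", velocities[np.argmax(power)])   # ~30 m/s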

  1. Remote Adaptive Motor Resistance Training Exercise Apparatus and Method of Use Thereof

    NASA Technical Reports Server (NTRS)

    Reich, Alton (Inventor); Shaw, James (Inventor)

    2017-01-01

    The invention comprises a method and/or an apparatus using a computer configured exercise system equipped with an electric motor to provide physical resistance to user motion in conjunction with means for sharing exercise system related data and/or user performance data with a secondary user, such as a medical professional, a physical therapist, a trainer, a computer generated competitor, and/or a human competitor. For example, the exercise system is used with a remote trainer to enhance exercise performance, with a remote medical professional for rehabilitation, and/or with a competitor in a competition, such as in a power/weightlifting competition or in a video game. The exercise system is optionally configured with an intelligent software assistant and knowledge navigator functioning as a personal assistant application.

  2. Remote Adaptive Motor Resistance Training Exercise Apparatus and Method of Use Thereof

    NASA Technical Reports Server (NTRS)

    Shaw, James (Inventor); Reich, Alton (Inventor)

    2016-01-01

    The invention comprises a method and/or an apparatus using a computer configured exercise system equipped with an electric motor to provide physical resistance to user motion in conjunction with means for sharing exercise system related data and/or user performance data with a secondary user, such as a medical professional, a physical therapist, a trainer, a computer generated competitor, and/or a human competitor. For example, the exercise system is used with a remote trainer to enhance exercise performance, with a remote medical professional for rehabilitation, and/or with a competitor in a competition, such as in a power/weightlifting competition or in a video game. The exercise system is optionally configured with an intelligent software assistant and knowledge navigator functioning as a personal assistant application.

  3. BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.

    PubMed

    Huang, Hailiang; Tata, Sandeep; Prill, Robert J

    2013-01-01

    Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp
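
    The sketch below illustrates, in serial Python on simulated data, the permutation idea that BlueSNP distributes across Hadoop workers: an empirical p-value for one SNP is the fraction of phenotype permutations whose association statistic is at least as extreme as the observed one. It is a conceptual illustration only, not BlueSNP's R implementation.

        import numpy as np

        def empirical_p(genotype, phenotype, n_perm=10_000, rng=None):
            # Empirical p-value for one SNP via phenotype permutation.
            if rng is None:
                rng = np.random.default_rng(0)
            def stat(y):
                # Absolute correlation as a simple association statistic.
                return abs(np.corrcoef(genotype, y)[0, 1])
            observed = stat(phenotype)
            hits = sum(stat(rng.permutation(phenotype)) >= observed
                       for _ in range(n_perm))
            return (hits + 1) / (n_perm + 1)   # add-one correction

        rng = np.random.default_rng(1)
        g = rng.integers(0, 3, size=500)           # simulated 0/1/2 allele counts
        y = 0.3 * g + rng.standard_normal(500)     # phenotype with a real association
        print(empirical_p(g, y, n_perm=2000))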

  4. Construction of a robust, large-scale, collaborative database for raw data in computational chemistry: the Collaborative Chemistry Database Tool (CCDBT).

    PubMed

    Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A

    2012-04-01

    A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
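
    As a loose illustration of the parsing step described above, a minimal Python parser might pull a final energy and method name out of a raw output file into a uniform metadata record ready for insertion into MySQL. The output-line format, field names, and table layout here are hypothetical and are not taken from CCDBT.

        import re

        # Hypothetical pattern for a line like "FINAL ENERGY: -76.026760 HARTREE (B3LYP)".
        ENERGY_RE = re.compile(r"FINAL ENERGY:\s*(-?\d+\.\d+)\s*HARTREE\s*\((\w+)\)")

        def parse_output(path):
            # Return a uniform metadata record for one raw-output file (illustrative).
            record = {"source_file": path, "method": None, "energy_hartree": None}
            with open(path) as fh:
                for line in fh:
                    match = ENERGY_RE.search(line)
                    if match:
                        record["energy_hartree"] = float(match.group(1))
                        record["method"] = match.group(2)
            return record

        # A loader would then run something like
        #   INSERT INTO calculations (source_file, method, energy_hartree) VALUES (%s, %s, %s)
        # through a MySQL client library, with one parser of this kind per data type.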

  5. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial, especially when the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, or in an Internet café). In this paper, we discuss how cloud computing provides a valid technological solution to enhance this scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  6. Leap Motion Gesture Control With Carestream Software in the Operating Room to Control Imaging: Installation Guide and Discussion.

    PubMed

    Pauchot, Julien; Di Tommaso, Laetitia; Lounis, Ahmed; Benassarou, Mourad; Mathieu, Pierre; Bernot, Dominique; Aubry, Sébastien

    2015-12-01

    Nowadays, routine viewing of cross-sectional imaging during a surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). Such contact risks compromising asepsis and causes loss of time. Devices such as the recently introduced Leap Motion (Leap Motion Society, San Francisco, CA), which enables interaction with the computer without any physical contact, are of wide interest in the field of surgery, but configuration and ergonomics are key challenges for the practitioner, imaging software, and surgical environment. This article aims to suggest an easy configuration of Leap Motion on a PC for optimized use with Carestream Vue PACS v11.3.4 (Carestream Health, Inc, Rochester, NY) using a plug-in (to download at https://drive.google.com/open?id=0B_F4eBeBQc3yNENvTXlnY09qS00&authuser=0) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Videos of the surgical procedure and a discussion of innovative gesture control technology and its various configurations are provided in this article. © The Author(s) 2015.

  7. Ruby on Rails Issue Tracker

    NASA Technical Reports Server (NTRS)

    Rodriguez, Juan Jared

    2014-01-01

    The purpose of this report is to detail the tasks accomplished as a NASA NIFS intern for the summer 2014 session. This internship opportunity is to develop an issue tracker Ruby on Rails web application to improve the communication of developmental anomalies between the Support Software Computer Software Configuration Item (CSCI) teams, System Build and Information Architecture. As many may know software development is an arduous, time consuming, collaborative effort. It involves nearly as much work designing, planning, collaborating, discussing, and resolving issues as effort expended in actual development. This internship opportunity was put in place to help alleviate the amount of time spent discussing issues such as bugs, missing tests, new requirements, and usability concerns that arise during development and throughout the life cycle of software applications once in production.

  8. A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2004-01-01

    This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point precision DFT algorithm by offloading the algorithm from the computing system. This computing system, within the context of this research, is a typical high performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification and then downloaded to the FPGA, which is then itself modified. This paper will discuss the methodology for developing the DFT algorithm to be implemented on the FPGA. We will discuss the algorithm, the FPGA code effort, and the results to date.
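
    For reference, the direct O(N^2) form of the DFT that such an FPGA design accelerates, X[k] = sum over n of x[n]·exp(-2πikn/N), can be written in a few lines of Python. This is only the textbook definition used as a baseline check, not the paper's FPGA implementation.

        import numpy as np

        def dft(x):
            # Direct O(N^2) discrete Fourier transform via the DFT matrix.
            x = np.asarray(x, dtype=complex)
            n = np.arange(len(x))
            W = np.exp(-2j * np.pi * np.outer(n, n) / len(x))
            return W @ x

        x = np.random.default_rng(0).standard_normal(64)
        print(np.allclose(dft(x), np.fft.fft(x)))    # True: agrees with the library FFT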

  9. Numerical evaluation of an innovative cup layout for open volumetric solar air receivers

    NASA Astrophysics Data System (ADS)

    Cagnoli, Mattia; Savoldi, Laura; Zanino, Roberto; Zaversky, Fritz

    2016-05-01

    This paper proposes an innovative volumetric solar absorber design to be used in high-temperature air receivers of solar power tower plants. The innovative absorber, a so-called CPC-stacked-plate configuration, applies the well-known principle of a compound parabolic concentrator (CPC) for the first time in a volumetric solar receiver, heating air to high temperatures. The proposed absorber configuration is analyzed numerically, applying first the open-source ray-tracing software Tonatiuh in order to obtain the solar flux distribution on the absorber's surfaces. Next, a Computational Fluid Dynamic (CFD) analysis of a representative single channel of the innovative receiver is performed, using the commercial CFD software ANSYS Fluent. The solution of the conjugate heat transfer problem shows that the behavior of the new absorber concept is promising, however further optimization of the geometry will be necessary in order to exceed the performance of the classical absorber designs.

  10. Unified Geophysical Cloud Platform (UGCP) for Seismic Monitoring and other Geophysical Applications.

    NASA Astrophysics Data System (ADS)

    Synytsky, R.; Starovoit, Y. O.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.

    2016-12-01

    We present the Unified Geophysical Cloud Platform (UGCP), or UniGeoCloud, as an innovative approach for geophysical data processing in the Cloud environment, with the ability to run any type of data processing software in an isolated environment within a single Cloud platform. We have developed a simple and quick method for installing several widely known open-source seismic software packages (SeisComp3, Earthworm, Geotool, MSNoise) that does not require knowledge of system administration, configuration, OS compatibility issues, or other often tedious details, and avoids time wasted on system configuration work. The installation process is reduced to a mouse click on the selected software package in the Cloud marketplace. The main objective of the developed capability was to provide software tools with which users can quickly design and install their own highly reliable and highly available virtual IT infrastructure for organizing seismic (and, in the future, other geophysical) data processing for either research or monitoring purposes. These tools provide access to any seismic station data available in an open IP configuration from the different networks affiliated with different institutions and organizations. They also allow users to set up their own network as desired, by selecting either regionally deployed stations or a worldwide global network based on station selection from the global map. The processing software, products, and research results can be easily monitored from anywhere using a variety of devices, from desktop computers to mobile gadgets. Current efforts of the development team are directed at achieving Scalability, Reliability and Sustainability (SRS) of the proposed solutions, allowing any user to run their applications with confidence of no data loss and no failure of the monitoring or research software components. The system is suitable for quick rollout of the NDC-in-Box software package developed for State Signatories and aimed at promoting the processing of data collected by the IMS Network.

  11. Computer-aided navigation in dental implantology: 7 years of clinical experience.

    PubMed

    Ewers, Rolf; Schicho, Kurt; Truppe, Michael; Seemann, Rudolf; Reichwein, Astrid; Figl, Michael; Wagner, Arne

    2004-03-01

    This long-term study gives a review over 7 years of research, development, and routine clinical application of computer-aided navigation technology in dental implantology. Benefits and disadvantages of up-to-date technologies are discussed. In the course of the current advancement, various hardware and software configurations are used. In the initial phase, universally applicable navigation software is adapted for implantology. Since 2001, a special software module for dental implantology is available. Preoperative planning is performed on the basis of prosthetic aspects and requirements. In clinical routine use, patient and drill positions are intraoperatively registered by means of optoelectronic tracking systems; during preclinical tests, electromagnetic trackers are also used. In 7 years (1995 to 2002), 55 patients with 327 dental implants were successfully positioned with computer-aided navigation technology. The mean number of implants per patient was 6 (minimum, 1; maximum, 11). No complications were observed; the preoperative planning could be exactly realized. The average expenditure of time for the preparation of a surgical intervention with navigation decreased from 2 to 3 days in the initial phase to one-half day in clinical routine use with software that is optimized for dental implantology. The use of computer-aided navigation technology can contribute to considerable quality improvement. Preoperative planning is exactly realized and intraoperative safety is increased, because damage to nerves or neighboring teeth can be avoided.

  12. BioSPICE: access to the most current computational tools for biologists.

    PubMed

    Garvey, Thomas D; Lincoln, Patrick; Pedersen, Charles John; Martin, David; Johnson, Mark

    2003-01-01

    The goal of the BioSPICE program is to create a framework that provides biologists access to the most current computational tools. At the program midpoint, the BioSPICE member community has produced a software system that comprises contributions from approximately 20 participating laboratories integrated under the BioSPICE Dashboard and a methodology for continued software integration. These contributed software modules are the BioSPICE Dashboard, a graphical environment that combines Open Agent Architecture and NetBeans software technologies in a coherent, biologist-friendly user interface. The current Dashboard permits data sources, models, simulation engines, and output displays provided by different investigators and running on different machines to work together across a distributed, heterogeneous network. Among several other features, the Dashboard enables users to create graphical workflows by configuring and connecting available BioSPICE components. Anticipated future enhancements to BioSPICE include a notebook capability that will permit researchers to browse and compile data to support model building, a biological model repository, and tools to support the development, control, and data reduction of wet-lab experiments. In addition to the BioSPICE software products, a project website supports information exchange and community building.

  13. Mobility analysis, simulation, and scale model testing for the design of wheeled planetary rovers

    NASA Technical Reports Server (NTRS)

    Lindemann, Randel A.; Eisen, Howard J.

    1993-01-01

    The use of computer based techniques to model and simulate wheeled rovers on rough natural terrains is considered. Physical models of a prototype vehicle can be used to test the correlation of the simulations in scaled testing. The computer approaches include a quasi-static planar or two dimensional analysis and design tool based on the traction necessary for the vehicle to have imminent mobility. The computer program modeled a six by six wheel drive vehicle of original kinematic configuration, called the Rocker Bogie. The Rocker Bogie was optimized using the quasi-static software with respect to its articulation parameters prior to fabrication of a prototype. In another approach used, the dynamics of the Rocker Bogie vehicle in 3-D space was modeled on an engineering workstation using commercial software. The model included the complex and nonlinear interaction of the tire and terrain. The results of the investigation yielded numerical and graphical results of the rover traversing rough terrain on the earth, moon, and Mars. In addition, animations of the rover excursions were also generated. A prototype vehicle was then used in a series of testbed and field experiments. Correspondence was then established between the computer models and the physical model. The results indicated the utility of the quasi-static tool for configurational design, as well as the predictive ability of the 3-D simulation to model the dynamic behavior of the vehicle over short traverses.

  14. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications for bottleneck problems solution using RC technologies. The issue of a science algorithm level of abstraction necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.

  15. Agent-Based Computational Modeling of Cell Culture: Understanding Dosimetry In Vitro as Part of In Vitro to In Vivo Extrapolation

    EPA Science Inventory

    Quantitative characterization of cellular dose in vitro is needed for alignment of doses in vitro and in vivo. We used the agent-based software, CompuCell3D (CC3D), to provide a stochastic description of cell growth in culture. The model was configured so that isolated cells assu...

  16. Mathematical models for space shuttle ground systems

    NASA Technical Reports Server (NTRS)

    Tory, E. G.

    1985-01-01

    Math models are a series of algorithms, comprised of algebraic equations and Boolean Logic. At Kennedy Space Center, math models for the Space Shuttle Systems are performed utilizing the Honeywell 66/80 digital computers, Modcomp II/45 Minicomputers and special purpose hardware simulators (MicroComputers). The Shuttle Ground Operations Simulator operating system provides the language formats, subroutines, queueing schemes, execution modes and support software to write, maintain and execute the models. The ground systems presented consist primarily of the Liquid Oxygen and Liquid Hydrogen Cryogenic Propellant Systems, as well as liquid oxygen External Tank Gaseous Oxygen Vent Hood/Arm and the Vehicle Assembly Building (VAB) High Bay Cells. The purpose of math modeling is to simulate the ground hardware systems and to provide an environment for testing in a benign mode. This capability allows the engineers to check out application software for loading and launching the vehicle, and to verify the Checkout, Control, & Monitor Subsystem within the Launch Processing System. It is also used to train operators and to predict system response and status in various configurations (normal operations, emergency and contingent operations), including untried configurations or those too dangerous to try under real conditions, i.e., failure modes.

  17. Software architecture of the III/FBI segment of the FBI's integrated automated identification system

    NASA Astrophysics Data System (ADS)

    Booker, Brian T.

    1997-02-01

    This paper will describe the software architecture of the Interstate Identification Index (III/FBI) Segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject search of the criminal and civil data, and response generation services for IAFIS. III/FBI software will be comprised of both COTS and an estimated 250,000 lines of developed C code. This paper will describe the following: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.

  18. A knowledge based software engineering environment testbed

    NASA Technical Reports Server (NTRS)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  19. SU(2) lattice gauge theory simulations on Fermi GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p

    2011-05-10

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (the G200 and Fermi architectures) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2,000 configurations with APE smearing. With two Fermi GPUs we achieve an excellent speedup of 200x over one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2x slower) than single precision computations.
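
    To make the measured observable concrete, the Python sketch below builds a random ('hot') SU(2) gauge configuration on a small periodic lattice and evaluates the mean plaquette, (1/2) Re Tr of the elementary Wilson loop. It illustrates the quantity on the CPU only and is not the CUDA heat-bath code described in the record.

        import numpy as np

        def random_su2(rng):
            # SU(2) element a0*I + i*(a.sigma) from a random unit 4-vector (a0, a1, a2, a3).
            a = rng.standard_normal(4)
            a /= np.linalg.norm(a)
            return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                             [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

        def mean_plaquette(U):
            # U[x][mu] is the SU(2) link leaving site x in direction mu (periodic lattice).
            dims = U.shape[:4]
            total, count = 0.0, 0
            for x in np.ndindex(*dims):
                for mu in range(4):
                    for nu in range(mu + 1, 4):
                        xmu = list(x); xmu[mu] = (xmu[mu] + 1) % dims[mu]
                        xnu = list(x); xnu[nu] = (xnu[nu] + 1) % dims[nu]
                        P = (U[x][mu] @ U[tuple(xmu)][nu]
                             @ U[tuple(xnu)][mu].conj().T @ U[x][nu].conj().T)
                        total += 0.5 * np.trace(P).real
                        count += 1
            return total / count

        rng = np.random.default_rng(0)
        L = 4
        U = np.empty((L, L, L, L, 4, 2, 2), dtype=complex)
        for x in np.ndindex(L, L, L, L):
            for mu in range(4):
                U[x][mu] = random_su2(rng)
        print(mean_plaquette(U))   # close to 0 for a random (strong-coupling) configuration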

  20. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing the Earth in the 21st century. Scientists build many models to simulate the past and predict climate change for the coming decades or century. Most of the models run at low resolution, with some targeting high resolution in support of practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA Climate@Home project effort to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine based on MapReduce is developed to dispatch models and model configurations and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow Twitter and Facebook to get the latest news about the project. This paper introduces the latest progress of the project and demonstrates the operational system during the AGU Fall Meeting. It also discusses how this technology can become a trailblazer for other climate studies and relevant sciences, and shares how the challenges in computation and software integration were solved.

  1. Exploratory research for the development of a computer aided software design environment with the software technology program

    NASA Technical Reports Server (NTRS)

    Hardwick, Charles

    1991-01-01

    Field studies were conducted by MCC to determine areas of research of mutual interest to MCC and JSC. NASA personnel from the Information Systems Directorate and research faculty from UHCL/RICIS visited MCC in Austin, Texas to examine tools and applications under development in the MCC Software Technology Program. MCC personnel presented workshops in hypermedia, design knowledge capture, and design recovery on site at JSC for ISD personnel. The following programs were installed on workstations in the Software Technology Lab, NASA/JSC: (1) GERM (Graphic Entity Relations Modeler); (2) gIBIS (Graphic Issues Based Information System); and (3) DESIRE (Design Recovery tool). These applications were made available to NASA for inspection and evaluation. Programs developed in the MCC Software Technology Program run on the SUN workstation. The programs do not require special configuration, but they will require larger than usual amounts of disk space and RAM to operate properly.

  2. Software design for analysis of multichannel intracardial and body surface electrocardiograms.

    PubMed

    Potse, Mark; Linnenbank, André C; Grimbergen, Cornelis A

    2002-11-01

    Analysis of multichannel ECG recordings (body surface maps (BSMs) and intracardial maps) requires special software. We created a software package and a user interface on top of a commercial data analysis package (MATLAB) by a combination of high-level and low-level programming. Our software was created to satisfy the needs of a diverse group of researchers. It can handle a large variety of recording configurations. It allows for interactive usage through a fast and robust user interface, and batch processing for the analysis of large amounts of data. The package is user-extensible, includes routines for both common and experimental data processing tasks, and works on several computer platforms. The source code is made intelligible using software for structured documentation and is available to the users. The package is currently used by more than ten research groups analysing ECG data worldwide.

  3. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1975-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools--the all-digital simulator, the hybrid simulator, and the Iron Bird simulator--are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS inflight software, or hardware, failures.

  4. Current trends in hardware and software for brain-computer interfaces (BCIs)

    NASA Astrophysics Data System (ADS)

    Brunner, P.; Bianchi, L.; Guger, C.; Cincotti, F.; Schalk, G.

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  5. Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.

    PubMed

    Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori

    2016-12-13

    The configurational entropy of solute molecules is a crucially important quantity to study various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of the configurational entropy upon temperature change. Notably, we focused on the choice of the coordinate systems (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all the six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and anharmonicity of proper torsions in proteins is identified to be the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.
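
    For orientation, the conventional quasi-harmonic estimate that the compared methods build on can be sketched as follows (this is the standard textbook form, not the paper's BQH expression): the mass-weighted covariance matrix sigma of the sampled coordinates is diagonalized, each eigenvalue lambda_i defines an effective mode frequency, and each mode is treated as a quantum harmonic oscillator.

        \[
          \omega_i = \sqrt{\frac{k_{\mathrm B}T}{\lambda_i}},
          \qquad
          \lambda_i \ \text{the eigenvalues of}\ M^{1/2}\,\sigma\,M^{1/2},
        \]
        \[
          S_{\mathrm{QH}} = k_{\mathrm B}\sum_i
          \left[
            \frac{\hbar\omega_i/(k_{\mathrm B}T)}{e^{\hbar\omega_i/(k_{\mathrm B}T)} - 1}
            - \ln\!\left(1 - e^{-\hbar\omega_i/(k_{\mathrm B}T)}\right)
          \right].
        \]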

  6. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  7. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  8. Architecture for the Next Generation System Management Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallard, Jerome; Lebre, I Adrien; Morin, Christine

    2011-01-01

    To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as Clusters, Grids and Clouds. These platforms are different in terms of hardware and software resources as well as locality: some span across multiple sites and multiple administrative domains whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up the scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists who, in most cases, would prefer to focus on their research rather than spending time dealing with platform configuration concerns. In this article, we advocate for a system management framework that aims to automatically set up the whole run-time environment according to the applications' needs. The main difference with regard to usual approaches is that they generally only focus on the software layer whereas we address both the hardware and the software expectations through a unique system. For each application, scientists describe their requirements through the definition of a Virtual Platform (VP) and a Virtual System Environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of: (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the setup of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of the configuration of the physical resources and system management tools. This formalism leverages Goldberg's theory for recursive virtual machines by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and the software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid 5000 toolkit, and XtreemOS) and that can be easily extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula or Eucalyptus, for instance.

  9. Unidata Cyberinfrastructure in the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Young, J. W.

    2016-12-01

    Data services, software, and user support are critical components of geosciences cyberinfrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has adopted Docker for "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to interactively connect data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.

  10. Software to Control and Monitor Gas Streams

    NASA Technical Reports Server (NTRS)

    Arkin, C.; Curley, Charles; Gore, Eric; Floyd, David; Lucas, Damion

    2012-01-01

    This software package interfaces with various gas stream devices such as pressure transducers, flow meters, flow controllers, valves, and analyzers such as a mass spectrometer. The software provides excellent user interfacing with various windows that provide time-domain graphs, valve state buttons, priority-colored messages, and warning icons. The user can configure the software to save as much or as little data as needed to a comma-delimited file. The software also includes an intuitive scripting language for automated processing. The configuration allows for the assignment of measured values or calibration so that raw signals can be viewed as usable pressures, flows, or concentrations in real time. The software is based on that used in two safety systems for shuttle processing and one volcanic gas analysis system. Mass analyzers typically have unique applications that vary from job to job. As such, software available on the market is usually inadequate or targeted at a specific application (such as EPA methods). The goal was to develop powerful software that could be used with prototype systems. The key problem was to generalize the software to be easily and quickly reconfigurable. At Kennedy Space Center (KSC), the prior art consisted of two primary methods. The first method was to utilize LabVIEW and a commercial data acquisition system. This method required rewriting code for each different application and only provided raw data. To obtain data in engineering units, manual calculations were required. The second method was to utilize one of the embedded computer systems developed for another system. This second method had the benefit of providing data in engineering units, but was limited in the number of control parameters.

  11. Considerations for Software Defined Networking (SDN): Approaches and use cases

    NASA Astrophysics Data System (ADS)

    Bakshi, K.

    Software Defined Networking (SDN) is an evolutionary approach to network design and functionality based on the ability to programmatically modify the behavior of network devices. SDN uses user-customizable and configurable software that's independent of hardware to enable networked systems to expand data flow control. SDN is in large part about understanding and managing a network as a unified abstraction. It will make networks more flexible, dynamic, and cost-efficient, while greatly simplifying operational complexity. And this advanced solution provides several benefits including network and service customizability, configurability, improved operations, and increased performance. There are several approaches to SDN and its practical implementation. Among them, two have risen to prominence with differences in pedigree and implementation. This paper's main focus will be to define, review, and evaluate salient approaches and use cases of the OpenFlow and Virtual Network Overlay approaches to SDN. OpenFlow is a communication protocol that gives access to the forwarding plane of a network's switches and routers. The Virtual Network Overlay relies on a completely virtualized network infrastructure and services to abstract the underlying physical network, which allows the overlay to be mobile to other physical networks. This is an important requirement for cloud computing, where applications and associated network services are migrated to cloud service providers and remote data centers on the fly as resource demands dictate. The paper will discuss how and where SDN can be applied and implemented, including research and academia, virtual multitenant data center, and cloud computing applications. Specific attention will be given to the cloud computing use case, where automated provisioning and programmable overlay for scalable multi-tenancy is leveraged via the SDN approach.

  12. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the Cinder Block Storage service in OpenStack and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured and converted to Puppet manifests describing node configurations and integrated into Packstack. This solution can be easily deployed, maintained, and used even by small groups with limited computing resources and small organizations, which often lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.

  13. Guidance and Control Software Project Data - Volume 4: Configuration Management and Quality Assurance Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.

  14. Development of a 32-bit UNIX-based ELAS workstation

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce A.; Pearson, Ronnie W.; Cheng, Thomas D.

    1987-01-01

    A mini/microcomputer UNIX-based image analysis workstation has been designed and is being implemented to use the Earth Resources Laboratory Applications Software (ELAS). The hardware system includes a MASSCOMP 5600 computer, which is a 32-bit UNIX-based system (compatible with the AT&T System V and Berkeley 4.2 BSD operating systems), a floating point accelerator, a 474-megabyte fixed disk, a tri-density magnetic tape drive, and an 1152 by 910 by 12-plane color graphics/image interface. The software conversion includes reconfiguring the ELAS driver Master Task, then recompiling and testing the converted application modules. This hardware and software configuration is a self-sufficient image analysis workstation which can be used as a stand-alone system or networked with other compatible workstations.

  15. SNL Mechanical Computer Aided Design (MCAD) guide 2007.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Brandon; Pollice, Stephanie L.; Martinez, Jack R.

    2007-12-01

    This document is a mechanical design best-practice guide for new and experienced designers alike. The contents consist of topics related to using Computer Aided Design (CAD) software, performing basic analyses, and using configuration management. The details specific to a particular topic have been leveraged against existing Product Realization Standard (PRS) and Technical Business Practice (TBP) requirements while maintaining alignment with sound engineering and design practices. This document is to be considered dynamic in that subsequent updates will be reflected in the main title, and each update will be published on an annual basis.

  16. Assessment of CFD capability for prediction of hypersonic shock interactions

    NASA Astrophysics Data System (ADS)

    Knight, Doyle; Longo, José; Drikakis, Dimitris; Gaitonde, Datta; Lani, Andrea; Nompelis, Ioannis; Reimann, Bodo; Walpot, Louis

    2012-01-01

    The aerothermodynamic loadings associated with shock wave boundary layer interactions (shock interactions) must be carefully considered in the design of hypersonic air vehicles. The capability of Computational Fluid Dynamics (CFD) software to accurately predict hypersonic shock wave laminar boundary layer interactions is examined. A series of independent computations performed by researchers in the US and Europe are presented for two generic configurations (double cone and cylinder) and compared with experimental data. The results illustrate the current capabilities and limitations of modern CFD methods for these flows.

  17. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
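
    A loose, hypothetical sketch of the source-cache/coefficient-table/target-cache idea described in the patent abstract (not the patented implementation): each target pixel is accumulated from source-cache pixels weighted by entries of a precomputed coefficient table, as in a simple back-projection or filtering step.

        from typing import Dict, List, Tuple

        def transform(source_cache: List[List[float]],
                      coeff_table: Dict[Tuple[int, int], List[Tuple[int, int, float]]],
                      rows: int, cols: int) -> List[List[float]]:
            """Fill a target cache: each target pixel is a weighted sum of source pixels.

            coeff_table maps a target pixel (r, c) to a list of (src_r, src_c, weight)
            entries, the way a precomputed coefficient table drives the transformation.
            """
            target_cache = [[0.0] * cols for _ in range(rows)]
            for (r, c), contributions in coeff_table.items():
                for src_r, src_c, weight in contributions:
                    target_cache[r][c] += weight * source_cache[src_r][src_c]
            return target_cache

        # Tiny 2x2 example with an illustrative coefficient table.
        source = [[1.0, 2.0], [3.0, 4.0]]
        coeffs = {(0, 0): [(0, 0, 0.5), (1, 1, 0.5)],   # average of two source pixels
                  (1, 1): [(0, 1, 1.0)]}                # straight copy of one pixel
        print(transform(source, coeffs, rows=2, cols=2))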

  18. Automated a complex computer aided design concept generated using macros programming

    NASA Astrophysics Data System (ADS)

    Rizal Ramly, Mohammad; Asrokin, Azharrudin; Abd Rahman, Safura; Zulkifly, Nurul Ain Md

    2013-12-01

    Changing a complex Computer Aided Design profile, such as car and aircraft surfaces, has always been difficult and challenging. The capability of CAD software such as AutoCAD and CATIA shows that a simple configuration of a CAD design can be easily modified without hassle, but this is not the case for complex design configurations. Design changes help users test and explore various configurations of the design concept before a model is produced. The purpose of this study is to look into macros programming as a parametric method for commercial aircraft design. Macros programming is a method in which the configuration of the design is changed by recording a script of commands, editing the data values, and adding new command lines to create an element of parametric design; a simplified illustration of this record-and-edit idea is sketched below. The steps and procedure for creating a macro program are discussed, along with some difficulties encountered during the process and the advantages of its usage. Generally, the advantages of macros programming as a method of parametric design are: flexibility for design exploration, increased usability of the design solution, control over which elements are contained in the model while restricting others, and real-time feedback on changes.
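
    The following sketch illustrates, in a simplified and hedged form, the "record, then edit the data values" idea described above; the recorded macro text and the parameter names (wing_span, sweep_angle) are hypothetical and are not taken from any actual CATIA or AutoCAD recording.

        import re

        # Hypothetical fragment of a recorded CAD macro (illustrative only).
        recorded_macro = """
        Set wing_span = 34.10
        Set sweep_angle = 25.0
        CreateSurface "wing_upper", wing_span, sweep_angle
        """

        def set_parameter(script: str, name: str, value: float) -> str:
            """Replace the numeric value assigned to `name` in the recorded script."""
            pattern = rf"(Set {name} = )[-+]?\d+(\.\d+)?"
            return re.sub(pattern, rf"\g<1>{value}", script)

        # Explore a different configuration by editing the recorded data values.
        variant = set_parameter(recorded_macro, "wing_span", 36.5)
        variant = set_parameter(variant, "sweep_angle", 27.5)
        print(variant)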

  19. An Overview of Modifications Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.

    2013-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a class of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and output data so as to provide a more useful and accurate tool for gust load analysis. Revisions are made in the categories of aircraft geometry, computation of aerodynamic forces and moments, and implementation of horizontal tail mode shapes. In order to improve the original software program and enhance its usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented, and an analysis of the results is used to validate the modifications.
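
    For context, a discrete one-minus-cosine gust of the kind mentioned above is commonly written as U(s) = (U_ds/2)[1 - cos(pi*s/H)] for gust penetration distance 0 <= s <= 2H. The short sketch below evaluates that standard shape for illustrative values of the design gust velocity U_ds and gust gradient H, which are not taken from the report.

        import math

        def one_minus_cosine_gust(s: float, u_ds: float, gradient_h: float) -> float:
            """Vertical gust velocity at penetration distance s (same length unit as H)."""
            if s < 0.0 or s > 2.0 * gradient_h:
                return 0.0  # outside the gust region
            return 0.5 * u_ds * (1.0 - math.cos(math.pi * s / gradient_h))

        # Illustrative values only: 15 m/s design gust velocity, 110 m gust gradient.
        u_ds, h = 15.0, 110.0
        for s in range(0, 221, 20):
            print(f"s = {s:5.1f} m   U = {one_minus_cosine_gust(s, u_ds, h):5.2f} m/s")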

  20. Advanced communications technology satellite high burst rate link evaluation terminal power control and rain fade software test plan, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters' terminal used to communicate with the ACTS for various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.
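
    As a hedged illustration of the kind of closed-loop uplink power control described above (not the actual C&PM algorithm), the sketch below raises or lowers a commanded uplink power to counteract a measured signal fade relative to a clear-sky reference, clamped to hypothetical transmitter limits.

        def adjust_uplink_power(current_dbm: float,
                                measured_snr_db: float,
                                clear_sky_snr_db: float,
                                gain: float = 0.5,
                                min_dbm: float = -10.0,
                                max_dbm: float = 20.0) -> float:
            """Return a new uplink power command that counteracts the observed fade.

            The fade estimate is the drop of the measured SNR below the clear-sky
            reference; a proportional step (scaled by `gain`) is applied and the
            result is clamped to illustrative transmitter limits.
            """
            fade_db = clear_sky_snr_db - measured_snr_db
            command = current_dbm + gain * fade_db
            return max(min_dbm, min(max_dbm, command))

        # Example: a 6 dB rain fade pushes the command up by 3 dB per control cycle.
        power = 5.0
        for snr in (20.0, 14.0, 14.0, 18.0):          # simulated SNR samples (dB)
            power = adjust_uplink_power(power, snr, clear_sky_snr_db=20.0)
            print(f"measured SNR {snr:4.1f} dB -> uplink power {power:5.2f} dBm")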

  1. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
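
    The data-driven idea described above can be illustrated with a minimal sketch: an automated sequence is expressed as rows of data (activity, transition condition, parameter updates) and executed by a generic engine, so changing the table changes behavior without recompiling the engine. The table contents and field names below are hypothetical and are not Orion GN&C data.

        from dataclasses import dataclass, field
        from typing import Callable, Dict

        @dataclass
        class SequenceStep:
            activity: str
            done: Callable[[Dict[str, float]], bool]      # transition condition on state
            param_updates: Dict[str, float] = field(default_factory=dict)

        # Hypothetical sequence data; editing this table reconfigures the automation.
        SEQUENCE = [
            SequenceStep("coast",     lambda s: s["time"] >= 10.0, {"guidance_mode": 1}),
            SequenceStep("burn",      lambda s: s["delta_v"] >= 3.0, {"throttle": 0.0}),
            SequenceStep("post_burn", lambda s: True),
        ]

        def run_sequence(state: Dict[str, float]) -> None:
            """Generic engine: steps through the data table, applying parameter updates."""
            for step in SEQUENCE:
                print(f"activity: {step.activity}")
                while not step.done(state):
                    state["time"] += 1.0          # stand-in for real dynamics/telemetry
                    state["delta_v"] += 0.5
                state.update(step.param_updates)

        run_sequence({"time": 0.0, "delta_v": 0.0, "guidance_mode": 0, "throttle": 1.0})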

  2. Computational Ion Optics Design Evaluations

    NASA Technical Reports Server (NTRS)

    Malone, Shane P.; Soulas, George C.

    2004-01-01

    Ion optics computational models are invaluable tools in the design of ion optics systems. In this study a new computational model developed by an outside vendor for use at the NASA Glenn Research Center (GRC) is presented. This computational model is a gun code that has been modified to model the plasma sheaths both upstream and downstream of the ion optics. The model handles multiple species (e.g. singly and doubly-charged ions) and includes a charge-exchange model to support erosion estimations. The model uses commercially developed solid design and meshing software to allow high flexibility in ion optics geometric configurations. The results from this computational model are applied to the NEXT project to investigate the effects of crossover impingement erosion seen during the 2000-hour wear test.

  3. The potential of computer software that supports the diagnosis of workplace ergonomics in shaping health awareness

    NASA Astrophysics Data System (ADS)

    Lubkowska, Wioletta

    2017-11-01

    The growing prevalence of health problems among computer workstation workers has become one of the biggest threats to the overall health of our population. That is why many modern scientists are looking for ways and methods to prevent and reverse these negative trends. The purpose of this article is to present the potential for practical use of computer programs that support the design of an ergonomic workplace and the assessment of postural loads. These programs help configure the computer workstation correctly and adopt the correct body position during work, which reduces the risk of health problems. Creating visually attractive programs helps encourage and inspire those who work with a computer to introduce ergonomic solutions and reject a sedentary lifestyle.

  4. Influence of RF channels mismatch and mutual coupling phenomenon on performance of a multistatic passive radar

    NASA Astrophysics Data System (ADS)

    Hossa, Robert; Górski, Maksymilian

    2010-09-01

    In the paper we analyze the influence of RF channels mismatch and mutual coupling effect on the performance of the multistatic passive radar with Uniform Circular Array (UCA) configuration. The problem was tested intensively in numerous different scenarios with a reference virtual multistatic passive radar. Finally, exemplary results of the computer software simulations are provided and discussed.

  5. Android application and REST server system for quasar spectrum presentation and analysis

    NASA Astrophysics Data System (ADS)

    Wasiewicz, P.; Pietralik, K.; Hryniewicz, K.

    2017-08-01

    This paper describes the implementation of a system consisting of a mobile application and a RESTful-architecture server intended for the analysis and presentation of quasar spectra. It also describes the characteristics of quasars and their significance to the scientific community, the source used for acquiring the astronomical objects' spectral data, and the software solutions employed, and it presents cloud computing aspects and various possible deployment configurations.

  6. Shape resonances of Be- and Mg- investigated with the method of analytic continuation

    NASA Astrophysics Data System (ADS)

    Čurík, Roman; Paidarová, I.; Horáček, J.

    2018-05-01

    The regularized method of analytic continuation is used to study the low-energy negative-ion states of beryllium (configuration 2s²εp ²P) and magnesium (configuration 3s²εp ²P) atoms. The method applies an additional perturbation potential and requires only routine bound-state multi-electron quantum calculations. Such computations are accessible with most of the free or commercial quantum chemistry software available for atoms and molecules. The perturbation potential is implemented as a spherical Gaussian function with a fixed width. The stability of the analytic continuation technique with respect to the width and with respect to the input range of electron affinities is studied in detail. The computed resonance parameters Er = 0.282 eV, Γ = 0.316 eV for the 2p state of Be- and Er = 0.188 eV, Γ = 0.167 eV for the 3p state of Mg- agree well with the best results obtained by much more elaborate and computationally demanding present-day methods.
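
    A hedged sketch of the underlying idea, in generic notation that does not necessarily match the paper's exact formulation: the temporary anion is artificially bound by adding an attractive spherical Gaussian perturbation, bound-state energies are computed with standard quantum chemistry codes for several coupling strengths, and the energy curve is analytically continued back to zero coupling, where the complex energy yields the resonance position and width.

        % Gaussian perturbation added to the (N+1)-electron Hamiltonian
        H(\lambda) = H_0 - \lambda \, e^{-r^2/\sigma^2}, \qquad \sigma \ \text{fixed}

        % Bound-state energies E(\lambda) are computed for \lambda large enough that
        % the extra electron is bound; a rational (e.g., Pade-type) fit is then
        % continued to \lambda \to 0 in the complex plane:
        E(\lambda \to 0) \;\longrightarrow\; E_{\mathrm{res}} = E_r - \tfrac{i}{2}\,\Gamma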

  7. Design of Remote GPRS-based Gas Data Monitoring System

    NASA Astrophysics Data System (ADS)

    Yan, Xiyue; Yang, Jianhua; Lu, Wei

    2018-01-01

    In order to solve the problem of remote data transmission from gas flowmeters and to realize unattended on-site operation, an unattended GPRS-based remote monitoring system for gas data is designed in this paper. The slave computer of this system uses an embedded microprocessor to read data from the gas flowmeter over an RS-232 bus and transfers the data to the host computer through a DTU. On the host computer, a VB program dynamically binds the Winsock control to receive and parse the data. Using dynamic data exchange, the Kingview configuration software provides history trend curves, real-time trend curves, alarms, printing, web browsing, and other functions.
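
    A minimal sketch of the slave-computer role described above, assuming the pyserial package and a hypothetical line-oriented flowmeter protocol and server address; it simply reads frames from an RS-232 port and forwards them over a TCP socket (standing in for the GPRS DTU link).

        import socket
        import serial  # pyserial; assumed available on the embedded slave computer

        FLOWMETER_PORT = "/dev/ttyS0"                   # hypothetical RS-232 device
        HOST_ADDR = ("monitoring.example.org", 9000)    # hypothetical host computer

        def forward_flowmeter_frames() -> None:
            """Read line-delimited frames from the flowmeter and forward them over TCP."""
            with serial.Serial(FLOWMETER_PORT, baudrate=9600, timeout=2) as port, \
                 socket.create_connection(HOST_ADDR, timeout=10) as link:
                while True:
                    frame = port.readline()      # one hypothetical measurement frame
                    if frame:
                        link.sendall(frame)

        if __name__ == "__main__":
            forward_flowmeter_frames()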

  8. InSAR Scientific Computing Environment - The Home Stretch

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G.; Zebker, H. A.

    2011-12-01

    The Interferometric Synthetic Aperture Radar (InSAR) Scientific Computing Environment (ISCE) is a software development effort in its third and final year within the NASA Advanced Information Systems and Technology program. The ISCE is a new computing environment for geodetic image processing for InSAR sensors enabling scientists to reduce measurements directly from radar satellites to new geophysical products with relative ease. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. Upcoming international SAR missions will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment has the functionality to become a key element in processing data from NASA's proposed DESDynI mission into higher level data products, supporting a new class of analyses that take advantage of the long time and large spatial scales of these new data. At the core of ISCE is a new set of efficient and accurate InSAR algorithms. These algorithms are placed into an object-oriented, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models. The environment is designed to easily allow user contributions, enabling an open source community to extend the framework into the indefinite future. ISCE supports data from nearly all of the available satellite platforms, including ERS, EnviSAT, Radarsat-1, Radarsat-2, ALOS, TerraSAR-X, and Cosmo-SkyMed. The code applies a number of parallelization techniques and sensible approximations for speed. It is configured to work on modern linux-based computers with gcc compilers and python. ISCE is now a complete, functional package, under configuration management, and with extensive documentation and tested use cases appropriate to geodetic imaging applications. The software has been tested with canonical simulated radar data ("point targets") as well as with a variety of existing satellite data, cross-compared with other software packages. Its extensibility has already been proven by the straightforward addition of polarimetric processing and calibration, and derived filtering and estimation routines associated with polarimetry that supplement the original InSAR geodetic functionality. As of October 2011, the software is available for non-commercial use through UNAVCO's WinSAR consortium.

  9. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
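
    As a rough, hypothetical sketch of the kind of work division such a grid performs (not Squid's actual implementation), the code below splits a multi-sequence FASTA query into roughly equal chunks, one per worker node, so each node can run its share of BLAST searches in parallel.

        from typing import List

        def split_fasta(fasta_text: str, n_nodes: int) -> List[str]:
            """Split a multi-record FASTA string into n_nodes chunks of whole records."""
            records = [">" + rec for rec in fasta_text.split(">") if rec.strip()]
            chunks = [[] for _ in range(n_nodes)]
            for i, rec in enumerate(records):
                chunks[i % n_nodes].append(rec)      # round-robin assignment to nodes
            return ["".join(chunk) for chunk in chunks]

        # Tiny illustrative query set (hypothetical sequences).
        query = ">seq1\nATGCGT\n>seq2\nTTGACA\n>seq3\nGGCATC\n"
        for node_id, chunk in enumerate(split_fasta(query, n_nodes=2)):
            print(f"node {node_id} would BLAST:\n{chunk}")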

  10. Managing configuration software of ground software applications with glueware

    NASA Technical Reports Server (NTRS)

    Larsen, B.; Herrera, R.; Sesplaukis, T.; Cheng, L.; Sarrel, M.

    2003-01-01

    This paper reports on a simple, low-cost effort to streamline the configuration of the uplink software tools. Even though the existing ground system consisted of JPL and custom Cassini software rather than COTS, we chose a glueware approach--reintegrating with wrappers and bridges and adding minimal new functionality.

  11. Model development, testing and experimentation in a CyberWorkstation for Brain-Machine Interface research.

    PubMed

    Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose

    2010-01-01

    The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.

  12. VALIDATION OF ANSYS FINITE ELEMENT ANALYSIS SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAMM, E.R.

    2003-06-27

    This document provides a record of the verification and validation of the ANSYS Version 7.0 software that is installed on selected CH2M HILL computers. The issues addressed include software verification, installation, validation, configuration management, and error reporting. The ANSYS® computer program is a large-scale, multi-purpose finite element program which may be used for solving several classes of engineering analyses. The analysis capabilities of ANSYS Full Mechanical Version 7.0 installed on selected CH2M Hill Hanford Group (CH2M HILL) Intel-processor-based computers include the ability to solve static and dynamic structural analyses, steady-state and transient heat transfer problems, mode-frequency and buckling eigenvalue problems, static or time-varying magnetic analyses, and various types of field and coupled-field applications. The program contains many special features which allow nonlinearities or secondary effects to be included in the solution, such as plasticity, large strain, hyperelasticity, creep, swelling, large deflections, contact, stress stiffening, temperature dependency, material anisotropy, and thermal radiation. The ANSYS program has been in commercial use since 1970, and has been used extensively in the aerospace, automotive, construction, electronic, energy services, manufacturing, nuclear, plastics, oil and steel industries.

  13. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.

  14. A self-configuring control system for storage and computing departments at INFN-CNAF Tier1

    NASA Astrophysics Data System (ADS)

    Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir

    2015-05-01

    The storage and farming departments at the INFN-CNAF Tier1 [1] manage several thousand computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in different GPFS file system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the XRootD protocol, and finally data writing and reading operations on the magnetic tape backend. One of the most important and essential points for a reliable service is a control system that can warn when problems arise and that is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations can change during daily operations: for example, the roles of GPFS cluster nodes can be modified, so obsolete nodes must be removed from the control system and new servers added to those already present. Managing all these changes manually can be difficult when there are many of them; it can also take a long time and is easily subject to human error or misconfiguration. For these reasons we have developed a control system that is able to reconfigure itself whenever a change occurs. This system has been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawback. There are three key elements in this system, and a minimal sketch of how they fit together is given below. The first is a software configuration service (e.g., Quattor or Puppet) for the server machines to be monitored by the control system; this service must ensure the presence of the appropriate sensors and custom scripts on the monitored nodes and must be able to install and update software packages on them. The second key element is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information such as the hardware type, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The last key element is the control system software (in our implementation, Nagios), capable of assessing the status of servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. The integration of these three elements is achieved through appropriate scripts and custom implementations that allow the system to configure itself according to a decision logic; the whole combination of the above-mentioned components is discussed in depth in this paper.
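
    As a hedged illustration of the self-configuration step (not the production implementation), the sketch below reads a list of machines, standing in for the production database, and emits Nagios-style host definitions only for the nodes currently marked as in production, so that adding or retiring a server in the database automatically changes the monitoring configuration.

        # Stand-in for the production machine database described above.
        MACHINES = [
            {"hostname": "gpfs-node-01", "ip": "192.168.10.11", "in_production": True},
            {"hostname": "gridftp-02",   "ip": "192.168.10.22", "in_production": True},
            {"hostname": "old-osd-07",   "ip": "192.168.10.33", "in_production": False},
        ]

        HOST_TEMPLATE = (
            "define host {{\n"
            "    use                 generic-host\n"
            "    host_name           {hostname}\n"
            "    address             {ip}\n"
            "}}\n"
        )

        def render_monitoring_config(machines) -> str:
            """Emit host definitions only for machines still in production."""
            return "\n".join(HOST_TEMPLATE.format(**m)
                             for m in machines if m["in_production"])

        if __name__ == "__main__":
            print(render_monitoring_config(MACHINES))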

  15. Reconfigurable Hardware Adapts to Changing Mission Demands

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.

  16. Development of new data acquisition system for COMPASS experiment

    NASA Astrophysics Data System (ADS)

    Bodlak, M.; Frolov, V.; Jary, V.; Huber, S.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Virius, M.

    2016-04-01

    This paper presents the development and recent status of the new data acquisition system of the COMPASS experiment at CERN, which must handle a trigger rate of up to 50 kHz and an average event size of 36 kB during a 10-second period with beam followed by an approximately 40-second period without beam. In the original DAQ, event building is performed by software deployed on a switched computer network, and the data readout is based on deprecated PCI technology; the new system replaces the event-building network with custom FPGA-based hardware. The custom cards are introduced and the advantages of FPGA technology for DAQ-related tasks are discussed. In this paper, we focus on the software part, which is mainly responsible for control and monitoring. Most of the system can run as slow control; only the readout process has real-time requirements. The software design is built on state machines implemented using the Qt framework; communication between the remote nodes that form the software architecture is based on the DIM library and IPBus technology. Furthermore, the PHP and JS languages are used to maintain the system configuration, and a MySQL database was selected as storage for both the system configuration and system messages. The system has been designed for a maximum throughput of 1500 MB/s and a large buffering capability used to spread the load on the readout computers over a longer period of time. Great emphasis is put on data latency, data consistency, and timing checks, which are performed at each stage of event assembly. The system collects the results of these checks, which, together with a special data format, allow the software to localize the origin of problems in the data transmission process. A prototype version of the system has already been developed and tested, and it fulfills all given requirements. It is expected that the full-scale version of the system will be finalized in June 2014 and deployed in September, provided that tests with a cosmic run succeed.

  17. Robust Duplication with Comparison Methods in Microcontrollers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather Marie; Baker, Zachary Kent; Fairbanks, Thomas D.

    Commercial microprocessors could be useful computational platforms in space systems, as long as the risk is bounded. Many spacecraft are computationally constrained because all of the computation is done on a single radiation-hardened microprocessor. It is possible that a commercial microprocessor could be used for configuration, monitoring, and background tasks that are not mission critical. Most commercial microprocessors are affected by radiation, including single-event effects (SEEs) that could be destructive to the component or corrupt the data. Part screening can help designers avoid components with destructive failure modes, and mitigation can suppress data corruption. We have been experimenting with a method for masking radiation-induced faults through the software executing on the microprocessor. While triple-modular redundancy (TMR) techniques are very effective at masking faults in software, the increased execution time needed to complete the computation is not desirable. In this article we present a technique for combining duplication with compare (DWC) with TMR that decreases observable errors by as much as 145 times with only a 2.35-times decrease in performance.

  18. Robust Duplication with Comparison Methods in Microcontrollers

    DOE PAGES

    Quinn, Heather Marie; Baker, Zachary Kent; Fairbanks, Thomas D.; ...

    2016-01-01

    Commercial microprocessors could be useful computational platforms in space systems, as long as the risk is bounded. Many spacecraft are computationally constrained because all of the computation is done on a single radiation-hardened microprocessor. It is possible that a commercial microprocessor could be used for configuration, monitoring, and background tasks that are not mission critical. Most commercial microprocessors are affected by radiation, including single-event effects (SEEs) that could be destructive to the component or corrupt the data. Part screening can help designers avoid components with destructive failure modes, and mitigation can suppress data corruption. We have been experimenting with a method for masking radiation-induced faults through the software executing on the microprocessor. While triple-modular redundancy (TMR) techniques are very effective at masking faults in software, the increased execution time needed to complete the computation is not desirable. In this article we present a technique for combining duplication with compare (DWC) with TMR that decreases observable errors by as much as 145 times with only a 2.35-times decrease in performance.
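
    A hedged, schematic sketch of the general duplication-with-compare plus TMR idea named above (not the authors' actual implementation): run a computation twice and accept the result if the copies agree; only on a mismatch fall back to a third execution and a majority vote, so the voting overhead is paid only when a fault is suspected.

        from collections import Counter
        from typing import Callable, TypeVar

        T = TypeVar("T")

        def dwc_with_tmr_fallback(compute: Callable[[], T]) -> T:
            """Duplication with compare; fall back to a three-way vote on mismatch."""
            first, second = compute(), compute()
            if first == second:                 # DWC: the common, cheap case
                return first
            third = compute()                   # fallback: one extra copy, then vote
            value, count = Counter([first, second, third]).most_common(1)[0]
            if count >= 2:
                return value
            raise RuntimeError("no majority; retry or signal an uncorrectable fault")

        # Illustrative use with a deterministic computation (no injected faults here).
        print(dwc_with_tmr_fallback(lambda: sum(range(10))))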

  19. Personal computer security: part 1. Firewalls, antivirus software, and Internet security suites.

    PubMed

    Caruso, Ronald D

    2003-01-01

    Personal computer (PC) security in the era of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) involves two interrelated elements: safeguarding the basic computer system itself and protecting the information it contains and transmits, including personal files. HIPAA regulations have toughened the requirements for securing patient information, requiring every radiologist with such data to take further precautions. Security starts with physically securing the computer. Account passwords and a password-protected screen saver should also be set up. A modern antivirus program can easily be installed and configured. File scanning and updating of virus definitions are simple processes that can largely be automated and should be performed at least weekly. A software firewall is also essential for protection from outside intrusion, and an inexpensive hardware firewall can provide yet another layer of protection. An Internet security suite yields additional safety. Regular updating of the security features of installed programs is important. Obtaining a moderate degree of PC safety and security is somewhat inconvenient but is necessary and well worth the effort. Copyright RSNA, 2003

  20. The Intelligent Flight Control Program (IFCS)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is the closeout report for the Research Cooperative Agreement NCC4-00130 of accomplishments for the Intelligent Flight Control System (IFCS) Project. It has been a pleasure working with NASA and NASA partners as we strive to meet the goals of this research initiative. ISR was engaged in this Research Cooperative Agreement beginning 01 January 2003 and ending 31 January 2004. During this time ISR conducted efforts towards development of the ARTS II Computer Software Configuration Item (CSCI) version 4.0 by performing or developing the following: 1) Requirements Definition; 2) Software Design and Development; 3) Hardware In the Loop Simulation; 4) Unit Level testing; 5) Documentation.

  1. Review of the Water Resources Information System of Argentina

    USGS Publications Warehouse

    Hutchison, N.E.

    1987-01-01

    A representative of the U.S. Geological Survey traveled to Buenos Aires, Argentina, in November 1986, to discuss water information systems and data bank implementation in the Argentine Government Center for Water Resources Information. Software has been written by Center personnel for a minicomputer to be used to manage inventory (index) data and water quality data. Additional hardware and software have been ordered to upgrade the existing computer. Four microcomputers, statistical and data base management software, and network hardware and software for linking the computers have also been ordered. The Center plans to develop a nationwide distributed data base for Argentina that will include the major regional offices as nodes. Needs for continued development of the water resources information system for Argentina were reviewed. Identified needs include: (1) conducting a requirements analysis to define the content of the data base and insure that all user requirements are met, (2) preparing a plan for the development, implementation, and operation of the data base, and (3) developing a conceptual design to inform all development personnel and users of the basic functionality planned for the system. A quality assurance and configuration management program to provide oversight to the development process was also discussed. (USGS)

  2. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become challenges in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.

  3. Control and Information Systems for the National Ignition Facility

    DOE PAGES

    Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...

    2017-03-23

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. Thus, this work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  4. Control and Information Systems for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Gordon; Casey, Allan; Christensen, Marvin

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. Thus, this work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  5. Thermal Analysis on Plume Heating of the Main Engine on the Crew Exploration Vehicle Service Module

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Yuko, James R.

    2007-01-01

    The crew exploration vehicle (CEV) service module (SM) main engine plume heating is analyzed using multiple numerical tools. The chemical equilibrium compositions and applications (CEA) code is used to compute the flow field inside the engine nozzle. The plume expansion into the ambient atmosphere is simulated using an axisymmetric space-time conservation element and solution element (CE/SE) Euler code, a computational fluid dynamics (CFD) software. The thermal analysis, including both convective and radiative heat transfer from the hot gas inside the engine nozzle and gas radiation from the plume, is performed using Thermal Desktop. Three SM configurations, the Lockheed Martin (LM) designed 604, 605, and 606 configurations, are considered. Designs of the multilayer insulation (MLI) for the stowed solar arrays, which are subject to plume heating from the main engine and are part of the passive thermal control system (PTCS), are proposed and validated.

  6. Building confidence and credibility amid growing model and computing complexity

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.

    2017-12-01

    As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and provide the certainty of predictions is becoming ever more challenging for reasons that are generally well known yet still difficult to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model - but with a similar software philosophy - are presented to show how a model-developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a Python-script-based quick-look overview of a fully-coupled global model configuration to determine quickly whether it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the perspective of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.
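
    A hedged sketch of the ensemble-based equivalence idea attributed to EVE above (not the actual EVE implementation): given a climate statistic sampled from a baseline ensemble and from an ensemble built with a changed compiler or machine setting, a two-sample Kolmogorov-Smirnov test can flag whether the two distributions differ more than internal variability would explain.

        import random
        from scipy.stats import ks_2samp   # two-sample Kolmogorov-Smirnov test

        def ensembles_equivalent(baseline, candidate, alpha: float = 0.05) -> bool:
            """Return True if the candidate ensemble statistic is consistent with baseline."""
            result = ks_2samp(baseline, candidate)
            return result.pvalue >= alpha    # fail to reject the null of same distribution

        # Illustrative synthetic "global-mean temperature" statistics for two ensembles.
        random.seed(0)
        baseline = [14.0 + random.gauss(0, 0.1) for _ in range(30)]
        candidate = [14.0 + random.gauss(0, 0.1) for _ in range(30)]
        print("equivalent:", ensembles_equivalent(baseline, candidate))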

  7. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general-purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general-purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute-intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available to developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g., threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires separating the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and it results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g., PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g., PLUC (Verstegen et al. 2014)), and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References: Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010, A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502. Best Paper Award 2010: Software and Decision Support. Van Beek, L. P. H., Y. Wada, and M. F. P. Bierkens, 2011, Global monthly water stress: 1. Water balance and water availability. Water Resources Research, 47. Verstegen, J. A., D. Karssenberg, F. van der Hilst, and A. P. C. Faaij, 2014, Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software, 53, pp. 121-136.
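
    Although Fern itself is a C++ library, the idea of hiding task distribution behind a grid algorithm can be illustrated with a small hedged Python sketch: a local (cell-by-cell) operation is applied to row blocks of a raster in parallel worker processes, while the model code only calls a single function.

        from multiprocessing import Pool

        def _scale_row(row):
            """Local operation applied independently to every cell of one row."""
            return [2.0 * value for value in row]

        def local_operation(raster, workers: int = 4):
            """Apply a cell-wise operation to a raster, distributing rows over CPU cores.

            The caller (the "model code") never sees how the work is distributed,
            mirroring the separation of concerns described in the abstract.
            """
            with Pool(processes=workers) as pool:
                return pool.map(_scale_row, raster)

        if __name__ == "__main__":
            raster = [[float(r * 10 + c) for c in range(5)] for r in range(4)]
            print(local_operation(raster, workers=2))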

  8. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  9. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  10. Setting Up Git Software Tool on Linux | High-Performance Computing | NREL

    Science.gov Websites

    This web page describes how to set up the Git software tool on NREL high-performance computing systems, including creating secure shell (SSH) keys on those systems and the basic configuration steps needed to start using a remote Git repository hosted at github.nrel.gov.

  11. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, volume 2, part 1. Appendix A: Software documentation

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    Documentation of the preliminary software developed as a framework for a generalized integrated robotic system simulation is presented. The program structure is composed of three major functions controlled by a program executive. The three major functions are: system definition, analysis tools, and post processing. The system definition function handles user input of system parameters and definition of the manipulator configuration. The analysis tools function handles the computational requirements of the program. The post processing function allows for more detailed study of the results of analysis tool function executions. Also documented is the manipulator joint model software to be used as the basis of the manipulator simulation which will be part of the analysis tools capability.

  12. VIDANA: Data Management System for Nano Satellites

    NASA Astrophysics Data System (ADS)

    Montenegro, Sergio; Walter, Thomas; Dilger, Erik

    2013-08-01

    A VIDANA data management system is a network of software and hardware components. This implies a software network, a hardware network, and a smooth connection between the two. Our strategy is based on our innovative middleware: a reliable interconnection network (software and hardware) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. Our middleware connects not only software components but also devices and software together. Software and hardware communicate with each other without having to distinguish which functions are implemented in software and which in hardware. Components may be turned on and off at any time, and the whole system will autonomously adapt to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified HW/SW communication protocols. For many of these aspects we "learn from nature", where astonishing reference implementations can be found.

  13. A reconfigurable visual-programming library for real-time closed-loop cellular electrophysiology

    PubMed Central

    Biró, István; Giugliano, Michele

    2015-01-01

    Most of the software platforms for cellular electrophysiology are limited in terms of flexibility, hardware support, ease of use, or re-configuration and adaptation for non-expert users. Moreover, advanced experimental protocols requiring real-time closed-loop operation to investigate excitability, plasticity, and dynamics are largely inaccessible to users without moderate to substantial computer proficiency. Here we present an approach based on MATLAB/Simulink, exploiting the benefits of LEGO-like visual programming and configuration, combined with a small but easily extensible library of functional software components. We provide and validate several examples, implementing conventional and more sophisticated experimental protocols such as dynamic clamp or the combined use of intracellular and extracellular methods, involving closed-loop real-time control. The functionality of each of these examples is demonstrated with relevant experiments. These can be used as a starting point to create and support a larger variety of electrophysiological tools and methods, hopefully extending the range of default techniques and protocols currently employed in experimental labs across the world. PMID:26157385

  14. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.

  15. Overview of Transonic to Hypersonic Stage Separation Tool Development for Multi-Stage-to-Orbit Concepts

    NASA Technical Reports Server (NTRS)

    Murphy, Kelly J.; Bunning, Pieter G.; Pamadi, Bandu N.; Scallion, William I.; Jones, Kenneth M.

    2004-01-01

    An overview of research efforts at NASA in support of the stage separation and ascent aerothermodynamics research program is presented. The objective of this work is to develop a synergistic suite of experimental, computational, and engineering tools and methods to apply to vehicle separation across the transonic to hypersonic speed regimes. Proximity testing of a generic bimese wing-body configuration is on-going in the transonic (Mach numbers 0.6, 1.05, and 1.1), supersonic (Mach numbers 2.3, 3.0, and 4.5) and hypersonic (Mach numbers 6 and 10) speed regimes in four wind tunnel facilities at the NASA Langley Research Center. An overset grid, Navier-Stokes flow solver has been enhanced and demonstrated on a matrix of proximity cases and on a dynamic separation simulation of the bimese configuration. Steady-state predictions with this solver were in excellent agreement with wind tunnel data at Mach 3 as were predictions via a Cartesian-grid Euler solver. Experimental and computational data have been used to evaluate multi-body enhancements to the widely-used Aerodynamic Preliminary Analysis System, an engineering methodology, and to develop a new software package, SepSim, for the simulation and visualization of vehicle motions in a stage separation scenario. Web-based software will be used for archiving information generated from this research program into a database accessible to the user community. Thus, a framework has been established to study stage separation problems using coordinated experimental, computational, and engineering tools.

  16. Architecture and Implementation of OpenPET Firmware and Embedded Software

    PubMed Central

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng

    2016-01-01

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics – a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from the eight Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or the runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034
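
    As a quick sanity check on the channel counts quoted above, the arithmetic below multiplies channels per Detector Board by Detector Boards per Support Board; the number of Support Boards per configuration is inferred here for illustration and is not stated in the abstract.

```python
# Back-of-the-envelope check of the OpenPET channel capacities. The Support
# Board counts below are inferred for illustration only; the abstract states
# only the per-board channel counts and the total capacities.

def total_channels(support_boards, detector_boards_per_sb, channels_per_db):
    """Channels = Support Boards x Detector Boards per SB x channels per DB."""
    return support_boards * detector_boards_per_sb * channels_per_db

# Standard System Configuration: 16-channel Detector Boards,
# up to eight Detector Boards per Support Board.
assert total_channels(8, 8, 16) == 1024

# Large System Configuration: 32-channel Detector Boards.
assert total_channels(64, 8, 32) == 16384
```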

  17. PC-PVT 2.0: An updated platform for psychomotor vigilance task testing, analysis, prediction, and visualization.

    PubMed

    Reifman, Jaques; Kumar, Kamal; Khitrov, Maxim Y; Liu, Jianbo; Ramakrishnan, Sridhar

    2018-07-01

    The psychomotor vigilance task (PVT) has been widely used to assess the effects of sleep deprivation on human neurobehavioral performance. To facilitate research in this field, we previously developed the PC-PVT, a freely available software system analogous to the "gold-standard" PVT-192 that, in addition to allowing for simple visual reaction time (RT) tests, also allows for near real-time PVT analysis, prediction, and visualization in a personal computer (PC). Here we present the PC-PVT 2.0 for Windows 10 operating system, which has the capability to couple PVT tests of a study protocol with the study's sleep/wake and caffeine schedules, and make real-time individualized predictions of PVT performance for such schedules. We characterized the accuracy and precision of the software in measuring RT, using 44 distinct combinations of PC hardware system configurations. We found that 15 system configurations measured RTs with an average delay of less than 10 ms, an error comparable to that of the PVT-192. To achieve such small delays, the system configuration should always use a gaming mouse as the means to respond to visual stimuli. We recommend using a discrete graphical processing unit for desktop PCs and an external monitor for laptop PCs. This update integrates a study's sleep/wake and caffeine schedules with the testing software, facilitating testing and outcome visualization, and provides near-real-time individualized PVT predictions for any sleep-loss condition considering caffeine effects. The software, with its enhanced PVT analysis, visualization, and prediction capabilities, can be freely downloaded from https://pcpvt.bhsai.org. Published by Elsevier B.V.

  18. Table-driven configuration and formatting of telemetry data in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Manning, Evan

    1994-01-01

    With a restructured software architecture for telemetry system control and data processing, the NASA/Deep Space Network (DSN) has substantially improved its ability to accommodate a wide variety of spacecraft in an era of 'better, faster, cheaper'. In the new architecture, the permanent software implements all capabilities needed by any system user, and text tables specify how these capabilities are to be used for each spacecraft. Most changes can now be made rapidly, outside of the traditional software development cycle. The system can be updated to support a new spacecraft through table changes rather than software changes, reducing the implementation, test, and delivery cycle for such a change from three months to three weeks. The mechanical separation of the text table files from the program software, with tables only loaded into memory when that mission is being supported, dramatically reduces the level of regression testing required. The format of each table is a different compromise between ease of human interpretation, efficiency of computer interpretation, and flexibility.
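
    The sketch below illustrates the table-driven idea in miniature: the permanent software exposes a fixed set of capabilities, and a per-spacecraft text table selects and parameterizes them. The table syntax and capability names are hypothetical, not the DSN's actual table format.

```python
# Minimal illustration of table-driven configuration. The 'capability
# key=value ...' line format and the capability names are assumptions made
# for this sketch, not the DSN telemetry table syntax.

CAPABILITIES = {"frame_sync", "reed_solomon_decode", "packet_extraction"}

def load_mission_table(path):
    """Parse one capability per line into a configuration dictionary."""
    config = {}
    with open(path) as table:
        for line in table:
            line = line.split("#", 1)[0].strip()   # drop comments and blanks
            if not line:
                continue
            name, *params = line.split()
            if name not in CAPABILITIES:
                raise ValueError(f"unknown capability: {name}")
            config[name] = dict(p.split("=", 1) for p in params)
    return config

# Example table line:  frame_sync marker=1ACFFC1D frame_length=1115
```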

  19. Computer hardware and software for robotic control

    NASA Technical Reports Server (NTRS)

    Davis, Virgil Leon

    1987-01-01

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.

  20. Experimental system for computer network via satellite /CS/. III - Network control processor

    NASA Astrophysics Data System (ADS)

    Kakinuma, Y.; Ito, A.; Takahashi, H.; Uchida, K.; Matsumoto, K.; Mitsudome, H.

    1982-03-01

    The network control processor (NCP) is responsible for generating traffic, controlling links, and controlling the transmission of bursts. The NCP executes protocols, monitors experiments, and gathers and compiles measurement data; these programs are loaded on a minicomputer (MELCOM 70/40) with 512 KB of memory. In the experiment, the NCP acts as a traffic generator in place of a host computer. For this purpose, 15 simulated stations are realized in software within each user station. This paper describes the configuration of the NCP and the implementation of the protocols for the experimental system.

  1. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
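
    For readers unfamiliar with the generalized delta rule mentioned above, the sketch below shows one back-propagation update for a single hidden layer. It is a generic illustration of the algorithm, not the NNETS implementation (which runs on Transputers and is partly written in OCCAM).

```python
import numpy as np

# One back-propagation step using the generalized delta rule for a network
# with one hidden layer and sigmoid units. Generic illustration only.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, target, W1, W2, eta=0.5):
    """Forward pass, error back-propagation, and delta-rule weight updates."""
    h = sigmoid(W1 @ x)                              # hidden activations
    y = sigmoid(W2 @ h)                              # output activations
    delta_out = (target - y) * y * (1 - y)           # output-layer deltas
    delta_hid = (W2.T @ delta_out) * h * (1 - h)     # hidden-layer deltas
    W2 = W2 + eta * np.outer(delta_out, h)           # delta_w = eta * delta * input
    W1 = W1 + eta * np.outer(delta_hid, x)
    return W1, W2
```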

  2. The Multi-Attribute Task Battery II (MATB-II) Software for Human Performance and Workload Research: A User's Guide

    NASA Technical Reports Server (NTRS)

    Santiago-Espada, Yamira; Myer, Robert R.; Latorella, Kara A.; Comstock, James R., Jr.

    2011-01-01

    The Multi-Attribute Task Battery (MAT Battery), a computer-based task designed to evaluate operator performance and workload, has been redeveloped to operate in the Windows XP Service Pack 3, Windows Vista, and Windows 7 operating systems. MATB-II includes essentially the same tasks as the original MAT Battery, plus new configuration options, including a graphical user interface for controlling modes of operation. MATB-II can be executed in either training or testing mode, as defined by the MATB-II configuration file. The configuration file also allows setup of the default timeouts for the tasks and the flow rates of the pumps and tank levels of the Resource Management (RESMAN) task. MATB-II comes with a default event file that an experimenter can modify and adapt.

  3. Software control and system configuration management - A process that works

    NASA Technical Reports Server (NTRS)

    Petersen, K. L.; Flores, C., Jr.

    1983-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  4. Software control and system configuration management: A systems-wide approach

    NASA Technical Reports Server (NTRS)

    Petersen, K. L.; Flores, C., Jr.

    1984-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  5. Computations on Wings With Full-Span Oscillating Control Surfaces Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2013-01-01

    A dual-level parallel procedure is presented for computing large databases to support aerospace vehicle design. This procedure has been developed as a single Unix script within the Parallel Batch Submission environment utilizing MPIexec, and it runs MPI-based analysis software. It has been developed to provide a process for aerospace designers to generate data for large numbers of cases with the highest possible fidelity and reasonable wall clock time. A single job submission environment has been created to avoid keeping track of multiple jobs and the associated system administration overhead. The process has been demonstrated for computing large databases for the design of typical aerospace configurations, a launch vehicle and a rotorcraft.

  6. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  7. CernVM WebAPI - Controlling Virtual Machines from the Web

    NASA Astrophysics Data System (ADS)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there is a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates the software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling, and interfacing with a VM instance in the user's computer, while at the same time relieving the user of the burden of downloading, installing, and configuring the hypervisor. WebAPI comes with a lightweight javascript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we will overview this new technology, discuss its security features, and examine some test cases where it is already in use.

  8. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.

    1984-01-01

    The project to automate the management of software production systems is described. The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system are completed to prototype form. The construction methods are described.

  9. Mechanical System Analysis/Design Tool (MSAT) Quick Guide

    NASA Technical Reports Server (NTRS)

    Lee, HauHua; Kolb, Mark; Madelone, Jack

    1998-01-01

    MSAT is a unique multi-component, multi-disciplinary tool that organizes design analysis tasks around object-oriented representations of configuration components, analysis programs and modules, and data transfer links between them. This creative modular architecture enables rapid generation of input streams for trade-off studies of various engine configurations. The data transfer links automatically transport output from one application as relevant input to the next application once the sequence is set up by the user. The computations are managed via constraint propagation, with the constraints supplied by the user as part of any optimization module. The software can be used in the preliminary design stage as well as during the detailed design phase of the product development process.

  10. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  11. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstallation is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the control computer is a ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  12. Software Manages Documentation in a Large Test Facility

    NASA Technical Reports Server (NTRS)

    Gurneck, Joseph M.

    2001-01-01

    The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.

  13. 76 FR 12617 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-08

    ... installing new operational software for the electrical load management system and configuration database... the electrical load management system operational software and configuration database software, in... Management, P.O. Box 3707, MC 2H-65, Seattle, Washington 98124-2207; telephone 206- 544-5000, extension 1...

  14. Patient-specific stress analyses in the ascending thoracic aorta using a finite-element implementation of the constrained mixture theory.

    PubMed

    Mousavi, S Jamaleddin; Avril, Stéphane

    2017-10-01

    It is now a rather common approach to perform patient-specific stress analyses of arterial walls using finite-element models reconstructed from gated medical images. However, this requires to compute for every Gauss point the deformation gradient between the current configuration and a stress-free reference configuration. It is technically difficult to define such a reference configuration, and there is actually no guarantee that a stress-free configuration is physically attainable due to the presence of internal stresses in unloaded soft tissues. An alternative framework was proposed by Bellini et al. (Ann Biomed Eng 42(3):488-502, 2014). It consists of computing the deformation gradients between the current configuration and a prestressed reference configuration. We present here the first finite-element results based on this concept using the Abaqus software. The reference configuration is set arbitrarily to the in vivo average geometry of the artery, which is obtained from gated medical images and is assumed to be mechanobiologically homeostatic. For every Gauss point, the stress is split additively into the contributions of each individual load-bearing constituent of the tissue, namely elastin, collagen, smooth muscle cells. Each constituent is assigned an independent prestretch in the reference configuration, named the deposition stretch. The outstanding advantage of the present approach is that it simultaneously computes the in situ stresses existing in the reference configuration and predicts the residual stresses that occur after removing the different loadings applied onto the artery (pressure and axial load). As a proof of concept, we applied it on an ideal thick-wall cylinder and showed that the obtained results were consistent with corresponding experimental and analytical results of the well-known literature. In addition, we developed a patient-specific model of a human ascending thoracic aneurysmal aorta and demonstrated the utility in predicting the wall stress distribution in vivo under the effects of physiological pressure. Finally, we simulated the whole process preceding traditional in vitro uniaxial tensile testing of arteries, including excision from the body, radial cutting, flattening and subsequent tensile loading, showing how this process may impact the final mechanical properties derived from these in vitro tests.
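
    The additive stress split described above can be summarized, in the generic constrained-mixture form, as below; the notation (deformation gradient F from the prestressed in vivo reference configuration, deposition stretch tensor G_i, and mass fraction phi_i of constituent i) is assumed here for illustration and may differ from the paper's.

```latex
% Sketch of the additive constituent-wise stress split (notation assumed):
\[
  \boldsymbol{\sigma}
  = \sum_{i \,\in\, \{\text{elastin},\, \text{collagen},\, \text{SMC}\}}
      \phi_i \, \boldsymbol{\sigma}_i\!\left(\mathbf{F}\,\mathbf{G}_i\right)
\]
```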

  15. Towards easing the configuration and new team member accommodation for open source software based portals

    NASA Astrophysics Data System (ADS)

    Fu, L.; West, P.; Zednik, S.; Fox, P. A.

    2013-12-01

    For simple portals such as vocabulary based services, which contain small amounts of data and require only hyper-textual representation, it is often an overkill to adopt the whole software stack of database, middleware and front end, or to use a general Web development framework as the starting point of development. Directly combining open source software is a much more favorable approach. However, our experience with the Coastal and Marine Spatial Planning Vocabulary (CMSPV) service portal shows that there are still issues such as system configuration and accommodating a new team member that need to be handled carefully. In this contribution, we share our experience in the context of the CMSPV portal, and focus on the tools and mechanisms we've developed to ease the configuration job and the incorporation process of new project members. We discuss the configuration issues that arise when we don't have complete control over how the software in use is configured and need to follow existing configuration styles that may not be well documented, especially when multiple pieces of such software need to work together as a combined system. As for the CMSPV portal, it is built on two pieces of open source software that are still under rapid development: a Fuseki data server and Epimorphics Linked Data API (ELDA) front end. Both lack mature documentation and tutorials. We developed comparison and labeling tools to ease the problem of system configuration. Another problem that slowed down the project is that project members came and went during the development process, so new members needed to start with a partially configured system and incomplete documentation left by old members. We developed documentation/tutorial maintenance mechanisms based on our comparison and labeling tools to make it easier for the new members to be incorporated into the project. These tools and mechanisms also provided benefit to other projects that reused the software components from the CMSPV system.
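
    In the spirit of the comparison and labeling tools mentioned above, the sketch below diffs two flat configuration files and labels missing, extra, and changed keys. The key=value format is an assumption for illustration; it is not the Fuseki or ELDA configuration syntax, nor the authors' actual tool.

```python
# Hypothetical configuration-comparison helper: flag keys that differ between
# a known-good configuration and a newly deployed one so undocumented
# settings stand out. The key=value file format is assumed for this sketch.

def read_config(path):
    pairs = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                pairs[key.strip()] = value.strip()
    return pairs

def label_differences(reference_path, candidate_path):
    ref, cand = read_config(reference_path), read_config(candidate_path)
    for key in sorted(ref.keys() | cand.keys()):
        if key not in cand:
            print(f"MISSING  {key} (expected {ref[key]!r})")
        elif key not in ref:
            print(f"EXTRA    {key} = {cand[key]!r}")
        elif ref[key] != cand[key]:
            print(f"CHANGED  {key}: {ref[key]!r} -> {cand[key]!r}")
```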

  16. Measure Guideline: Optimizing the Configuration of Flexible Duct Junction Boxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beach, R.; Burdick, A.

    2014-03-01

    This measure guideline offers additional recommendations to heating, ventilation, and air conditioning (HVAC) system designers for optimizing flexible duct, constant-volume HVAC systems using junction boxes within Air Conditioning Contractors of America (ACCA) Manual D guidance. IBACOS used computational fluid dynamics software to explore and develop guidance to better control the airflow effects of factors that may impact pressure losses within junction boxes among various design configurations. These recommendations can help to ensure that a system aligns more closely with the design and the occupants' comfort expectations. Specifically, the recommendations described herein show how to configure a rectangular box with four outlets, a triangular box with three outlets, metal wyes with two outlets, and multiple configurations for more than four outlets. Designers of HVAC systems, contractors who are fabricating junction boxes on site, and anyone using the ACCA Manual D process for sizing duct runs will find this measure guideline invaluable for more accurately minimizing pressure losses when using junction boxes with flexible ducts.

  17. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting.

    PubMed

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.

  18. OpenPET Hardware, Firmware, Software, and Board Design Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Nimeh, Faisal; Choong, Woon-Sengq; Moses, William W.

    OpenPET is an open source, flexible, high-performance, and modular data acquisition system for a variety of applications. The OpenPET electronics are capable of reading analog voltage or current signals from a wide variety of sensors. The electronics boards make extensive use of field programmable gate arrays (FPGAs) to provide flexibility and scalability. Firmware and software for the FPGAs and computer are used to control and acquire data from the system. The command and control flow is similar to the data flow; however, the commands are initiated from the computer and follow a tree topology (i.e., from top to bottom). Each node in the tree discovers its parent and children, and all addresses are configured accordingly. A user (or a script) initiates a command from the computer. This command is translated and encoded for the corresponding child (e.g., SB, MB, DB, etc.). In turn, each node passes the command to its corresponding child(ren) by looking at the destination address. Finally, once the command reaches its desired destination(s), the corresponding node(s) execute(s) the command and send(s) a reply, if required. All the firmware, software, and the electronics board design files are distributed through the OpenPET website (http://openpet.lbl.gov).
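
    The top-down command flow described above can be pictured with the small sketch below: each node knows its children and forwards a command toward the destination address, executing it and replying only if it is the addressee. The addressing scheme and node names are illustrative, not the OpenPET protocol.

```python
# Illustrative tree-routed command flow (not the OpenPET protocol itself).

class Node:
    def __init__(self, address, children=None):
        self.address = address
        self.children = children or []

    def handle(self, destination, command):
        if destination == self.address:
            return f"{self.address}: executed {command!r}"   # reply goes back up
        for child in self.children:
            if destination.startswith(child.address):
                return child.handle(destination, command)
        raise LookupError(f"{self.address}: no route to {destination}")

# Example tree: one Support Board with two Detector Boards.
tree = Node("SB0", children=[Node("SB0.DB0"), Node("SB0.DB1")])
print(tree.handle("SB0.DB1", "set_threshold 35"))
```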

  19. Evolution of the Hubble Space Telescope Safing Systems

    NASA Technical Reports Server (NTRS)

    Pepe, Joyce; Myslinski, Michael

    2006-01-01

    The Hubble Space Telescope (HST) was launched on April 24 1990, with an expected lifespan of 15 years. Central to the spacecraft design was the concept of a series of on-orbit shuttle servicing missions permitting astronauts to replace failed equipment, update the scientific instruments and keep the HST at the forefront of astronomical discoveries. One key to the success of the Hubble mission has been the robust Safing systems designed to monitor the performance of the observatory and to react to keep the spacecraft safe in the event of equipment anomaly. The spacecraft Safing System consists of a range of software tests in the primary flight computer that evaluate the performance of mission critical hardware, safe modes that are activated when the primary control mode is deemed inadequate for protecting the vehicle, and special actions that the computer can take to autonomously reconfigure critical hardware. The HST Safing System was structured to autonomously detect electrical power system, data management system, and pointing control system malfunctions and to configure the vehicle to ensure safe operation without ground intervention for up to 72 hours. There is also a dedicated safe mode computer that constantly monitors a keep-alive signal from the primary computer. If this signal stops, the safe mode computer shuts down the primary computer and takes over control of the vehicle, putting it into a safe, low-power configuration. The HST Safing system has continued to evolve as equipment has aged, as new hardware has been installed on the vehicle, and as the operation modes have matured during the mission. Along with the continual refinement of the limits used in the safing tests, several new tests have been added to the monitoring system, and new safe modes have been added to the flight software. This paper will focus on the evolution of the HST Safing System and Safing tests, and the importance of this evolution to prolonging the science operations of the telescope.

  20. Automation of Precise Time Reference Stations (PTRS)

    NASA Astrophysics Data System (ADS)

    Wheeler, P. J.

    1985-04-01

    The U.S. Naval Observatory is presently engaged in a program of automating precise time stations (PTS) and precise time reference stations (PTRS) by using a versatile mini-computer controlled data acquisition system (DAS). The data acquisition system is configured to monitor locally available PTTI signals such as LORAN-C, OMEGA, and/or the Global Positioning System. In addition, the DAS performs local standard intercomparison. Computer telephone communications provide automatic data transfer to the Naval Observatory. Subsequently, after analysis of the data, results and information can be sent back to the precise time reference station to provide automatic control of remote station timing. The DAS configuration is designed around state-of-the-art standard industrial high-reliability modules. The system integration and software are standardized but allow considerable flexibility to satisfy special local requirements such as stability measurements, performance evaluation, and printing of messages and certificates. The DAS operates completely independently and may be queried or controlled at any time with a computer or terminal device (control is protected for use by authorized personnel only). Such DAS-equipped PTS are operational in Hawaii, California, Texas and Florida.

  1. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

    This paper describes the design and implementation of the IPG Execution Service, which reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture, whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can then locate instances of this platform, configure the platform as required for the application, and execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when other tasks fail, a frequent occurrence in a large distributed system, or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing with local scheduling systems.
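
    The sketch below shows one way such a job might be described: a list of tasks, each with a user-defined starting condition, including a cleanup task that runs when another task fails or is cancelled. The schema and task names are hypothetical, not the IPG Execution Service's actual job description format.

```python
# Hypothetical job description with user-defined starting conditions.

job = {
    "tasks": [
        {"name": "stage_in",  "run": "copy input files",   "start_when": "job_start"},
        {"name": "configure", "run": "set up environment", "start_when": "stage_in:success"},
        {"name": "app",       "run": "run application",    "start_when": "configure:success"},
        {"name": "stage_out", "run": "copy results",       "start_when": "app:success"},
        {"name": "cleanup",   "run": "remove scratch",     "start_when": "app:failure_or_cancel"},
    ]
}

def ready_tasks(job, events):
    """Return the tasks whose starting condition is satisfied by observed events."""
    return [t["name"] for t in job["tasks"] if t["start_when"] in events]

print(ready_tasks(job, events={"job_start", "stage_in:success"}))
# -> ['stage_in', 'configure']
```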

  2. Development and evaluation of a Fault-Tolerant Multiprocessor (FTMP) computer. Volume 2: FTMP software

    NASA Technical Reports Server (NTRS)

    Lala, J. H.; Smith, T. B., III

    1983-01-01

    The software developed for the Fault-Tolerant Multiprocessor (FTMP) is described. The FTMP executive is a timer-interrupt driven dispatcher that schedules iterative tasks which run at 3.125, 12.5, and 25 Hz. Major tasks which run under the executive include system configuration control, flight control, and display. The flight control task includes autopilot and autoland functions for a jet transport aircraft. System Displays include status displays of all hardware elements (processors, memories, I/O ports, buses), failure log displays showing transient and hard faults, and an autopilot display. All software is in a higher order language (AED, an ALGOL derivative). The executive is a fully distributed general purpose executive which automatically balances the load among available processor triads. Provisions for graceful performance degradation under processing overload are an integral part of the scheduling algorithms.
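
    The rate-group structure of the executive can be illustrated as below: a base tick at 25 Hz dispatches the fast group every tick, the 12.5 Hz group every second tick, and the 3.125 Hz group every eighth tick. This is a generic sketch of a timer-driven rate-group dispatcher; the real FTMP executive is written in AED and also balances work across processor triads.

```python
import time

# Generic rate-group dispatcher sketch (not the FTMP executive itself).
# Base tick = 25 Hz; divisors of 2 and 8 give 12.5 Hz and 3.125 Hz groups.
RATE_GROUPS = {
    1: ["flight_control_fast"],                 # 25 Hz
    2: ["autopilot", "autoland"],               # 12.5 Hz
    8: ["configuration_control", "displays"],   # 3.125 Hz
}

def dispatch(tick):
    for divisor, tasks in RATE_GROUPS.items():
        if tick % divisor == 0:
            for task in tasks:
                print(f"tick {tick}: run {task}")

def run(base_period=1 / 25, ticks=16):
    for tick in range(ticks):
        dispatch(tick)
        time.sleep(base_period)   # stands in for the timer interrupt

if __name__ == "__main__":
    run()
```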

  3. Creating and Testing Simulation Software

    NASA Technical Reports Server (NTRS)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provide a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
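
    As a concrete illustration of the test-first idea discussed above, the short unit test below exercises a small conversion routine at its endpoints and on an out-of-range input. The function and test are generic examples, not part of the SCCS PLCIF code.

```python
import unittest

def scale_reading(raw, lo, hi):
    """Convert a 12-bit raw sensor count into engineering units (example only)."""
    if not 0 <= raw <= 4095:
        raise ValueError("raw count out of range")
    return lo + (hi - lo) * raw / 4095

class ScaleReadingTest(unittest.TestCase):
    def test_endpoints(self):
        self.assertEqual(scale_reading(0, 0.0, 100.0), 0.0)
        self.assertEqual(scale_reading(4095, 0.0, 100.0), 100.0)

    def test_out_of_range_rejected(self):
        with self.assertRaises(ValueError):
            scale_reading(5000, 0.0, 100.0)

if __name__ == "__main__":
    unittest.main()
```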

  4. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  5. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUNSPARCs' network with PVM (Parallel Virtual Machine) which is a software system for linking clusters of machines. Second, a set of three basic applications were selected. The applications consist of a parallel search, a parallel sort, a parallel matrix multiplication. These application programs were implemented in C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes in many cases is the restricting factor to performance. That is, coarse-grain parallelism which requires less frequent communication between processes will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.
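
    The speedup metric referred to above is the usual ratio of single-machine to p-machine elapsed time, with parallel efficiency as its normalized form:

```latex
\[
  S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}
\]
```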

  6. LIBS data analysis using a predictor-corrector based digital signal processor algorithm

    NASA Astrophysics Data System (ADS)

    Sanders, Alex; Griffin, Steven T.; Robinson, Aaron

    2012-06-01

    There are many accepted sensor technologies for generating spectra for material classification. Once the spectra are generated, communication bandwidth limitations favor local material classification with its attendant reduction in data transfer rates and power consumption. Transferring sensor technologies such as Cavity Ring-Down Spectroscopy (CRDS) and Laser Induced Breakdown Spectroscopy (LIBS) require effective material classifiers. A result of recent efforts has been emphasis on Partial Least Squares - Discriminant Analysis (PLS-DA) and Principal Component Analysis (PCA). Implementation of these via general purpose computers is difficult in small portable sensor configurations. This paper addresses the creation of a low-mass, low-power, robust hardware spectra classifier for a limited set of predetermined materials in an atmospheric matrix. Crucial to this is the incorporation of PCA or PLS-DA classifiers into a predictor-corrector style implementation. The system configuration guarantees rapid convergence. Software running on multi-core Digital Signal Processors (DSPs) simulates a stream-lined plasma physics model estimator, reducing Analog-to-Digital Converter (ADC) power requirements. This paper presents the results of a predictor-corrector model implemented on a low-power multi-core DSP to perform substance classification. This configuration emphasizes the hardware system and software design via a predictor-corrector model that simultaneously decreases the sample rate while performing the classification.

  7. Spaceport Processing System Development Lab

    NASA Technical Reports Server (NTRS)

    Dorsey, Michael

    2013-01-01

    The Spaceport Processing System Development Lab (SPSDL), developed and maintained by the Systems Hardware and Engineering Branch (NE-C4), is a development lab with its own private/restricted networks. A private/restricted network is a network with restricted or no communication with other networks. This allows users from different groups to work on their own projects in their own configured environment without interfering with others utilizing the resources in the lab. The different networks in the lab have no way to talk to each other because of the way they are configured, so how a user configures software, the operating system, or the equipment does not interfere with or carry over to any of the other networks in the lab. The SPSDL is available to any project at KSC that is in need of a lab environment. My job in the SPSDL was to assist in maintaining the lab to make sure it is accessible to users. This includes, but is not limited to, making sure the computers in the lab are running properly and patched with updated hardware/software. In addition, I assisted users who had issues utilizing the resources in the lab, which could include helping to configure a restricted network for their own environment. All of this was to ensure workers were able to use the SPSDL to work on their projects without difficulty, which would in turn benefit the work done throughout KSC. When I was not working in the SPSDL, I helped other coworkers with smaller tasks which included, but were not limited to, the proper disposal of, moving of, or search for essential equipment. During my free time, I also used NASA's resources to increase my knowledge and skills in a variety of subjects related to my major as a computer engineer, particularly UNIX, networking, and embedded systems.

  8. FPGA-based voltage and current dual drive system for high frame rate electrical impedance tomography.

    PubMed

    Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan

    2015-04-01

    Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process, and requires rigorous testing to ensure proper functioning of new modules with the existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments' (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to interchangeably apply a current or voltage signal, and measure the tissue response in a semi-parallel fashion. A novel signal-averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute a tissue's response to a pure or mixed tone signal. Signal bins' data can be used for traditional applications, as well as synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality, and can be used to ensure proper tissue-electrode contact. Allocation of these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high-frame-rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames/s, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error was found to be for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline-filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data.
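
    The bin-classification step described above can be sketched as follows: bins at the known stimulus frequencies are treated as signal, the remaining bins as noise, and the noise bins yield a noise-power figure of merit. The windowing and the "known-tone" selection rule here are assumptions for illustration, not the paper's exact FPGA implementation.

```python
import numpy as np

# Illustrative FFT bin classification and noise-power estimate (host-side
# sketch of the idea; the paper implements this on the FPGA).

def classify_bins(samples, fs, tone_freqs, nfft=512):
    spectrum = np.fft.rfft(samples[:nfft] * np.hanning(nfft))
    power = np.abs(spectrum) ** 2
    signal_idx = [int(round(f * nfft / fs)) for f in tone_freqs]  # bins of the known tones
    noise_mask = np.ones_like(power, dtype=bool)
    noise_mask[signal_idx] = False
    noise_power = power[noise_mask].mean()
    snr_db = 10 * np.log10(power[signal_idx].sum() / noise_power)
    return power[signal_idx], noise_power, snr_db
```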

  9. MacDoctor: The Macintosh diagnoser

    NASA Technical Reports Server (NTRS)

    Lavery, David B.; Brooks, William D.

    1990-01-01

    When the Macintosh computer was first released, the primary user was a computer hobbyist who typically had a significant technical background and was highly motivated to understand the internal structure and operational intricacies of the computer. In recent years the Macintosh computer has become a widely-accepted general purpose computer which is being used by an ever-increasing non-technical audience. This has led to a large base of users which has neither the interest nor the background to understand what is happening 'behind the scenes' when the Macintosh is put to use - or what should be happening when something goes wrong. Additionally, the Macintosh itself has evolved from a simple closed design to a complete family of processor platforms and peripherals with a tremendous number of possible configurations. With the increasing popularity of the Macintosh series, software and hardware developers are producing a product for every user's need. As the complexity of configuration possibilities grows, experienced or even expert knowledge is required to diagnose problems. This presents a problem to uneducated or casual users. This problem indicates a new Macintosh consumer need; that is, a diagnostic tool able to determine the problem for the user. As the volume of Macintosh products has increased, this need has also increased.

  10. A cloud-based workflow to quantify transcript-expression levels in public cancer compendia

    PubMed Central

    Tatlow, PJ; Piccolo, Stephen R.

    2016-01-01

    Public compendia of sequencing data are now measured in petabytes. Accordingly, it is infeasible for researchers to transfer these data to local computers. Recently, the National Cancer Institute began exploring opportunities to work with molecular data in cloud-computing environments. With this approach, it becomes possible for scientists to take their tools to the data and thereby avoid large data transfers. It also becomes feasible to scale computing resources to the needs of a given analysis. We quantified transcript-expression levels for 12,307 RNA-Sequencing samples from the Cancer Cell Line Encyclopedia and The Cancer Genome Atlas. We used two cloud-based configurations and examined the performance and cost profiles of each configuration. Using preemptible virtual machines, we processed the samples for as little as $0.09 (USD) per sample. As the samples were processed, we collected performance metrics, which helped us track the duration of each processing step and quantified computational resources used at different stages of sample processing. Although the computational demands of reference alignment and expression quantification have decreased considerably, there remains a critical need for researchers to optimize preprocessing steps. We have stored the software, scripts, and processed data in a publicly accessible repository (https://osf.io/gqrz9). PMID:27982081

  11. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  12. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  13. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported on the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SLC4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using python scripts implementing the Amazon EC2/S3 API via the boto library working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
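
    A minimal client-side sketch of launching a worker instance with the boto (version 2) EC2 API mentioned above is shown below. The region, AMI id, key name, and instance type are placeholders, and the S3/SimpleDB bookkeeping described in the abstract is omitted.

```python
import time
import boto.ec2

# Launch one worker from a prepared AMI and wait for it to come up.
# Placeholders: region, AMI id, key name, and instance type.
conn = boto.ec2.connect_to_region("us-east-1")

reservation = conn.run_instances(
    "ami-00000000",              # placeholder for the SL4-based ATLAS AMI
    instance_type="m1.large",
    key_name="atlas-key",
)
instance = reservation.instances[0]

while instance.update() != "running":
    time.sleep(10)               # poll until the instance is running

# From here the scripts embedded in the AMI take over: install the release
# kit, run the job, and export output to S3.
print("worker started:", instance.id, instance.public_dns_name)
```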

  14. TASK ALLOCATION IN GEO-DISTRIBUTED CYBER-PHYSICAL SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aggarwal, Rachit; Smidts, Carol

    This paper studies the task allocation algorithm for a distributed test facility (DTF), which aims to assemble geo-distributed cyber (software) and physical (hardware-in-the-loop) components into a prototype cyber-physical system (CPS). This allows low-cost testing on an early conceptual prototype (ECP) of the ultimate CPS (UCPS) to be developed. The DTF provides an instrumentation interface for carrying out reliability experiments remotely, such as fault propagation analysis and in-situ testing of hardware and software components in a simulated environment. Unfortunately, the geo-distribution introduces an overhead that is not inherent to the UCPS, i.e., a significant time delay in communication that threatens the stability of the ECP and is not an appropriate representation of the behavior of the UCPS. This can be mitigated by implementing a task allocation algorithm to find a suitable configuration and assign the software components to appropriate computational locations, dynamically. This would allow the ECP to operate more efficiently with less probability of being unstable due to the delays introduced by geo-distribution. The task allocation algorithm proposed in this work uses a Monte Carlo approach along with Dynamic Programming to identify the optimal network configuration to keep the time delays to a minimum.
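
    The Monte Carlo part of that idea can be sketched as below: randomly assign the movable software components to candidate sites, score each assignment by the communication delay it incurs on the expected traffic, and keep the best. The sites, delay matrix, traffic pattern, and component names are made up for illustration, and the dynamic-programming refinement is omitted.

```python
import random

# Toy Monte Carlo task allocation: hardware-in-the-loop components are pinned
# to their labs; software components are placed to minimize total link delay.
SITES = ["lab_A", "lab_B", "lab_C"]
DELAY_MS = {("lab_A", "lab_B"): 40, ("lab_A", "lab_C"): 90, ("lab_B", "lab_C"): 60}
FIXED = {"plant_hw": "lab_A", "operator_console": "lab_C"}
TRAFFIC = [("controller", "plant_hw"),
           ("controller", "operator_console"),
           ("fault_injector", "plant_hw")]

def link_delay(a, b):
    return 0 if a == b else DELAY_MS[tuple(sorted((a, b)))]

def score(assignment):
    return sum(link_delay(assignment[u], assignment[v]) for u, v in TRAFFIC)

def monte_carlo_allocate(movable, trials=10_000, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        candidate = dict(FIXED)
        candidate.update({c: rng.choice(SITES) for c in movable})
        cost = score(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

print(monte_carlo_allocate(["controller", "fault_injector"]))
```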

  15. Virtual network computing: cross-platform remote display and collaboration software.

    PubMed

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  16. Medical Signal-Conditioning and Data-Interface System

    NASA Technical Reports Server (NTRS)

    Braun, Jeffrey; Jacobus, Charles; Booth, Scott; Suarez, Michael; Smith, Derek; Hartnagle, Jeffrey; LePrell, Glenn

    2006-01-01

    A general-purpose portable, wearable electronic signal-conditioning and data-interface system is being developed for medical applications. The system can acquire multiple physiological signals (e.g., electrocardiographic, electroencephalographic, and electromyographic signals) from sensors on the wearer's body, digitize those signals that are received in analog form, preprocess the resulting data, and transmit the data to one or more remote location(s) via a radiocommunication link and/or the Internet. The system includes a computer running data-object-oriented software that can be programmed to configure the system to accept almost any analog or digital input signals from medical devices. The computing hardware and software implement a general-purpose data-routing-and-encapsulation architecture that supports tagging of input data and routing the data in a standardized way through the Internet and other modern packet-switching networks to one or more computer(s) for review by physicians. The architecture supports multiple-site buffering of data for redundancy and reliability, and supports both real-time and slower-than-real-time collection, routing, and viewing of signal data. Routing and viewing stations support insertion of automated analysis routines to aid in encoding, analysis, viewing, and diagnosis.
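
    The tag-and-route idea described above can be pictured with a small sketch: each block of samples is wrapped with identifying tags before it enters the packet-switching network. The field names and packet format below are assumptions for illustration, not the system's actual protocol.

        # Generic tag-and-encapsulate illustration (field names are assumed).
        import json
        import time

        def encapsulate(channel, samples, destination):
            """Wrap raw sensor samples with routing and provenance tags."""
            return json.dumps({
                "channel": channel,           # e.g. "ECG_lead_II"
                "timestamp": time.time(),     # acquisition time, seconds since epoch
                "destination": destination,   # routing tag for the viewing station
                "samples": samples,           # digitized signal values
            })

        packet = encapsulate("ECG_lead_II", [0.12, 0.15, 0.11], "cardiology_review")
        print(packet)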

  17. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designers' effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  18. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
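
    The "work farm" pattern described above has a direct analogue in ordinary parallel programming: one input stream is fanned out to a pool of identical workers and the results are collected into one output stream. The sketch below shows that analogue in Python; on the MPPA the workers are objects running on the processor array rather than operating-system processes, so this is only an illustration of the pattern.

        # Work-farm pattern: one input stream, a pool of workers, one output stream.
        from multiprocessing import Pool

        def worker(block):
            # Stand-in for per-block processing (e.g. compressing one video block).
            return sum(block)

        if __name__ == "__main__":
            input_stream = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
            with Pool(processes=4) as pool:
                output_stream = pool.map(worker, input_stream)  # preserves input order
            print(output_stream)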

  19. Software for Collaborative Engineering of Launch Rockets

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas Troy

    2003-01-01

    The Rocket Evaluation and Cost Integration for Propulsion and Engineering (RECIPE) software enables collaborative computing with automated exchange of information in the design and analysis of launch rockets and other complex systems. RECIPE can interact with and incorporate a variety of programs, including legacy codes, that model aspects of a system from the perspectives of different technological disciplines (e.g., aerodynamics, structures, propulsion, trajectory, aeroheating, controls, and operations) and that are used by different engineers on different computers running different operating systems. RECIPE consists mainly of (1) ISCRM, a file-transfer subprogram that makes it possible for legacy codes executed in their original operating systems on their original computers to exchange data, and (2) CONES, an easy-to-use file-wrapper subprogram that enables the integration of legacy codes. RECIPE provides a tightly integrated conceptual framework that emphasizes connectivity among the programs used by the collaborators, linking these programs in a manner that provides some configuration control while facilitating collaborative engineering tradeoff studies, including design-to-cost studies. In comparison with prior collaborative-engineering schemes, one based on the use of RECIPE enables fewer engineers to do more in less time.

  20. Proof of Concept Study of Trade Space Configuration Tool for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Glidden, Geoffrey L.

    2009-01-01

    Spacecraft design is a very difficult and time-consuming process because requirements and criteria are often changed or modified as the design is refined. Accounting for these adjustments in the design constraints plays a significant role in furthering the overall progress. There are numerous aspects and variables that hold significant influence on various characteristics of the design. This can be especially frustrating when attempting to conduct rapid trade space analysis on system configurations. Currently, the data and designs considered for trade space evaluations can only be displayed by using the traditional interfaces of Excel spreadsheets or CAD (Computer Aided Design) models. While helpful, these methods of analyzing the data from a systems engineering approach can be rather complicated and overwhelming. As a result, a proof of concept was conducted on dynamic data-visualization software called Thinkmap SDK (Software Developer Kit) to allow for better organization and understanding of the relationships between the various aspects that make up an entire design. The Orion Crew Module Aft Bay Subsystem was used as the test case for this study because the design and layout of many of the subsystem components will be significant in ensuring the overall center of gravity of the capsule is correct. A simplified model of this subsystem was created and programmed using Thinkmap SDK to create a preliminary prototype application of a Trade Space Configuration Tool. The completed application confirms that the core requirements for the tool can be met. Further development is strongly suggested to produce a full prototype application to allow final evaluations and recommendations of the software capabilities.

  1. Architecture and Implementation of OpenPET Firmware and Embedded Software

    DOE PAGES

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; ...

    2016-01-11

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules, not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics: a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method is adopted for the platform's hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from up to eight Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or the runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.
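
    A quick back-of-envelope check of the channel counts quoted above: with up to eight Detector Boards per Support Board, the totals imply the number of Support Boards in each configuration. The board counts printed below are an inference from the abstract's figures, not numbers stated in it.

        # Implied Support Board counts, inferred from the quoted channel totals.
        def support_boards_needed(total_channels, channels_per_detector_board,
                                  detector_boards_per_support_board=8):
            channels_per_support_board = (channels_per_detector_board
                                          * detector_boards_per_support_board)
            return total_channels // channels_per_support_board

        print(support_boards_needed(1024, 16))    # Standard System Configuration -> 8
        print(support_boards_needed(16384, 32))   # Large System Configuration -> 64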

  2. Metronome LKM: An open source virtual keyboard driver to measure experiment software latencies.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A

    2017-10-01

    Experiment software is often used to measure reaction times gathered with keyboards or other input devices. In previous studies, the accuracy and precision of time stamps have been assessed through several means: (a) generating accurate square wave signals from an external device connected to the parallel port of the computer running the experiment software, (b) triggering the typematic repeat feature of some keyboards to get an evenly separated series of keypress events, or (c) using a solenoid handled by a microcontroller to press the input device (keyboard, mouse button, touch screen) that will be used in the experimental setup. Despite the advantages of these approaches in some contexts, none of them can isolate the measurement error caused by the experiment software itself. Metronome LKM provides a virtual keyboard for assessing experiment software. Using this open source driver, researchers can generate keypress events using high-resolution timers and compare the time stamps collected by the experiment software with those gathered by Metronome LKM (with nanosecond resolution). Our software is highly configurable (in terms of keys pressed, intervals, and SysRq activation) and runs on Linux kernels 2.6 through 4.8.
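
    The comparison step described above amounts to differencing two timestamp streams: the generation times reported by the virtual keyboard and the times recorded by the experiment software. The sketch below shows that analysis with made-up numbers; it is an illustration of the post-processing a researcher might run, not part of Metronome LKM itself.

        # Compare generated vs. recorded keypress timestamps (illustrative values).
        def mean_latency_ms(generated_ns, recorded_ns):
            """Mean offset between generated and recorded times, in milliseconds."""
            deltas = [(r - g) / 1e6 for g, r in zip(generated_ns, recorded_ns)]
            return sum(deltas) / len(deltas)

        generated = [0, 100_000_000, 200_000_000]          # nanoseconds
        recorded  = [1_800_000, 102_100_000, 201_900_000]  # nanoseconds
        print(round(mean_latency_ms(generated, recorded), 3), "ms")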

  3. ACTS: from ATLAS software towards a common track reconstruction software

    NASA Astrophysics Data System (ADS)

    Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration

    2017-10-01

    Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.

  4. Spectral Graph Theory Analysis of Software-Defined Networks to Improve Performance and Security

    DTIC Science & Technology

    2015-09-01

    listed with its associated IP address. 3. Hardware Components The hardware in the test bed included HP switches and Raspberry Pis. Two types of...discernible difference between the two types. The hosts in the network are Raspberry Pis [58], which are small, inexpensive computers with 10/100... Pis ran one of four operating systems: Raspbian, ArchLinux, Kali, and Windows 10. All of the Raspberry Pis were configured with Iperf [59

  5. Telemetry Data Collection from Oscar Satellite

    NASA Technical Reports Server (NTRS)

    Haddock, Paul C.; Horan, Stephen

    1998-01-01

    This paper discusses the design, configuration, and operation of a satellite station built for the Center for Space Telemetering and Telecommunications Laboratory in the Klipsch School of Electrical and Computer Engineering at New Mexico State University (NMSU). This satellite station consists of a computer-controlled antenna tracking system, a 2m/70cm transceiver, satellite tracking software, and a demodulator. The satellite station receives satellite telemetry, allows for voice communications, and will be used in future classes. Currently this satellite station is receiving telemetry from an amateur radio satellite, UoSAT-OSCAR-11. Amateur radio satellites are referred to as Orbiting Satellites Carrying Amateur Radio (OSCAR) satellites, as discussed in the next section.

  6. Holliday Triangle Hunter (HolT Hunter): Efficient Software for Identifying Low Strain DNA Triangular Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherman, W.B.

    2012-04-16

    Synthetic DNA nanostructures are typically held together primarily by Holliday junctions. One of the most basic types of structures possible to assemble with only DNA and Holliday junctions is the triangle. To date, however, only equilateral triangles have been assembled in this manner - primarily because it is difficult to figure out which configurations of Holliday triangles have low strain. Early attempts at identifying such configurations relied upon calculations that followed the strained helical paths of DNA. Those methods, however, were computationally expensive and failed to find many of the possible solutions. I have developed a new approach to identifying Holliday triangles that is computationally faster and finds well over 95% of the possible solutions. The new approach is based on splitting the problem into two parts. The first part involves figuring out all the different ways that three featureless rods of the appropriate length and diameter can weave over and under one another to form a triangle. The second part of the computation entails seeing whether double-helical DNA backbones can fit into the shape dictated by the rods in such a manner that the strands can cross over from one domain to the other at the appropriate spots. Structures with low strain (that is, a good fit between the rods and the helices) on all three edges are recorded as promising for assembly.

  7. Digital data processing system dynamic loading analysis

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Tucker, A. E.

    1976-01-01

    Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.

  8. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  9. Hydrodynamical analysis of the effect of fish fins morphology

    NASA Astrophysics Data System (ADS)

    Azwadi Che Sidik, Nor; Yen, Tey Wah

    2013-12-01

    Previous work on the biomechanics of fishes has focused on the locomotion effect of the fish body. However, there is a notable gap in understanding the respective functions of the fins when the fish holds a static posture and is exposed to fluid flow. Accordingly, this paper's focus is to investigate the hydrodynamic effect of fin configuration on the fluid flow around a shark-inspired structure. The drag and lift coefficients are computed for different cases of fin addition and configuration. The k-epsilon turbulence model is deployed using the finite volume method with the aid of the commercial software ANSYS CFX. The findings will demystify some of the functions of the fish's fins in terms of their contribution to the hydrodynamic flow around the fish.
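
    For reference, drag and lift coefficients such as those reported in studies like this one follow the conventional definitions C_D = 2 F_D / (rho U^2 A) and C_L = 2 F_L / (rho U^2 A). The sketch below applies them with made-up numbers; the choice of reference area (frontal area here) is an assumption that depends on the study's conventions.

        # Conventional drag/lift coefficient definitions (illustrative values only).
        def drag_coefficient(drag_force, density, speed, ref_area):
            return 2.0 * drag_force / (density * speed**2 * ref_area)

        def lift_coefficient(lift_force, density, speed, ref_area):
            return 2.0 * lift_force / (density * speed**2 * ref_area)

        # Hypothetical case: 0.8 N drag on a 0.02 m^2 frontal area in water at 1 m/s.
        print(drag_coefficient(0.8, 998.0, 1.0, 0.02))
        print(lift_coefficient(0.3, 998.0, 1.0, 0.02))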

  10. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

    High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  11. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman, Carol S.; Benzinger, Leonora; Beshers, George; Hammerslag, David; Kimball, John; Kirslis, Peter A.; Render, Hal; Richards, Paul; Terwilliger, Robert

    1985-01-01

    The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. The SAGA system consists of a small number of software components that are adapted by the meta-tools into specific tools for use in the software development application. The modules are designed so that the meta-tools can construct an environment which is both integrated and flexible. The SAGA project is documented in several papers which are presented.

  12. Robotics On-Board Trainer (ROBoT)

    NASA Technical Reports Server (NTRS)

    Johnson, Genevieve; Alexander, Greg

    2013-01-01

    ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time HIL dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.

  13. An Inviscid Computational Study of an X-33 Configuration at Hypersonic Speeds

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1999-01-01

    This report documents the results of a study conducted to compute the inviscid longitudinal aerodynamic characteristics of a simplified X-33 configuration. The major components of the X-33 vehicle, namely the body, the canted fin, the vertical fin, and the body-flap, were simulated in the CFD (Computational Fluid Dynamics) model. The rear-ward facing surfaces at the base, including the aerospike engine surfaces, were not simulated. The FELISA software package, consisting of an unstructured surface and volume grid generator and two inviscid flow solvers, was used for this study. Computations were made for Mach 4.96, 6.0, and 10.0 with the perfect gas air option, and for Mach 10 with the equilibrium air option at the flow conditions of a typical point on the X-33 flight trajectory. Computations were also made with the CF4 gas option at Mach 6.0 to simulate the CF4 tunnel flow condition. An angle-of-attack range of 12 to 48 deg was covered. The CFD results were compared with available wind tunnel data. The comparison was good at low angles of attack; at higher angles of attack (beyond 25 deg) some differences were found in the pitching moment. These differences progressively increased with increasing angle of attack and are attributed to viscous effects. However, the computed results showed the trends exhibited by the wind tunnel data.

  14. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, as well as to provide a modern tool for JINR users for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to fit local needs: a web form in the cloud web-interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs in capacity, availability, and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  15. Simulation of Blood flow in Different Configurations Design of Bi-leaflet Mechanical Heart Valve

    NASA Astrophysics Data System (ADS)

    Hafizah Mokhtar, N.; Abas, Aizat

    2018-05-01

    In this work, two different designs of artificial heart valve were devised and then compared by considering the thrombosis, wear, and valve-orifice-to-anatomical-orifice ratio of each mechanical heart valve. These bi-leaflet mechanical heart valve design configurations were created using computer-aided design (CAD) modelling and simulated using computational fluid dynamics (CFD) software. Design 1 is based on an existing conventional bi-leaflet valve and design 2 on a modified bi-leaflet valve. Flow pattern, velocity, vorticity, and stress analyses were performed to identify the best design. Based on the results, both designs show a Doppler velocity index of less than the allowable standard of 2.2, which makes them safe to use as replacements for the human heart valve. However, design 2 shows a lower possibility of cavitation, which leads to a lower risk of thrombosis, and provides a better central flow area of blood compared to design 1.

  16. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    NASA Astrophysics Data System (ADS)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
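
    For context only (this is standard background recalled here, not an equation quoted from the abstract), a commonly used dimensionless form of the nonlinear PB equation that solvers of this kind discretize is, up to unit and sign conventions:

        \nabla \cdot \left[ \epsilon(\mathbf{x}) \, \nabla u(\mathbf{x}) \right]
            - \bar{\kappa}^{2}(\mathbf{x}) \, \sinh u(\mathbf{x})
            = -\frac{4 \pi e_{c}^{2}}{k_{B} T} \sum_{i=1}^{N_{m}} z_{i} \, \delta(\mathbf{x} - \mathbf{x}_{i}),
        \qquad
        u(\mathbf{x}) = \frac{e_{c} \, \phi(\mathbf{x})}{k_{B} T},

    where \epsilon(\mathbf{x}) is the position-dependent dielectric coefficient, \bar{\kappa}^{2}(\mathbf{x}) is the modified Debye-Huckel screening term, and the sum runs over the N_{m} fixed partial charges z_{i} e_{c} of the biomolecule.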

  17. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    PubMed Central

    Vergara-Perez, Sandra; Marucho, Marcelo

    2015-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules. PMID:26924848

  18. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    PubMed

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  19. Upgrades and Modifications of the NASA Ames HFFAF Ballistic Range

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.; Wilder, Michael C.; Cornelison, Charles J.; Perez, Alfredo J.

    2017-01-01

    The NASA Ames Hypervelocity Free Flight Aerodynamics Facility ballistic range is described. The various configurations of the shadowgraph stations are presented. This includes the original stations with film and configurations with two different types of digital cameras. Resolution tests for the three shadowgraph station configurations are described. The advantages of the digital cameras are discussed, including the immediate availability of the shadowgraphs. The final shadowgraph station configuration is a mix of 26 Nikon cameras and 6 PI-MAX2 cameras. Two types of trigger light sheet stations are described: visible and IR. The two gunpowders used for the NASA Ames 6.251.50 light gas guns are presented. These are the Hercules HC-33-FS powder (no longer available) and the St. Marks Powder WC 886 powder. The results from eight proof shots for the two powders are presented. Both muzzle velocities and piston velocities are 5 to 9 percent lower for the new St. Marks WC 886 powder than for the old Hercules HC-33-FS powder. The experimental and CFD (computational) piston and muzzle velocities are in good agreement. Shadowgraph-reading software that employs template-matching pattern recognition to locate the ballistic-range model is described. Templates are generated from a 3D solid model of the ballistic-range model. The accuracy of the approach is assessed using a set of computer-generated test images.
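
    Template matching of the kind mentioned above can be illustrated with OpenCV: a rendered template of the model is slid across the shadowgraph and the correlation peak gives the model's location. This is a generic sketch, not the facility's shadowgraph-reading software, and the file names are placeholders.

        # Template matching with normalized cross-correlation (illustrative only).
        import cv2

        shadowgraph = cv2.imread("shadowgraph.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("model_template.png", cv2.IMREAD_GRAYSCALE)

        # The score map peaks where the template best matches the image.
        scores = cv2.matchTemplate(shadowgraph, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        print("best match at", max_loc, "with score", max_val)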

  20. GSC configuration management plan

    NASA Technical Reports Server (NTRS)

    Withers, B. Edward

    1990-01-01

    The tools and methods used for the configuration management of the artifacts (including software and documentation) associated with the Guidance and Control Software (GCS) project are described. The GCS project is part of a software error studies research program. Three implementations of GCS are being produced in order to study the fundamental characteristics of the software failure process. The Code Management System (CMS) is used to track and retrieve versions of the documentation and software. Application of the CMS for this project is described and the numbering scheme is delineated for the versions of the project artifacts.

  1. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
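
    The parameter-fitting task that SBSI packages into SBSINumerics, SBSIDispatcher, and SBSIVisual can be pictured with a minimal, stand-alone sketch: adjust model parameters until simulated output matches experimental data. The sketch below uses scipy directly with a made-up one-parameter model and made-up data; it is not SBSI's API.

        # Minimal parameter-fitting illustration (not SBSI code).
        import numpy as np
        from scipy.optimize import least_squares

        def model(t, k):
            # Hypothetical one-parameter exponential decay model.
            return np.exp(-k * t)

        t_data = np.array([0.0, 1.0, 2.0, 3.0])
        y_data = np.array([1.00, 0.60, 0.37, 0.22])   # made-up "experimental" data

        def residuals(params):
            return model(t_data, params[0]) - y_data

        fit = least_squares(residuals, x0=[0.1])
        print("fitted rate constant:", fit.x[0])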

  2. A computer controlled television detector for light, X-rays and particles

    NASA Technical Reports Server (NTRS)

    Kalata, K.

    1981-01-01

    A versatile, high resolution, software configurable, two-dimensional intensified vidicon quantum detector system has been developed for multiple research applications. A thin phosphor convertor allows the detection of X-rays below 20 keV and non-relativistic particles in addition to visible light, and a thicker scintillator can be used to detect X-rays up to 100 keV and relativistic particles. Faceplates may be changed to allow any active area from 1 to 40 mm square, and active areas up to 200 mm square are possible. The image is integrated in a digital memory on any software specified array size up to 4000 x 4000. The array size is selected to match the spatial resolution, which ranges from 10 to 100 microns depending on the operating mode, the active area, and the photon or particle energy. All scan and data acquisition parameters are under software control to allow optimal data collection for each application.

  3. Design document for the MOODS Data Management System (MDMS), version 1.0

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The MOODS Data Management System (MDMS) provides access to the Master Oceanographic Observation Data Set (MOODS) which is maintained by the Naval Oceanographic Office (NAVOCEANO). The MDMS incorporates database technology in providing seamless access to parameter (temperature, salinity, soundspeed) vs. depth observational profile data. The MDMS is an interactive software application with a graphical user interface (GUI) that supports user control of MDMS functional capabilities. The purpose of this document is to define and describe the structural framework and logical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as MDMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by the Naval Oceanographic Office (NAVOCEANO) and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).

  4. Space Station Mission Planning Study (MPS) development study. Volume 3: Software development plan

    NASA Technical Reports Server (NTRS)

    Klus, W. L.

    1987-01-01

    A software development plan is presented for the definition, design, and implementation of the Space Station (SS) Payload Mission Planning System (MPS). This plan is an evolving document and must be updated periodically as the SS design and operations concepts as well as the SS MPS concept evolve. The major segments of this plan are as follows: an overview of the SS MPS and a description of its required capabilities including the computer programs identified as configurable items with an explanation of the place and function of each within the system; an overview of the project plan and a detailed description of each development project activity breaking each into lower level tasks where applicable; identification of the resources required and recommendations for the manner in which they should be utilized including recommended schedules and estimated manpower requirements; and a description of the practices, standards, and techniques recommended for the SS MPS Software (SW) development.

  5. Design document for the Surface Currents Data Base (SCDB) Management System (SCDBMS), version 1.0

    NASA Technical Reports Server (NTRS)

    Krisnnamagaru, Ramesh; Cesario, Cheryl; Foster, M. S.; Das, Vishnumohan

    1994-01-01

    The Surface Currents Database Management System (SCDBMS) provides access to the Surface Currents Data Base (SCDB), which is maintained by the Naval Oceanographic Office (NAVOCEANO). The SCDBMS incorporates database technology in providing seamless access to surface current data. The SCDBMS is an interactive software application with a graphical user interface (GUI) that supports user control of SCDBMS functional capabilities. The purpose of this document is to define and describe the structural framework and logical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as the SCDBMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by the Naval Oceanographic Office (NAVOCEANO) and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).

  6. Portable Computer Technology (PCT) Research and Development Program Phase 2

    NASA Technical Reports Server (NTRS)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

    This project report focused on: (1) Design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces. (2) Use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) Qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives. The focus was on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  7. EON: software for long time simulations of atomic scale systems

    NASA Astrophysics Data System (ADS)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
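
    The kinetic Monte Carlo methods listed above (e.g., adaptive KMC) all rest on the same elementary step: choose one escape event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. The sketch below shows that textbook step with made-up rates; it is a generic illustration, not code from EON.

        # One textbook kinetic Monte Carlo step (generic illustration, not EON code).
        import math
        import random

        def kmc_step(rates):
            """Pick an event with probability proportional to its rate and
            return (chosen index, time increment)."""
            total = sum(rates)
            r = random.uniform(0.0, total)
            cumulative = 0.0
            chosen = len(rates) - 1            # fallback for floating-point edge cases
            for i, rate in enumerate(rates):
                cumulative += rate
                if r <= cumulative:
                    chosen = i
                    break
            dt = -math.log(1.0 - random.random()) / total  # exponential waiting time
            return chosen, dt

        print(kmc_step([1.0e3, 5.0e2, 2.5e1]))   # hypothetical escape rates in 1/s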

  8. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID while the adaptation was performed using the Computational Fluid Dynamic (CFD) code FUN3D with a feature based adaptation software tool. GridTool was developed by ViGYAN, Inc. while the last three software suites were developed by NASA Langley Research Center. The feature based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  9. Pynamic: the Python Dynamic Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, G L; Ahn, D H; de Supinksi, B R

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
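
    The load pattern Pynamic emulates can be mimicked, at a much smaller scale and in pure Python, by generating and importing a large number of tiny modules. Pynamic itself builds genuine shared libraries to stress the dynamic linker; the sketch below only reproduces the import pressure and is not part of the benchmark.

        # Pure-Python analogue of heavy dynamic-module loading (not Pynamic itself).
        import importlib
        import os
        import sys
        import tempfile

        workdir = tempfile.mkdtemp()
        sys.path.insert(0, workdir)

        n_modules = 200
        for i in range(n_modules):
            with open(os.path.join(workdir, "stub_%04d.py" % i), "w") as f:
                f.write("def touch():\n    return %d\n" % i)

        loaded = [importlib.import_module("stub_%04d" % i) for i in range(n_modules)]
        print("imported", len(loaded), "modules; spot check:", loaded[42].touch())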

  10. Computer-aided modelling and analysis of PV systems: a comparative study.

    PubMed

    Koukouvaos, Charalambos; Kandris, Dionisis; Samarakou, Maria

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with these kinds of problems, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of this kind of software tool may be extremely helpful to the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application to photovoltaic systems.

  11. VIEW-Station software and its graphical user interface

    NASA Astrophysics Data System (ADS)

    Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki

    1992-04-01

    VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be simply expressed, using a functional notation. VIEW-Station provides a hardware-independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters, called (mu)V-Sugar, VS-Shell, and VPL, are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUIs for new applications on the VIEW-Station system. A set of widgets is available for the construction of task-oriented GUIs. A GUI builder called VIEW-Kid is developed for WYSIWYG interactive interface design.

  12. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    PubMed Central

    Koukouvaos, Charalambos

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with these kinds of problems, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of this kind of software tool may be extremely helpful to the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application to photovoltaic systems. PMID:24772007

  13. A COMPUTER MODELING STUDY OF BINDING PROPERTIES OF CHIRAL NUCLEOPEPTIDE FOR BIOMEDICAL APPLICATIONS.

    PubMed

    Pirtskhalava, M; Egoyan, A; Mirtskhulava, M; Roviello, G

    2017-12-01

    Nucleopeptides often show interesting molecular binding properties that render them good candidates for the development of innovative drugs for anticancer and antiviral therapies. In this work we present the results of computer modeling of interactions between molecules of a hexathymine nucleopeptide (T6) and poly rA RNA (A18). The results of geometry optimization, calculated using Hyperchem software and our own computer program for molecular docking, show that the molecules form stable complexes due to the complementary-nucleobase interaction and the electrostatic interaction between the negative phosphate group of poly rA and the positively-charged residues present in the cationic nucleopeptide structure. Computer modeling makes it possible to find the optimal binding configuration of the molecules of a nucleopeptide and poly rA RNA and to estimate the binding energy between the molecules.

  14. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.

    PubMed

    Jung, Sang-Kyu; McDonald, Karen

    2011-08-16

    Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.

  15. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    PubMed Central

    2011-01-01

    Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353

  16. Precision Attitude Determination System (PADS) design and analysis. Two-axis gimbal star tracker

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Development of the Precision Attitude Determination System (PADS) focused chiefly on the two-axis gimballed star tracker and electronics design, improved from that of the Precision Pointing Control System (PPCS), and on the application of the improved tracker for PADS at geosynchronous altitude. System design, system analysis, software design, and hardware design activities are reported. The system design encompasses the PADS configuration, system performance characteristics, component design summaries, and interface considerations. The PADS design and performance analysis includes error analysis, performance analysis via attitude determination simulation, and star tracker servo design analysis. The design of the star tracker and its electronics is discussed, and sensor electronics schematics are included. A detailed characterization of the application software algorithms and computer requirements is provided.

  17. Numerical aerodynamic simulation facility feasibility study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Three major issues were examined in the feasibility study. First, the ability of the proposed system architecture to support the anticipated workload was evaluated. Second, the throughput of the computational engine (the flow model processor) was studied using real application programs. Third, the availability, reliability, and maintainability of the system were modeled. The evaluations were based on the baseline systems. The results show that the implementation of the Numerical Aerodynamic Simulation Facility, in the form considered, would indeed be a feasible project with an acceptable level of risk. The technology required (both hardware and software) either already exists or, in the case of a few parts, is expected to be announced this year. Facets of the work described include the hardware configuration, software, user language, and fault tolerance.

  18. Smartphone Microscopy of Parasite Eggs Accumulated into a Single Field of View

    PubMed Central

    Sowerby, Stephen J.; Crump, John A.; Johnstone, Maree C.; Krause, Kurt L.; Hill, Philip C.

    2016-01-01

    A Nokia Lumia 1020 cellular phone (Microsoft Corp., Auckland, New Zealand) was configured to image the ova of Ascaris lumbricoides converged into a single field of view but lying on different focal planes. The phone was programmed to acquire images at different focal distances and, using public domain computer software, composite images were created that brought all the eggs into sharp focus. This proof of concept informs a framework for field-deployable, point-of-care monitoring of soil-transmitted helminths. PMID:26572870
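    The all-in-focus composite described above is a standard focus-stacking operation. The sketch below illustrates the idea with NumPy and OpenCV; the record only says "public domain computer software", so the library choice, function names, and file paths here are assumptions.

        # Minimal focus-stacking sketch: for a stack of frames taken at different
        # focal distances, keep at each pixel the value from the sharpest frame,
        # where sharpness is a smoothed absolute-Laplacian response.
        import cv2
        import numpy as np

        def focus_stack(images):
            """images: list of same-sized BGR frames -> all-in-focus composite."""
            sharpness = []
            for img in images:
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
                sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))
            sharpness = np.stack(sharpness)            # (n, h, w)
            best = np.argmax(sharpness, axis=0)        # sharpest frame index per pixel
            stack = np.stack(images)                   # (n, h, w, 3)
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]             # (h, w, 3) composite

        # Usage (file names are hypothetical):
        # frames = [cv2.imread(f"plane_{i}.png") for i in range(5)]
        # cv2.imwrite("composite.png", focus_stack(frames))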

  19. A 3D TCAD simulation of a thermoelectric module configured for thermoelectric power generation, cooling and heating

    NASA Astrophysics Data System (ADS)

    Gould, C. A.; Shammas, N. Y. A.; Grainger, S.; Taylor, I.; Simpson, K.

    2012-06-01

    This paper documents the 3D modeling and simulation of a three couple thermoelectric module using the Synopsys Technology Computer Aided Design (TCAD) semiconductor simulation software. Simulation results are presented for thermoelectric power generation, cooling and heating, and successfully demonstrate the basic thermoelectric principles. The 3D TCAD simulation model of a three couple thermoelectric module can be used in the future to evaluate different thermoelectric materials, device structures, and improve the efficiency and performance of thermoelectric modules.

  20. A Plan for Collecting Ada Software Development Cost, Schedule, and Environment Data.

    DTIC Science & Technology

    1987-04-02

    separate comment section for events that affected individual CSCIs. ... SYSTEM LEVEL OR CSCI LEVEL DOCUMENTATION FORM INSTRUCTIONS This... COMPUTER SOFTWARE CONFIGURATION ITEM SUMMARY DATA FORM INSTRUCTIONS 4.1.4. Development Methods Experience Enter the average number of...users see what is happening and view results). It is often a video display terminal of some sort, although it could be a hard copy printer and keyboard or

  1. Java Tool Framework for Automation of Hardware Commissioning and Maintenance Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, J C; Fisher, J M; Gordon, J B

    2007-10-02

    The National Ignition Facility (NIF) is a 192-beam laser system designed to study high energy density physics. Each beam line contains a variety of line replaceable units (LRUs) that contain optics, stepping motors, sensors and other devices to control and diagnose the laser. During commissioning and subsequent maintenance of the laser, LRUs undergo a qualification process using the Integrated Computer Control System (ICCS) to verify and calibrate the equipment. The commissioning processes are both repetitive and tedious when we use remote manual computer controls, making them ideal candidates for software automation. Maintenance and Commissioning Tool (MCT) software was developed to improve the efficiency of the qualification process. The tools are implemented in Java, leveraging ICCS services and CORBA to communicate with the control devices. The framework provides easy-to-use mechanisms for handling configuration data, task execution, task progress reporting, and generation of commissioning test reports. The tool framework design and application examples will be discussed.

  2. Lunar Applications in Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin

    2008-01-01

    NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed both to configure the resources and to transfer data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.

  3. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

    In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual standard and the H.264/AVC standard. The core performs the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, the number of COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps is adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, and the experimental results show that a balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
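    As a concrete illustration of an inverse integer transform that needs only shifts and additions, the sketch below implements the widely documented H.264/AVC 4x4 inverse core transform in its butterfly form. It is a textbook version for illustration, not the IP core described in the paper.

        # H.264/AVC 4x4 inverse core transform in butterfly form: only additions,
        # subtractions and right-shifts are required, which is what makes the
        # integer transform attractive for low-cost hardware.
        def inverse_transform_4x4(block):
            """block: 4x4 list of ints (dequantised coefficients) -> 4x4 residuals."""
            def butterfly(d0, d1, d2, d3):
                e0 = d0 + d2
                e1 = d0 - d2
                e2 = (d1 >> 1) - d3
                e3 = d1 + (d3 >> 1)
                return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

            rows = [butterfly(*row) for row in block]                              # horizontal pass
            cols = [butterfly(*[rows[r][c] for r in range(4)]) for c in range(4)]  # vertical pass
            return [[(cols[c][r] + 32) >> 6 for c in range(4)] for r in range(4)]  # rounding shift

        # Example: a DC-only block with coefficient 64 reconstructs to all ones.
        print(inverse_transform_4x4([[64, 0, 0, 0]] + [[0, 0, 0, 0]] * 3))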

  4. TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis

    PubMed Central

    Frazier, Zachary; Xu, Min; Alber, Frank

    2017-01-01

    SUMMARY Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in a close-to-native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample, either due to variations in the structure and composition of a complex in situ or because the particles are a mixture of different complexes. In such cases the subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open-source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service, permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576

  5. Automated software configuration in the MONSOON system

    NASA Astrophysics Data System (ADS)

    Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.

    2004-09-01

    MONSOON is the next-generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single- or multi-detector laboratory development systems. For this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair, which makes up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.

  6. Prediction of Protein Configurational Entropy (Popcoen).

    PubMed

    Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel

    2018-03-13

    A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast compared to previous approaches because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect prediction. Integrating the predicted entropy can yield a significant accuracy increase, as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen, for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.

  7. Software Framework for Controlling Unsupervised Scientific Instruments.

    PubMed

    Schmid, Benjamin; Jahr, Wiebke; Weber, Michael; Huisken, Jan

    2016-01-01

    Science outreach and communication are gaining more and more importance for conveying the meaning of today's research to the general public. Public exhibitions of scientific instruments can provide hands-on experience with technical advances and their applications in the life sciences. The software of such devices, however, is oftentimes not appropriate for this purpose. In this study, we describe a software framework and the necessary computer configuration that is well suited for exposing a complex self-built and software-controlled instrument such as a microscope to laymen under limited supervision, e.g. in museums or schools. We identify several aspects that must be met by such software, and we describe a design that can simultaneously be used to control either (i) a fully functional instrument in a robust and fail-safe manner, (ii) an instrument that has low-cost or only partially working hardware attached for illustration purposes or (iii) a completely virtual instrument without hardware attached. We describe how to assess the educational success of such a device, how to monitor its operation and how to facilitate its maintenance. The introduced concepts are illustrated using our software to control eduSPIM, a fluorescent light sheet microscope that we are currently exhibiting in a technical museum.
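    The requirement that the same control software drive a fully functional instrument, partially working hardware, or a purely virtual instrument maps naturally onto a hardware-abstraction pattern. The sketch below illustrates that idea generically; the class and method names are invented and are not taken from the eduSPIM code.

        # Generic hardware-abstraction sketch: the exhibit control logic talks to a
        # Stage interface and does not care whether real motors, partial hardware,
        # or a pure simulation sit behind it.
        from abc import ABC, abstractmethod

        class Stage(ABC):
            @abstractmethod
            def move_to(self, position_um: float) -> None: ...

            @abstractmethod
            def position(self) -> float: ...

        class RealStage(Stage):
            """Would wrap the vendor motor API; only the bookkeeping is shown."""
            def __init__(self, port: str):
                self.port = port
                self._pos = 0.0
            def move_to(self, position_um: float) -> None:
                self._pos = position_um        # vendor call would go here
            def position(self) -> float:
                return self._pos

        class VirtualStage(Stage):
            """Pure simulation for demo/museum mode: no hardware attached."""
            def __init__(self):
                self._pos = 0.0
            def move_to(self, position_um: float) -> None:
                self._pos = position_um
            def position(self) -> float:
                return self._pos

        def acquire_stack(stage: Stage, planes):
            """The acquisition logic is identical for real and virtual instruments."""
            for z in planes:
                stage.move_to(z)
                print(f"imaging at {stage.position():.1f} um")

        acquire_stack(VirtualStage(), [0.0, 5.0, 10.0])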

  8. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting

    PubMed Central

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a range of research fields such as visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS, GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, so intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks such as en masse random number generation or real-time image processing by local and global operations. PMID:29326579

  9. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank Mueller

    2009-02-05

    MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high-availability without single points of failure and without single points of control.

  10. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, an exit, connecting elbows and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computational meshes, and the solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  11. A highly versatile and easily configurable system for plant electrophysiology.

    PubMed

    Gunsé, Benet; Poschenrieder, Charlotte; Rankl, Simone; Schröeder, Peter; Rodrigo-Moreno, Ana; Barceló, Juan

    2016-01-01

    In this study we present a highly versatile and easily configurable system for measuring plant electrophysiological parameters and ionic flow rates, connected to a computer-controlled, highly accurate positioning device. The modular software used allows easily customizable configurations for the measurement of electrophysiological parameters. Both the operational tests and the experiments already performed have been fully successful and rendered a low-noise and highly stable signal. Assembly, programming and configuration examples are discussed. The system is a powerful technique that not only gives precise measurements of plant electrophysiological status, but also allows easy development of ad hoc configurations that are not constrained to plant studies. • We developed a highly modular system for electrophysiology measurements that can be used either on organs or on cells and performs either steady or dynamic intra- and extracellular measurements, taking advantage of the ease of visual object-oriented programming. • The system offers high-precision data acquisition in electrically noisy environments, allowing it to run even in a laboratory close to electrical equipment that produces electrical noise. • The system improves on currently used systems for monitoring and controlling high-precision measurements and micromanipulation, providing an open and customizable environment for multiple experimental needs.

  12. Interactive Analysis of General Beam Configurations using Finite Element Methods and JavaScript

    NASA Astrophysics Data System (ADS)

    Hernandez, Christopher

    Advancements in computer technology have contributed to the widespread practice of modelling and solving engineering problems through the use of specialized software. The wide use of engineering software comes with the disadvantage of the cost of required software licenses. The creation of accurate, trusted, and freely available applications capable of conducting meaningful analysis of engineering problems is one way to mitigate the costs associated with everyday engineering computations. Writing applications in the JavaScript programming language allows them to run within any web browser, without the need to install specialized software, since all internet browsers are equipped with virtual machines (VMs) that execute JavaScript code. The objective of this work is the development of an application that performs the analysis of a completely general beam using the finite element method. The app is written in JavaScript and embedded in a web page so it can be downloaded and executed by any user with an internet connection. The application allows the user to analyze any uniform or non-uniform beam, with any combination of applied forces, moments, distributed loads, and boundary conditions. Outputs include lists of the beam deflections and slopes, as well as lateral and slope deformation graphs, bending stress distributions, and shear and moment diagrams. To validate the methodology of the GBeam finite element app, its results are verified against results obtained from two other established finite element solvers for fifteen separate test cases.
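    The core of such a beam solver is the Euler-Bernoulli beam element stiffness matrix. The sketch below (in Python/NumPy rather than the app's JavaScript) assembles a single-element cantilever and checks the tip deflection against the closed-form P*L^3/(3*E*I); it is an illustration of the method, not the GBeam source.

        # Single Euler-Bernoulli beam element; DOFs are (v1, theta1, v2, theta2).
        import numpy as np

        def beam_element_stiffness(E, I, L):
            c = E * I / L**3
            return c * np.array([
                [ 12.0,    6*L,  -12.0,    6*L],
                [  6*L, 4*L**2,   -6*L, 2*L**2],
                [-12.0,   -6*L,   12.0,   -6*L],
                [  6*L, 2*L**2,   -6*L, 4*L**2],
            ])

        # One-element cantilever of length L with tip load P: clamp DOFs 0 and 1.
        E, I, L, P = 200e9, 8.0e-6, 2.0, 1000.0
        K = beam_element_stiffness(E, I, L)
        free = [2, 3]                                   # tip deflection and rotation
        u = np.linalg.solve(K[np.ix_(free, free)], np.array([-P, 0.0]))

        print(f"FE tip deflection : {u[0]: .6e} m")
        print(f"-P*L^3/(3*E*I)    : {-P * L**3 / (3 * E * I): .6e} m")   # should match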

  13. Software Configuration Management Plan for the B-Plant Canyon Ventilation Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCDANIEL, K.S.

    1999-08-31

    Project W-059 installed a new B Plant Canyon Ventilation System. Monitoring and control of the system is implemented by the Canyon Ventilation Control System (CVCS). This Software Configuration Management Plan provides instructions for change control of the CVCS.

  14. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services, and the topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB, and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations, and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to follow newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, ObjectStore service integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified storage protocol declarations required for PanDA Pilot site movers. Improvements to the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  15. Some Observations on the Current Status of Performing Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.

    2015-01-01

    Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analyses rapidly. Many of today's early-career engineers are very proficient in the use of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends demonstrate blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of commonly encountered problems are presented. To counter the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.

  16. Development of wide band digital receiver for atmospheric radars using COTS board based SDR

    NASA Astrophysics Data System (ADS)

    Yasodha, Polisetti; Jayaraman, Achuthan; Thriveni, A.

    2016-07-01

    A digital receiver extracts the received echo signal information and is a key subsystem of an atmospheric radar, also referred to as a wind profiling radar (WPR), which provides vertical profiles of the 3-dimensional wind vector in the atmosphere. This paper presents the development of a digital receiver for atmospheric radars using a COTS-board-based software defined radio (SDR) technique. The development work is being carried out at the National Atmospheric Research Laboratory (NARL), Gadanki. The digital receiver consists of a commercially available SDR board, the Universal Software Radio Peripheral B210 (USRP B210), and a personal computer. The USRP B210 operates over a wide frequency range from 70 MHz to 6 GHz and hence can be used for a variety of radars, such as Doppler weather radars operating in the S and C bands, in addition to wind profiling radars operating in the VHF, UHF and L bands. Because of the flexibility and re-configurability of SDR, where the component functionalities are implemented in software, it is easy to modify the software to receive and process the echoes as required for the type of radar intended. Hence, the USRP B210 board together with the computer forms a versatile digital receiver from 70 MHz to 6 GHz. It has an inbuilt direct-conversion transceiver with two transmit and two receive channels, which can be operated in a fully coherent 2x2 MIMO fashion, so it can be used as a two-channel receiver. Multiple USRP B210 boards can be synchronized using the pulse-per-second (PPS) input provided on the board to configure a multi-channel digital receiver system. The RF gain of the transceiver can be varied from 0 to 70 dB. The board is controlled from the computer via a USB 3.0 interface through the USRP hardware driver (UHD), an open-source cross-platform driver. A reference (10 MHz) clock signal from the radar master oscillator is used to lock the board, which is essential for deriving Doppler information. The input from the radar analog receiver is given to one channel of the USRP B210, where it is down-converted to baseband. A 12-bit ADC on the board digitizes the signal and produces I (in-phase) and Q (quadrature-phase) data; the maximum sampling rate is about 61 MSPS. The I and Q (time series) data are sent to the PC via USB 3.0, where the signal processing is carried out. The online processing steps include decimation, range gating, decoding, coherent integration and, optionally, FFT computation. The processed data are then stored on the hard disk. The C++ programming language is used to develop the real-time signal processing, with shared memory and multi-threading used to collect and process data simultaneously. Before implementing real-time operation, a stand-alone test of the board was carried out using GNU Radio software, and the baseband output data were found satisfactory. The board was then integrated with the existing Lower Atmospheric Wind Profiling radar at NARL: the radar receiver IF output at 70 MHz is given to the board and real-time radar data are collected. The data are processed off-line and the range-Doppler spectrum is obtained. Online processing software is in progress.
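    The online processing chain named above (range gating, coherent integration, FFT) can be sketched in a few lines of NumPy. This is a conceptual illustration of the signal path with synthetic data, not the NARL processing code; array shapes and parameter values are assumptions.

        # Conceptual wind-profiler Doppler processing: I/Q samples arranged as
        # (pulses, range gates) -> coherent integration over pulses -> FFT per gate.
        import numpy as np

        def doppler_spectra(iq, n_coh, n_fft):
            """iq: complex (n_pulses, n_gates); returns power spectra (n_fft, n_gates)."""
            n_pulses, n_gates = iq.shape
            n_blocks = n_pulses // n_coh
            # Coherent integration: average n_coh consecutive pulses per gate.
            coh = iq[:n_blocks * n_coh].reshape(n_blocks, n_coh, n_gates).mean(axis=1)
            # Doppler spectrum per range gate (windowing omitted for brevity).
            spec = np.fft.fftshift(np.fft.fft(coh[:n_fft], n=n_fft, axis=0), axes=0)
            return np.abs(spec) ** 2

        # Synthetic test: a 40 Hz echo in range gate 3, buried in noise.
        rng = np.random.default_rng(0)
        n_pulses, n_gates, prf = 4096, 8, 1000.0
        t = np.arange(n_pulses) / prf
        iq = (rng.standard_normal((n_pulses, n_gates))
              + 1j * rng.standard_normal((n_pulses, n_gates)))
        iq[:, 3] += 5 * np.exp(2j * np.pi * 40.0 * t)
        print(doppler_spectra(iq, n_coh=4, n_fft=256).shape)   # (256, 8)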

  17. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    PubMed Central

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. The testbed is a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. The platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363
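    A core building block of such STAP/SFAP testbeds is the adaptive weight computation. Under the usual narrowband assumptions, a minimal MVDR-style nulling step might look like the sketch below; it is a spatial-only illustration in NumPy, not the testbed's GPU implementation, and the array geometry and signal parameters are assumed.

        # Minimal spatial-only adaptive nulling sketch: MVDR-style weights
        # w = R^{-1} s / (s^H R^{-1} s), with R estimated from array snapshots
        # and s the steering vector of the direction to protect.
        import numpy as np

        def mvdr_weights(snapshots, steering):
            """snapshots: (n_elements, n_snapshots) complex; steering: (n_elements,)."""
            n = snapshots.shape[1]
            R = snapshots @ snapshots.conj().T / n                      # sample covariance
            R += 1e-3 * np.trace(R).real / len(R) * np.eye(len(R))      # diagonal loading
            rinv_s = np.linalg.solve(R, steering)
            return rinv_s / (steering.conj() @ rinv_s)

        # 8-element half-wavelength ULA, strong jammer at 30 degrees off boresight.
        m, snaps = 8, 2000
        rng = np.random.default_rng(1)
        steer = lambda deg: np.exp(1j * np.pi * np.sin(np.deg2rad(deg)) * np.arange(m))
        jammer = steer(30.0)[:, None] * (100 * (rng.standard_normal((1, snaps))
                                                + 1j * rng.standard_normal((1, snaps))))
        noise = rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps))
        w = mvdr_weights(jammer + noise, steer(0.0))                    # protect boresight
        print(f"array gain toward jammer: {20*np.log10(abs(w.conj() @ steer(30.0))):.1f} dB")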

  18. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-03-11

    Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. The testbed is a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. The platform can be used for research and prototyping, as well as a real product in certain applications.

  19. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise(TM) grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  20. 48 CFR 227.7203-15 - Subcontractor rights in computer software or computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computer software or computer software documentation. 227.7203-15 Section 227.7203-15 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-15 Subcontractor rights in computer software or computer software documentation. (a...

  1. 48 CFR 227.7203-15 - Subcontractor rights in computer software or computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computer software or computer software documentation. 227.7203-15 Section 227.7203-15 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-15 Subcontractor rights in computer software or computer software documentation. (a...

  2. 48 CFR 227.7203-15 - Subcontractor rights in computer software or computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computer software or computer software documentation. 227.7203-15 Section 227.7203-15 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-15 Subcontractor rights in computer software or computer software documentation. (a...

  3. 48 CFR 227.7203-15 - Subcontractor rights in computer software or computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computer software or computer software documentation. 227.7203-15 Section 227.7203-15 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-15 Subcontractor rights in computer software or computer software documentation. (a...

  4. 48 CFR 227.7203-15 - Subcontractor rights in computer software or computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computer software or computer software documentation. 227.7203-15 Section 227.7203-15 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-15 Subcontractor rights in computer software or computer software documentation. (a...

  5. 48 CFR 227.7202 - Commercial computer software and commercial computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... software and commercial computer software documentation. 227.7202 Section 227.7202 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202 Commercial computer software and commercial computer software documentation. ...

  6. 48 CFR 227.7203 - Noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... software and noncommercial computer software documentation. 227.7203 Section 227.7203 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203 Noncommercial computer software and noncommercial computer software documentation. ...

  7. 48 CFR 227.7203 - Noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... software and noncommercial computer software documentation. 227.7203 Section 227.7203 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203 Noncommercial computer software and noncommercial computer software documentation. ...

  8. 48 CFR 227.7203 - Noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... software and noncommercial computer software documentation. 227.7203 Section 227.7203 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203 Noncommercial computer software and noncommercial computer software documentation. ...

  9. 48 CFR 227.7202 - Commercial computer software and commercial computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... software and commercial computer software documentation. 227.7202 Section 227.7202 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202 Commercial computer software and commercial computer software documentation. ...

  10. 48 CFR 227.7203 - Noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... software and noncommercial computer software documentation. 227.7203 Section 227.7203 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203 Noncommercial computer software and noncommercial computer software documentation. ...

  11. 48 CFR 227.7202 - Commercial computer software and commercial computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... software and commercial computer software documentation. 227.7202 Section 227.7202 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202 Commercial computer software and commercial computer software documentation. ...

  12. 48 CFR 227.7202 - Commercial computer software and commercial computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software and commercial computer software documentation. 227.7202 Section 227.7202 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202 Commercial computer software and commercial computer software documentation. ...

  13. 48 CFR 227.7202 - Commercial computer software and commercial computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... software and commercial computer software documentation. 227.7202 Section 227.7202 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202 Commercial computer software and commercial computer software documentation. ...

  14. 48 CFR 227.7203 - Noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software and noncommercial computer software documentation. 227.7203 Section 227.7203 Federal Acquisition... REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203 Noncommercial computer software and noncommercial computer software documentation. ...

  15. Enabling Co-Design of Multi-Layer Exascale Storage Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher

    Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.

  16. Numerical study of canister filters with alternatives filter cap configurations

    NASA Astrophysics Data System (ADS)

    Mohammed, A. N.; Daud, A. R.; Abdullah, K.; Seri, S. M.; Razali, M. A.; Hushim, M. F.; Khalid, A.

    2017-09-01

    An air filtration system and its filter play an important role in delivering good-quality air to turbomachinery such as gas turbines. The filtration system and filter improve the quality of the air and protect gas turbine parts from contaminants that could cause damage. During separation of contaminants from the air, a pressure drop cannot be avoided, but it can be minimized, which helps to reduce the intake losses of the engine [1]. This study is focused on the configuration of the filter in order to obtain the minimum pressure drop along the filter. The configurations used are the basic filter geometry provided by Salutary Avenue Manufacturing Sdn. Bhd. and two modified canister filter caps designed from the basic filter model. The geometries of the filter are generated using SOLIDWORKS software, and Computational Fluid Dynamics (CFD) software is used to analyse and simulate the flow through the filter. In this study, the inlet velocities are 0.032 m/s, 0.063 m/s, 0.094 m/s and 0.126 m/s. The total pressure drops produced by the basic filter, modified filter 1 and modified filter 2 are 292.3 Pa, 251.11 Pa and 274.7 Pa, respectively. The pressure drop reduction for modified filter 1 is 41.19 Pa, 14.1% lower than the basic filter, and the reduction for modified filter 2 is 17.6 Pa, 6.02% lower than the basic filter. The pressure drops for the basic filter differ slightly from those of the Salutary Avenue filter due to limited data and experimental details. CFD software is a reliable alternative to producing prototypes and conducting experiments, thus reducing the overall time and cost of this study.

  17. 48 CFR 227.7202-3 - Rights in commercial computer software or commercial computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computer software or commercial computer software documentation. 227.7202-3 Section 227.7202-3 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202-3 Rights in commercial computer software or commercial computer software documentation...

  18. 48 CFR 227.7202-3 - Rights in commercial computer software or commercial computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computer software or commercial computer software documentation. 227.7202-3 Section 227.7202-3 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202-3 Rights in commercial computer software or commercial computer software documentation...

  19. 48 CFR 227.7203-2 - Acquisition of noncommercial computer software and computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... noncommercial computer software and computer software documentation. 227.7203-2 Section 227.7203-2 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-2 Acquisition of noncommercial computer software and computer software documentation. (a...

  20. 48 CFR 227.7203-2 - Acquisition of noncommercial computer software and computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... noncommercial computer software and computer software documentation. 227.7203-2 Section 227.7203-2 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-2 Acquisition of noncommercial computer software and computer software documentation. (a...

  1. 48 CFR 227.7203-2 - Acquisition of noncommercial computer software and computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... noncommercial computer software and computer software documentation. 227.7203-2 Section 227.7203-2 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-2 Acquisition of noncommercial computer software and computer software documentation. (a...

  2. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  3. 48 CFR 227.7203-2 - Acquisition of noncommercial computer software and computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... noncommercial computer software and computer software documentation. 227.7203-2 Section 227.7203-2 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-2 Acquisition of noncommercial computer software and computer software documentation. (a...

  4. 48 CFR 227.7202-3 - Rights in commercial computer software or commercial computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computer software or commercial computer software documentation. 227.7202-3 Section 227.7202-3 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202-3 Rights in commercial computer software or commercial computer software documentation...

  5. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  6. 48 CFR 227.7202-3 - Rights in commercial computer software or commercial computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computer software or commercial computer software documentation. 227.7202-3 Section 227.7202-3 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202-3 Rights in commercial computer software or commercial computer software documentation...

  7. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  8. 48 CFR 227.7202-3 - Rights in commercial computer software or commercial computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computer software or commercial computer software documentation. 227.7202-3 Section 227.7202-3 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7202-3 Rights in commercial computer software or commercial computer software documentation...

  9. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  10. 48 CFR 227.7203-10 - Contractor identification and marking of computer software or computer software documentation to...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and marking of computer software or computer software documentation to be furnished with restrictive... Rights in Computer Software and Computer Software Documentation 227.7203-10 Contractor identification and marking of computer software or computer software documentation to be furnished with restrictive markings...

  11. 48 CFR 227.7203-2 - Acquisition of noncommercial computer software and computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... noncommercial computer software and computer software documentation. 227.7203-2 Section 227.7203-2 Federal... CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-2 Acquisition of noncommercial computer software and computer software documentation. (a...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jared M; Ferber, Aaron E; Greenlee, Elliot D

    Akatosh is a highly configurable system based on the integration of the capabilities of one or more Intrusion Detection Systems (IDS) and automated forensic analysis. Akatosh reduces the false positive rates of IDSs and alleviates costs of incident response by pointing forensic personnel to the root cause of an incident on affected endpoint devices. Akatosh is able to analyze a computer system in near real-time and provide operations and forensic analyst personnel with continuous feedback on the impact of malware and software on deployed systems. Additionally, Akatosh provides the ability to look back into any prior state in the history of the computer system along with the ability to compare one or more prior system states with any other prior state.
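    Comparing prior system states, as described above, reduces at its simplest to diffing filesystem snapshots. The sketch below shows that idea with content hashes; it is purely illustrative and is not Akatosh's implementation (the paths in the usage comment are hypothetical).

        # Toy endpoint-state diff: snapshot a directory tree as {path: sha256} and
        # report files added, removed, or modified between two snapshots.
        import hashlib
        from pathlib import Path

        def snapshot(root: str) -> dict:
            digests = {}
            for p in Path(root).rglob("*"):
                if p.is_file():
                    digests[str(p.relative_to(root))] = hashlib.sha256(p.read_bytes()).hexdigest()
            return digests

        def diff(before: dict, after: dict) -> dict:
            return {
                "added":    sorted(set(after) - set(before)),
                "removed":  sorted(set(before) - set(after)),
                "modified": sorted(p for p in set(before) & set(after) if before[p] != after[p]),
            }

        # Usage (paths are hypothetical):
        # print(diff(snapshot("/var/snapshots/T0"), snapshot("/var/snapshots/T1")))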

  13. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device Connection Machine processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  14. Software platform for rapid prototyping of NIRS brain computer interfacing techniques.

    PubMed

    Matthews, Fiachra; Soraghan, Christopher; Ward, Tomas E; Markham, Charles; Pearlmutter, Barak A

    2008-01-01

    This paper describes the control system of a next-generation optical brain-computer interface (BCI). Using functional near-infrared spectroscopy (fNIRS) as a BCI modality is a relatively new concept, and research has only begun to explore approaches for its implementation. It is necessary to have a system by which it is possible to investigate the signal processing and classification techniques available in the BCI community. Most importantly, these techniques must be easily testable in real-time applications. The system we describe was built using LABVIEW, a graphical programming language designed for interaction with National Instruments hardware. This platform allows complete configurability from hardware control and regulation, testing and filtering in a graphical interface environment.

  15. A polymorphic reconfigurable emulator for parallel simulation

    NASA Technical Reports Server (NTRS)

    Parrish, E. A., Jr.; Mcvey, E. S.; Cook, G.

    1980-01-01

    Microprocessor and arithmetic support chip technology was applied to the design of a reconfigurable emulator for real-time flight simulation. The system developed consists of a master control system, which performs all man-machine interactions and configures the hardware to emulate a given aircraft, and numerous slave compute modules (SCMs), which comprise the parallel computational units. It is shown that all parts of the state equations can be worked on simultaneously but that the algebraic equations cannot (unless they are slowly varying). Attempts to obtain algorithms that will allow parallel updates are reported. The word length and step size to be used in the SCMs are determined, and the architecture of the hardware and software is described.

  16. Development of a Computer-Controlled Polishing Process for X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian

    2009-01-01

    Future X-ray observatory missions require grazing-incidence X-ray optics with an angular resolution of < 5 arcsec half-power diameter. The achievable resolution depends ultimately on the quality of the polished mandrels from which the shells are replicated. To fabricate better shells and to reduce the cost and time of mandrel production, a computer-controlled polishing machine has been developed for deterministic, localized polishing of mandrels. Cylindrical polishing software has also been developed that predicts the residual surface errors for a given set of operating parameters and lap configuration. Design considerations for the polishing lap are discussed, and the effects of nonconformance between the lap and the mandrel are presented.

  17. 78 FR 23685 - Airworthiness Directives; The Boeing Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ... installing new operational software for the electrical load management system and configuration database. The..., installing a new electrical power control panel, and installing new operational software for the electrical load management system and configuration database. Since the proposed AD was issued, we have received...

  18. 48 CFR 227.7203-16 - Providing computer software or computer software documentation to foreign governments, foreign...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... software or computer software documentation to foreign governments, foreign contractors, or international... Rights in Computer Software and Computer Software Documentation 227.7203-16 Providing computer software or computer software documentation to foreign governments, foreign contractors, or international...

  19. 48 CFR 227.7203-16 - Providing computer software or computer software documentation to foreign governments, foreign...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... software or computer software documentation to foreign governments, foreign contractors, or international... Rights in Computer Software and Computer Software Documentation 227.7203-16 Providing computer software or computer software documentation to foreign governments, foreign contractors, or international...

  20. 48 CFR 227.7203-16 - Providing computer software or computer software documentation to foreign governments, foreign...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... software or computer software documentation to foreign governments, foreign contractors, or international... Rights in Computer Software and Computer Software Documentation 227.7203-16 Providing computer software or computer software documentation to foreign governments, foreign contractors, or international...

  1. 48 CFR 227.7203-16 - Providing computer software or computer software documentation to foreign governments, foreign...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... software or computer software documentation to foreign governments, foreign contractors, or international... Rights in Computer Software and Computer Software Documentation 227.7203-16 Providing computer software or computer software documentation to foreign governments, foreign contractors, or international...

  2. 48 CFR 227.7203-16 - Providing computer software or computer software documentation to foreign governments, foreign...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software or computer software documentation to foreign governments, foreign contractors, or international... Rights in Computer Software and Computer Software Documentation 227.7203-16 Providing computer software or computer software documentation to foreign governments, foreign contractors, or international...

  3. Mission Control Center (MCC) System Specification for the Shuttle Orbital Flight Test (OFT) Timeframe

    NASA Technical Reports Server (NTRS)

    1976-01-01

    System specifications to be used by the Mission Control Center (MCC) for the shuttle orbital flight test (OFT) time frame are described. The three support systems discussed are the communication interface system (CIS), the data computation complex (DCC), and the display and control system (DCS), all of which may interface with, and share processing facilities with, other application processing supporting current MCC programs. The MCC shall provide centralized control of the space shuttle OFT from launch through orbital flight, entry, and landing until the Orbiter comes to a stop on the runway. This control shall include the functions of vehicle management in the areas of hardware configuration (verification), flight planning, communication and instrumentation configuration management, trajectory, software and consumables, payloads management, flight safety, and verification of test conditions/environment.

  4. Investigation of dynamic noise affecting geodynamics information in a tethered subsatellite

    NASA Technical Reports Server (NTRS)

    Gullahorn, G. E.

    1984-01-01

    The effects of a tethered satellite system's internal dynamics on the subsatellite were calculated, including both overall motions (libration and attitude oscillations) and internal tether oscillations. The SKYHOOK tether simulation program was modified to operate with atmospheric density variations and to output the quantities of interest. Techniques and software for analyzing the results were developed, including noise spectral analysis. A program was begun for computing a stable configuration of a tether system subject to air drag. These configurations will be of use as initial conditions for SKYHOOK and, through linearized analysis, directly for stability and dynamical studies. A case study in which the subsatellite traverses an atmospheric density enhancement confirmed some theoretical calculations and pointed out some aspects of the interaction with the tether system dynamics.
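    The kind of noise spectral analysis mentioned above can be sketched with Welch's method on a synthetic tension signal (illustrative frequencies and amplitudes, not SKYHOOK output):

```python
import numpy as np
from scipy.signal import welch

fs = 10.0                                   # sample rate, Hz (assumed)
t = np.arange(0.0, 2000.0, 1.0 / fs)
rng = np.random.default_rng(3)
libration = 0.5 * np.sin(2 * np.pi * 0.005 * t)      # slow pendulum-like motion
tether_mode = 0.05 * np.sin(2 * np.pi * 0.08 * t)    # higher-frequency tether oscillation
tension = 10.0 + libration + tether_mode + 0.02 * rng.standard_normal(t.size)

freqs, psd = welch(tension, fs=fs, nperseg=4096)     # averaged periodogram
peak = freqs[np.argmax(psd[1:]) + 1]                 # skip the DC bin
print(f"strongest oscillation near {peak:.4f} Hz")
```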

  5. A General Water Resources Regulation Software System in China

    NASA Astrophysics Data System (ADS)

    LEI, X.

    2017-12-01

    To avoid redundant development of core modules for routine and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common requirements for water resources regulation and emergency management. It provides a customizable and extensible software framework, open to secondary development, for the three-level platform "MWR-Basin-Province". This general software system also enables business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve national water resources regulation decision-making. The general software system comprises four main modules: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-build water resources regulation decision-making systems; 2) a complete set of model bases and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, together with a model management system to calibrate and configure model parameters; 4) a database that satisfies the business and functional requirements of the general water resources regulation software, providing technical support for building basin or regional water resources regulation models.

  6. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  7. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  8. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  9. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  10. 48 CFR 227.7203-3 - Early identification of computer software or computer software documentation to be furnished to...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computer software or computer software documentation to be furnished to the Government with restrictions on..., DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-3 Early identification of computer software or computer software documentation to be furnished to the Government with...

  11. Configuration management and software measurement in the Ground Systems Development Environment (GSDE)

    NASA Technical Reports Server (NTRS)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    A set of functional requirements for software configuration management (CM) and metrics reporting for Space Station Freedom ground systems software is described. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the Space Station Training Facility (SSTF) and the Space Station Control Center (SSCC), and the target systems for the SSCC and SSTF. The focus is on the CM of the software following delivery to NASA and on the software metrics that relate to the quality and maintainability of the delivered software. The CM and metrics requirements address specific problems that occur in large-scale software development. Mechanisms to assist in the continuing improvement of mission operations software development are described.

  12. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    PubMed

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks, however, cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such systems, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided the spiking neural network can take full advantage of the inherent parallelism of the hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can put together sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in the visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of the visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
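    As a software counterpart to the hardware platform described above, a single feed-forward layer of leaky integrate-and-fire neurons can be simulated in a few lines (a generic textbook model, not the authors' FPGA design; all constants are illustrative):

```python
import numpy as np

def simulate_lif_layer(input_spikes, weights, dt=1e-3, tau=20e-3,
                       v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one feed-forward layer of leaky integrate-and-fire neurons.

    input_spikes: (T, n_in) binary array of presynaptic spikes per time step
    weights:      (n_in, n_out) synaptic weight matrix
    Returns a (T, n_out) binary array of output spikes.
    """
    n_steps = input_spikes.shape[0]
    n_out = weights.shape[1]
    v = np.full(n_out, v_rest)
    out = np.zeros((n_steps, n_out), dtype=np.uint8)
    for t in range(n_steps):
        # Leaky integration plus weighted synaptic input for this time step.
        v += dt / tau * (v_rest - v) + input_spikes[t] @ weights
        fired = v >= v_thresh
        out[t] = fired
        v[fired] = v_reset            # reset neurons that spiked
    return out

rng = np.random.default_rng(0)
spikes_in = (rng.random((1000, 16)) < 0.02).astype(np.uint8)   # sparse random input
w = rng.random((16, 4)) * 0.5
spikes_out = simulate_lif_layer(spikes_in, w)
print(spikes_out.sum(axis=0))         # output spike count per neuron
```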

  13. Propulsion Simulations with the Unstructured-Grid CFD Tool TetrUSS

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Pandya, Mohagna J.

    2002-01-01

    A computational investigation has been completed to assess the capability of the NASA Tetrahedral Unstructured Software System (TetrUSS) for simulation of exhaust nozzle flows. Three configurations were chosen for this study: (1) a fluidic jet effects model, (2) an isolated nacelle with a supersonic cruise nozzle, and (3) a fluidic pitch-thrust-vectoring nozzle. These configurations were chosen because existing data provided a means for measuring the ability of the TetrUSS flow solver USM3D for simulating complex nozzle flows. Fluidic jet effects model simulations were compared with structured-grid CFD (computational fluid dynamics) data at Mach numbers from 0.3 to 1.2 at nozzle pressure ratios up to 7.2. Simulations of an isolated nacelle with a supersonic cruise nozzle were compared with wind tunnel experimental data and structured-grid CFD data at Mach numbers of 0.9 and 1.2, with a nozzle pressure ratio of 5. Fluidic pitch-thrust-vectoring nozzle simulations were compared with static experimental data and structured-grid CFD data at static freestream conditions and nozzle pressure ratios from 3 to 10. A fluidic injection case was computed with the third configuration at a nozzle pressure ratio of 4.6 and a secondary pressure ratio of 0.7. Results indicate that USM3D with the S-A turbulence model provides accurate exhaust nozzle simulations at on-design conditions, but does not predict internal shock location at overexpanded conditions or pressure recovery along a boattail at transonic conditions.

  14. AZ-101 Mixer Pump Demonstration Data Acquisition System and Gamma Cart Data Acquisition Control System Software Configuration Management Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WHITE, D.A.

    1999-12-29

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the AZ-101 Mixer Pump Demonstration Data Acquisition System (DAS) and the Sludge Mobilization Cart (Gamma Cart) Data Acquisition and Control System (DACS).

  15. Assessing the Effects of Multi-Node Sensor Network Configurations on the Operational Tempo

    DTIC Science & Technology

    2014-09-01

    The LPISimNet software tool provides the capability to quantify the performance of sensor network configurations; its received-signal model accounts for the noise power of the receiver and the implementation loss of the receiver due to hardware manufacturing.

  16. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  17. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  18. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  19. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  20. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  1. Virtual Diagnostics Interface: Real Time Comparison of Experimental Data and CFD Predictions for a NASA Ares I-Like Vehicle

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2007-01-01

    Virtual Diagnostics Interface technology, or ViDI, is a suite of techniques utilizing image processing, data handling, and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software application component of ViDI used to display experimental wind tunnel data in real time within an interactive, three-dimensional virtual environment. The LiveView3D software application has been under development at NASA Langley Research Center (LaRC) for nearly three years. LiveView3D was recently upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of the NASA Ares I Crew Launch Vehicle (CLV)-like vehicle when tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in the December 2006 to January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.

  2. User's manual: Subsonic/supersonic advanced panel pilot code

    NASA Technical Reports Server (NTRS)

    Moran, J.; Tinoco, E. N.; Johnson, F. T.

    1978-01-01

    Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation, and it should not be construed to represent a finished production program. The pilot code is based on a higher-order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given, as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses an overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.

  3. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

    Multiple software programs are available for designing and running large-scale, system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage, and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
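    The compile-to-executable idea can be sketched as follows: every model parameter is exposed as a command-line argument of a self-contained program, so the platform can launch large batches without the original modeling tool. The wrapper and the toy one-compartment model below are hypothetical, not part of ViSP:

```python
import argparse
import json
import numpy as np

MODEL_PARAMETERS = {          # defaults baked in at "compile" time
    "dose_mg": 500.0,
    "ka": 1.2,                # absorption rate constant (1/h)
    "ke": 0.25,               # elimination rate constant (1/h)
    "t_end_h": 24.0,
}

def run_model(p):
    """Toy one-compartment PK model standing in for the compiled system model."""
    t = np.linspace(0.0, p["t_end_h"], 200)
    conc = p["dose_mg"] * p["ka"] / (p["ka"] - p["ke"]) * (
        np.exp(-p["ke"] * t) - np.exp(-p["ka"] * t))
    return {"time_h": t.tolist(), "concentration": conc.tolist()}

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Self-contained model executable")
    for name, default in MODEL_PARAMETERS.items():
        parser.add_argument(f"--{name}", type=float, default=default)
    args = parser.parse_args()
    results = run_model(vars(args))
    # Emit inputs and outputs as JSON so a batch platform can collect them.
    print(json.dumps({"inputs": vars(args), "outputs": results})[:200], "...")
```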

  4. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

    Multiple software programs are available for designing and running large-scale, system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage, and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542

  5. 48 CFR 227.7203-8 - Deferred delivery and deferred ordering of computer software and computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... deferred ordering of computer software and computer software documentation. 227.7203-8 Section 227.7203-8... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-8 Deferred delivery and deferred ordering of computer software and computer...

  6. 48 CFR 227.7203-8 - Deferred delivery and deferred ordering of computer software and computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... deferred ordering of computer software and computer software documentation. 227.7203-8 Section 227.7203-8... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-8 Deferred delivery and deferred ordering of computer software and computer...

  7. 48 CFR 227.7203-8 - Deferred delivery and deferred ordering of computer software and computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... deferred ordering of computer software and computer software documentation. 227.7203-8 Section 227.7203-8... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-8 Deferred delivery and deferred ordering of computer software and computer...

  8. 48 CFR 227.7203-8 - Deferred delivery and deferred ordering of computer software and computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... deferred ordering of computer software and computer software documentation. 227.7203-8 Section 227.7203-8... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-8 Deferred delivery and deferred ordering of computer software and computer...

  9. 48 CFR 227.7203-8 - Deferred delivery and deferred ordering of computer software and computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... deferred ordering of computer software and computer software documentation. 227.7203-8 Section 227.7203-8... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-8 Deferred delivery and deferred ordering of computer software and computer...

  10. A prototype to automate the video subsystem routing for the video distribution subsystem of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Betz, Jessie M. Bethly

    1993-12-01

    The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching, internal video switching, and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on the availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology for automating the Video Subsystem Routing, and developing and testing a prototype as a 'proof of concept' for designers.
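    Path selection based on resource availability can be illustrated with a small breadth-first search over a hypothetical video topology (an illustration of the concept, not the VSM/CSCI algorithm):

```python
from collections import deque

def find_video_route(topology, available, source, destination):
    """Breadth-first search for a connection path that uses only available nodes.

    topology:  dict mapping node -> list of directly connected nodes
    available: set of nodes (ports, switches) not already in use
    Returns the list of nodes forming the route, or None if no path exists.
    """
    if source not in available or destination not in available:
        return None
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbor in topology.get(node, []):
            if neighbor in available and neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothetical topology: two cameras feeding a pair of cross-strapped switches.
topology = {
    "CAM1": ["SW_A"], "CAM2": ["SW_B"],
    "SW_A": ["SW_B", "MON1"], "SW_B": ["SW_A", "MON2"],
    "MON1": [], "MON2": [],
}
available = {"CAM1", "SW_A", "SW_B", "MON2"}     # MON1 and CAM2 already in use
print(find_video_route(topology, available, "CAM1", "MON2"))
# -> ['CAM1', 'SW_A', 'SW_B', 'MON2']
```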

  11. 48 CFR 252.227-7014 - Rights in noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computer software and noncommercial computer software documentation. 252.227-7014 Section 252.227-7014... Rights in noncommercial computer software and noncommercial computer software documentation. As prescribed in 227.7203-6(a)(1), use the following clause. Rights in Noncommercial Computer Software and...

  12. 48 CFR 252.227-7014 - Rights in noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computer software and noncommercial computer software documentation. 252.227-7014 Section 252.227-7014... Rights in noncommercial computer software and noncommercial computer software documentation. As prescribed in 227.7203-6(a)(1), use the following clause. Rights in Noncommercial Computer Software and...

  13. 48 CFR 252.227-7014 - Rights in noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computer software and noncommercial computer software documentation. 252.227-7014 Section 252.227-7014... Rights in noncommercial computer software and noncommercial computer software documentation. As prescribed in 227.7203-6(a)(1), use the following clause. Rights in Noncommercial Computer Software and...

  14. 48 CFR 252.227-7014 - Rights in noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computer software and noncommercial computer software documentation. 252.227-7014 Section 252.227-7014... Rights in noncommercial computer software and noncommercial computer software documentation. As prescribed in 227.7203-6(a)(1), use the following clause. Rights in Noncommercial Computer Software and...

  15. 48 CFR 252.227-7014 - Rights in noncommercial computer software and noncommercial computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computer software and noncommercial computer software documentation. 252.227-7014 Section 252.227-7014... Rights in noncommercial computer software and noncommercial computer software documentation. As prescribed in 227.7203-6(a)(1), use the following clause. Rights in Noncommercial Computer Software and...

  16. Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)

    NASA Technical Reports Server (NTRS)

    Niewoehner, Kevin R.; Carter, John (Technical Monitor)

    2001-01-01

    The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.

  17. Gaia DR1 documentation Chapter 6: Variability

    NASA Astrophysics Data System (ADS)

    Eyer, L.; Rimoldini, L.; Guy, L.; Holl, B.; Clementini, G.; Cuypers, J.; Mowlavi, N.; Lecoeur-Taïbi, I.; De Ridder, J.; Charnas, J.; Nienartowicz, K.

    2017-12-01

    This chapter describes the photometric variability processing of the Gaia DR1 data. Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources, in particular the definition, design, development, validation, and provision of a software package for the data processing of photometrically variable objects. The responsibilities of the Data Processing Centre Geneva (DPCG) cover all issues related to the computational part of the CU7 analysis. These span hardware provisioning, including the selection, deployment, and optimisation of suitable hardware; choosing and developing the software architecture; defining data and scientific workflows; and operational activities such as configuration management, data import, time series reconstruction, storage and processing handling, visualisation, and data export. CU7/DPCG is also responsible for interaction with other DPCs and CUs, software and programming training for the CU7 members, scientific software quality control, and management of the software and data lifecycle. Details about the specific data treatment steps of the Gaia DR1 data products are found in Eyer et al. (2017) and are not repeated here. The variability content of Gaia DR1 focusses on a subsample of Cepheids and RR Lyrae stars around the South ecliptic pole, showcasing the performance of the Gaia photometry with respect to variable objects.

  18. Oxygen Generation System Laptop Bus Controller Flight Software

    NASA Technical Reports Server (NTRS)

    Rowe, Chad; Panter, Donna

    2009-01-01

    The Oxygen Generation System Laptop Bus Controller Flight Software was developed to allow the International Space Station (ISS) program to activate specific components of the Oxygen Generation System (OGS) to perform a checkout of key hardware operation in a microgravity environment, as well as to perform preventative maintenance operations of system valves during a long period of what would otherwise be hardware dormancy. The software provides direct connectivity to the OGS Firmware Controller with pre-programmed tasks operated by on-orbit astronauts to exercise OGS valves and motors. The software is used to manipulate the pump, separator, and valves to alleviate the concerns of hardware problems due to long-term inactivity and to allow for operational verification of microgravity-sensitive components early enough so that, if problems are found, they can be addressed before the hardware is required for operation on-orbit. The decision was made to use existing on-orbit IBM ThinkPad A31p laptops and MIL-STD-1553B interface cards as the hardware configuration. The software at the time of this reporting was developed and tested for use under the Windows 2000 Professional operating system to ensure compatibility with the existing on-orbit computer systems.

  19. Hardware support for software controlled fast reconfiguration of performance counters

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W.

    2013-06-18

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores a data value representing a time interval, and a timer element reads the data value, detects expiration of the time interval based on the data value, and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.
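    A minimal software model of the mechanism (not the patented hardware) shows the flow: when the stored interval expires, a state machine steps to the next pre-loaded configuration register and reprograms the counters. The event names and interval below are placeholders:

```python
import itertools
import time

CONFIG_REGISTERS = [                 # hypothetical pre-loaded configurations
    {"ctr0": "cache_misses", "ctr1": "branch_mispredicts"},
    {"ctr0": "flops",        "ctr1": "loads"},
    {"ctr0": "stall_cycles", "ctr1": "stores"},
]

class CounterBank:
    """Stand-in for the hardware performance counters."""
    def __init__(self):
        self.events = CONFIG_REGISTERS[0]

    def reconfigure(self, config):
        self.events = config          # in hardware: rewrite the event-select fields

def run(interval_s=0.01, rounds=6):
    bank = CounterBank()
    selector = itertools.cycle(CONFIG_REGISTERS)   # the state machine's selection order
    for _ in range(rounds):
        time.sleep(interval_s)        # stand-in for the hardware timer element
        bank.reconfigure(next(selector))
        print("now counting:", bank.events)

if __name__ == "__main__":
    run()
```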

  20. Hardware support for software controlled fast reconfiguration of performance counters

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W

    2013-09-24

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores a data value representing a time interval, and a timer element reads the data value, detects expiration of the time interval based on the data value, and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.

  1. JTAG-based remote configuration of FPGAs over optical fibers

    DOE PAGES

    Deng, B.; Xu, H.; Liu, C.; ...

    2015-01-28

    In this study, a remote FPGA-configuration method based on JTAG extension over optical fibers is presented. The method takes advantage of commercial components and ready-to-use software such as iMPACT and does not require any hardware or software development. The method combines the advantages of the slow remote JTAG configuration and the fast local flash memory configuration. The method has been verified successfully and used in the Demonstrator of Liquid-Argon Trigger Digitization Board (LTDB) for the ATLAS liquid argon calorimeter Phase-I trigger upgrade. All components on the FPGA side are verified to meet the radiation tolerance requirements.

  2. Unidata cyberinfrastructure in the cloud: A progress report

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Mohan

    2016-04-01

    Data services, software, and committed support are critical components of geosciences cyber-infrastructure that can help scientists address problems of unprecedented complexity, scale, and scope. Unidata is currently working on innovative ideas, new paradigms, and novel techniques to complement and extend its offerings. Our goal is to empower users so that they can tackle major, heretofore difficult problems. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms, and applications to align with the cloud-computing paradigm. To realize this vision, Unidata is working toward:
    * providing access to many types of data from the cloud (e.g., TDS, RAMADDA, and EDEX);
    * deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment, for consumption by anyone, on any device, from anywhere, at any time;
    * developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has packaged its applications as Docker containers (see the sketch after this list), making them easy to deploy; Docker helps create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools;
    * fostering partnerships with NOAA and public cloud vendors (e.g., Amazon) to harness their capabilities and resources for the benefit of the academic community.
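    As a sketch of the containerized-deployment idea, the Docker SDK for Python can stand up a THREDDS Data Server container programmatically; the image tag, port mapping, and volume paths below are assumptions rather than an official Unidata recipe:

```python
import docker

client = docker.from_env()
tds = client.containers.run(
    "unidata/thredds-docker:latest",            # assumed image tag
    name="tds",
    detach=True,
    ports={"8080/tcp": 8080},                   # container web port -> host port
    volumes={"/data/catalogs": {"bind": "/usr/local/tomcat/content/thredds",
                                "mode": "rw"}}, # assumed content directory
)
print(tds.status, tds.name)
# Tear down when finished:
# tds.stop(); tds.remove()
```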

  3. Development and Validation of an Interactive Liner Design and Impedance Modeling Tool

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Jones, Michael G.; Buckley, James L.

    2012-01-01

    The Interactive Liner Impedance Analysis and Design (ILIAD) tool is a LabVIEW-based software package used to design the composite surface impedance of a series of small-diameter quarter-wavelength resonators incorporating variable depth and sharp bends. Such structures are useful for packaging broadband acoustic liners into constrained spaces for turbofan engine noise control applications. ILIAD's graphical user interface allows the acoustic channel geometry to be drawn in the liner volume while the surface impedance and absorption coefficient calculations are updated in real-time. A one-dimensional transmission line model serves as the basis for the impedance calculation and can be applied to many liner configurations. Experimentally, tonal and broadband acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3000 Hz at 120 and 140 dB SPL. Normalized impedance spectra were measured using the Two-Microphone Method for the various combinations of channel configurations. Comparisons between the computed and measured impedances show excellent agreement for broadband liners comprised of multiple, variable-depth channels. The software can be used to design arrays of resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.
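    A minimal sketch of the kind of one-dimensional transmission-line calculation involved (not the ILIAD code itself): the normalized input impedance of a rigidly terminated channel of depth L is -j cot(kL); parallel channels combine as admittances, and an assumed facesheet resistance term is added before computing the normal-incidence absorption coefficient.

```python
import numpy as np

C0 = 343.0                       # speed of sound, m/s

def channel_impedance(freq_hz, depth_m):
    """Normalized input impedance of a closed (rigid-backed) channel: -j*cot(kL)."""
    k = 2.0 * np.pi * freq_hz / C0
    return -1j / np.tan(k * depth_m)

def surface_impedance(freq_hz, depths_m, facesheet_resistance=1.0):
    """Equal-area channels combine as admittances; add an assumed facesheet resistance."""
    admittance = np.mean([1.0 / channel_impedance(freq_hz, d) for d in depths_m], axis=0)
    return facesheet_resistance + 1.0 / admittance

def absorption_coefficient(z):
    """Normal-incidence absorption from the normalized surface impedance."""
    reflection = (z - 1.0) / (z + 1.0)
    return 1.0 - np.abs(reflection) ** 2

freqs = np.linspace(500.0, 3000.0, 6)                    # Hz, matching the test range
z = surface_impedance(freqs, depths_m=[0.04, 0.06, 0.08])
print(np.round(absorption_coefficient(z), 3))
```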

  4. A framework for stochastic simulations and visualization of biological electron-transfer dynamics

    NASA Astrophysics Data System (ADS)

    Nakano, C. Masato; Byun, Hye Suk; Ma, Heng; Wei, Tao; El-Naggar, Mohamed Y.

    2015-08-01

    Electron transfer (ET) dictates a wide variety of energy-conversion processes in biological systems. Visualizing ET dynamics could provide key insight into understanding and possibly controlling these processes. We present a computational framework named VizBET to visualize biological ET dynamics, using an outer-membrane Mtr-Omc cytochrome complex in Shewanella oneidensis MR-1 as an example. Starting from X-ray crystal structures of the constituent cytochromes, molecular dynamics simulations are combined with homology modeling, protein docking, and binding free energy computations to sample the configuration of the complex as well as the change of the free energy associated with ET. This information, along with quantum-mechanical calculations of the electronic coupling, provides inputs to kinetic Monte Carlo (KMC) simulations of ET dynamics in a network of heme groups within the complex. Visualization of the KMC simulation results has been implemented as a plugin to the Visual Molecular Dynamics (VMD) software. VizBET has been used to reveal the nature of ET dynamics associated with novel nonequilibrium phase transitions in a candidate configuration of the Mtr-Omc complex due to electron-electron interactions.
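    The kinetic Monte Carlo step can be illustrated with a Gillespie-type hop simulation of a single electron on a linear chain of heme sites; the rate constants below are placeholders, not couplings or free energies from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

N_SITES = 10
K_FWD = 1.0e9        # s^-1, assumed forward hopping rate
K_BWD = 2.0e8        # s^-1, assumed backward hopping rate

def simulate_transits(n_events=100_000):
    """Return transit times of electrons hopping from site 0 to the last site."""
    site, t, transits = 0, 0.0, []
    for _ in range(n_events):
        rates, moves = [], []
        if site < N_SITES - 1:
            rates.append(K_FWD)
            moves.append(+1)
        if site > 0:
            rates.append(K_BWD)
            moves.append(-1)
        total = sum(rates)
        t += rng.exponential(1.0 / total)                      # waiting time
        site += moves[rng.choice(len(moves), p=np.array(rates) / total)]
        if site == N_SITES - 1:                                # electron delivered
            transits.append(t)
            site, t = 0, 0.0                                   # inject the next electron
    return np.array(transits)

times = simulate_transits()
print(f"{times.size} transits, mean transit time {times.mean():.2e} s")
```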

  5. Thermal Analysis of ISS Service Module Active TCS

    NASA Technical Reports Server (NTRS)

    Altov, Vladimir V.; Zaletaev, Sergey V.; Belyavskiy, Evgeniy P.

    2000-01-01

    The ISS Service Module (SM) mission is to begin in July 2000. Verification of the design thermal requirements relies largely on thermal analysis. The thermal analysis is a difficult problem because of the large number of ISS configurations to be investigated and the variety of orbital environments. In addition, the ISS structure has articulating parts such as solar arrays and radiators. The presence of articulating parts greatly increases computation times and requires a careful approach to organizing the calculations. The varying geometry requires the view factors to be calculated several times during the orbit, whereas for a static geometry they need to be calculated only once. In this paper we consider the thermal mathematical model of the SM, which includes the TCS and structural thermal models, and discuss the results of calculations for ISS configurations 1R and 9Al. The analysis is based on solving the nodal heat balance equations for the ISS structure by the Kutta-Merson method and on analytical solutions of the heat transfer equations for TCS units. The computations were performed using the thermal software TERM [1,2], which is briefly described.
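    The nodal heat balance approach can be sketched for a toy three-node model integrated with an adaptive Runge-Kutta scheme (SciPy's RK45 standing in for the Kutta-Merson method); all capacities, couplings, and loads are illustrative, not Service Module values:

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA = 5.670e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4
C = np.array([5.0e4, 8.0e4, 3.0e4])  # nodal heat capacities, J/K
G = np.array([[0.0, 12.0, 0.0],      # conductive couplings, W/K (symmetric)
              [12.0, 0.0, 8.0],
              [0.0, 8.0, 0.0]])
EPS_A = np.array([1.2, 0.0, 2.5])    # emissivity * radiating area, m^2
Q_INT = np.array([400.0, 900.0, 150.0])   # internal + absorbed heat loads, W

def heat_balance(t, T):
    """Nodal heat balance: dT_i/dt = (Q_i + sum_j G_ij (T_j - T_i) - eps_i A_i sigma T_i^4) / C_i."""
    conduction = G @ T - G.sum(axis=1) * T
    radiation = EPS_A * SIGMA * T**4
    return (Q_INT + conduction - radiation) / C

sol = solve_ivp(heat_balance, (0.0, 3600.0 * 10), y0=[290.0, 295.0, 280.0],
                method="RK45", rtol=1e-6)
print("final nodal temperatures [K]:", np.round(sol.y[:, -1], 1))
```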

  6. Replicated computational results (RCR) report for "BLIS: A framework for rapidly instantiating BLAS functionality"

    DOE PAGES

    Willenbring, James Michael

    2015-06-03

    “BLIS: A Framework for Rapidly Instantiating BLAS Functionality” includes single-platform BLIS performance results for both level-2 and level-3 operations that are competitive with OpenBLAS, ATLAS, and Intel MKL. A detailed description of the configuration used to generate the performance results was provided to the reviewer by the authors. All the software components used in the comparison were reinstalled, and new performance results were generated and compared to the original results. After completing this process, the published results are deemed replicable by the reviewer.

  7. Remote manipulator system flexibility analysis program: Mission planning, mission analysis, and software formulation

    NASA Technical Reports Server (NTRS)

    Kumar, L.

    1978-01-01

    A computer program is described for calculating the flexibility coefficients of the remote manipulator system as arm design changes are made. The coefficients obtained are required as input for a second program, which reduces the number of payload deployment and retrieval system simulation runs required to simulate the various remote manipulator system maneuvers. The second program calculates end-effector flexibility and joint flexibility terms for the torque model of each joint for any arbitrary configuration. Listings of both programs are included in the appendix.

  8. System For Research On Multiple-Arm Robots

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad; Tso, Kam S.; Hayward, Vincent

    1991-01-01

    The Kali system of computer programs and equipment provides an environment for research on distributed programming and distributed control of coordinated multiple-arm robots. It is suitable for telerobotics research involving sensing and execution of low-level tasks. The software and the hardware configuration are designed to be flexible, so that the system can easily be modified to test various concepts in the control and programming of robots, including multiple-arm control, redundant-arm control, shared control, traded control, force control, force/position hybrid control, design and integration of sensors, teleoperation, task-space description and control, methods of adaptive control, control of flexible arms, and human factors.

  9. Smartphone Microscopy of Parasite Eggs Accumulated into a Single Field of View.

    PubMed

    Sowerby, Stephen J; Crump, John A; Johnstone, Maree C; Krause, Kurt L; Hill, Philip C

    2016-01-01

    A Nokia Lumia 1020 cellular phone (Microsoft Corp., Auckland, New Zealand) was configured to image the ova of Ascaris lumbricoides converged into a single field of view but lying on different focal planes. The phone was programmed to acquire images at different distances and, using public-domain computer software, composite images were created that brought all the eggs into sharp focus. This proof of concept informs a framework for field-deployable, point-of-care monitoring of soil-transmitted helminths. © The American Society of Tropical Medicine and Hygiene.
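    A rough sketch of one way to build such a composite (not necessarily the published method): for each pixel, keep the frame whose local Laplacian response is strongest. The file names below are hypothetical.

```python
import cv2
import numpy as np

def focus_stack(images):
    """images: list of equal-sized BGR frames taken at different focus distances."""
    grays = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in images]
    # Local sharpness: absolute Laplacian, smoothed so decisions are regional.
    sharpness = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
                 for g in grays]
    best = np.argmax(np.stack(sharpness), axis=0)          # per-pixel sharpest frame
    stack = np.stack(images)                               # (n, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                         # composite image

# Hypothetical focal-stack file names:
frames = [cv2.imread(f"focus_{i:02d}.jpg") for i in range(10)]
composite = focus_stack(frames)
cv2.imwrite("composite.jpg", composite)
```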

  10. Development of advanced avionics systems applicable to terminal-configured vehicles

    NASA Technical Reports Server (NTRS)

    Heimbold, R. L.; Lee, H. P.; Leffler, M. F.

    1980-01-01

    A technique to add a time constraint to the automatic descent feature of the existing L-1011 aircraft Flight Management System (FMS) was developed. Software modifications were incorporated in the FMS computer program, and the results were checked by lab simulation and on a series of eleven test flights. An arrival time dispersion (2 sigma) of 19 seconds was achieved. The 4-D descent technique can be integrated with the time-based metering method of air traffic control. Substantial reductions in delays at today's busy airports should result.

  11. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  12. First year of ALMA site software deployment: where everything comes together

    NASA Astrophysics Data System (ADS)

    González, Víctor; Mora, Matias; Araya, Rodrigo; Arredondo, Diego; Bartsch, Marcelo; Burgos, Pablo; Ibsen, Jorge; Reveco, Johnny; Sáez, Norman; Schemrl, Anton; Sepulveda, Jorge; Shen, Tzu-Chiang; Soto, Rubén; Troncoso, Nicolás; Zambrano, Mauricio; Barriga, Nicolás; Glendenning, Brian; Raffi, Gianni; Kern, Jeff

    2010-07-01

    Starting in 2009, the ALMA project entered one of the most exciting phases of construction: the first antenna from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself on the front line of the project's software deployment and integration effort. Among the group's main responsibilities are the deployment, configuration and support of the observation systems, in addition to infrastructure administration, all of which needs to be done in close coordination with the development groups in Europe, North America and Japan. Software support has been the primary point of interaction with the current users (mainly scientists, operators and hardware engineers), as the software is normally the most visible part of the system. During this first year of work with the production hardware, three consecutive software releases have been deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at 5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the experience of this 15-person group as part of the construction team at the ALMA site, working together with the Computing IPT, covering the achievements and the problems overcome during this period. It explores the excellent results of teamwork, as well as some of the troubles that such a complex and geographically distributed project can run into. Finally, it addresses the challenges still to come with the transition to the ALMA operations plan.

  13. Alloy Design Workbench-Surface Modeling Package Developed

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.; Noebe, Ronald D.; Bozzolo, Guillermo H.; Good, Brian S.; Daugherty, Elaine S.

    2003-01-01

    NASA Glenn Research Center's Computational Materials Group has integrated a graphical user interface with in-house-developed surface modeling capabilities, with the goal of using computationally efficient atomistic simulations to aid the development of advanced aerospace materials, through the modeling of alloy surfaces, surface alloys, and segregation. The software is also ideal for modeling nanomaterials, since surface and interfacial effects can dominate material behavior and properties at this level. Through the combination of an accurate atomistic surface modeling methodology and an efficient computational engine, it is now possible to directly model these types of surface phenomenon and metallic nanostructures without a supercomputer. Fulfilling a High Operating Temperature Propulsion Components (HOTPC) project level-I milestone, a graphical user interface was created for a suite of quantum approximate atomistic materials modeling Fortran programs developed at Glenn. The resulting "Alloy Design Workbench-Surface Modeling Package" (ADW-SMP) is the combination of proven quantum approximate Bozzolo-Ferrante-Smith (BFS) algorithms (refs. 1 and 2) with a productivity-enhancing graphical front end. Written in the portable, platform independent Java programming language, the graphical user interface calls on extensively tested Fortran programs running in the background for the detailed computational tasks. Designed to run on desktop computers, the package has been deployed on PC, Mac, and SGI computer systems. The graphical user interface integrates two modes of computational materials exploration. One mode uses Monte Carlo simulations to determine lowest energy equilibrium configurations. The second approach is an interactive "what if" comparison of atomic configuration energies, designed to provide real-time insight into the underlying drivers of alloying processes.
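    The Monte Carlo mode can be illustrated with a generic Metropolis search for a low-energy arrangement of a toy two-species lattice (an illustration of the sampling scheme only, not the BFS energy method):

```python
import numpy as np

rng = np.random.default_rng(42)
KB_T = 0.05                       # reduced temperature, arbitrary units
J = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): -0.2, (1, 0): -0.2}   # toy bond energies

def energy(config):
    """Sum of nearest-neighbor bond energies along a 1-D chain."""
    return sum(J[(config[i], config[i + 1])] for i in range(len(config) - 1))

def metropolis(config, steps=20_000):
    """Swap atoms; accept downhill moves always, uphill moves with Boltzmann probability."""
    config = config.copy()
    e = energy(config)
    for _ in range(steps):
        i, j = rng.integers(0, len(config), size=2)
        trial = config.copy()
        trial[i], trial[j] = trial[j], trial[i]
        e_trial = energy(trial)
        if e_trial <= e or rng.random() < np.exp(-(e_trial - e) / KB_T):
            config, e = trial, e_trial
    return config, e

start = rng.integers(0, 2, size=40)            # random A/B site occupation
final, e_final = metropolis(start)
print("low-energy configuration:", final, "energy:", e_final)
```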

  14. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  15. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines.

    PubMed

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis; Krampis, Konstantinos

    2017-08-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running the pipelines is as simple as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. © The Authors 2017. Published by Oxford University Press.
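
    A minimal sketch of what such a "meta-script" can look like is given below, assuming the container exposes a Galaxy instance that is driven through the BioBlend library; the image name, port mapping, API key, workflow name, and input file are illustrative placeholders rather than the actual Bio-Docklets values.

      # Sketch: start a pipeline container, then drive the Galaxy instance
      # inside it through BioBlend. Names and keys below are placeholders.
      import subprocess
      import time
      from bioblend.galaxy import GalaxyInstance

      IMAGE = "example/bio-docklet-rnaseq"   # hypothetical container image
      GALAXY_URL = "http://localhost:8080"
      API_KEY = "admin"                      # placeholder API key

      # 1. Launch the container (assumes the image serves Galaxy on port 80).
      subprocess.run(["docker", "run", "-d", "-p", "8080:80", IMAGE], check=True)
      time.sleep(60)                         # crude wait for Galaxy to come up

      # 2. Connect to the Galaxy API inside the container.
      gi = GalaxyInstance(url=GALAXY_URL, key=API_KEY)
      history = gi.histories.create_history(name="ngs-run")

      # 3. Upload the single input and invoke the preconfigured workflow.
      upload = gi.tools.upload_file("reads.fastq.gz", history["id"])
      dataset_id = upload["outputs"][0]["id"]
      workflow = gi.workflows.get_workflows(name="rnaseq-pipeline")[0]
      gi.workflows.invoke_workflow(
          workflow["id"],
          inputs={"0": {"src": "hda", "id": dataset_id}},
          history_id=history["id"],
      )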

  16. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines

    PubMed Central

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis

    2017-01-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running the pipelines is as simple as running a single bioinformatics tool. This is achieved using a “meta-script” that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. PMID:28854616

  17. Computational fluid dynamic evaluation of the side-to-side anastomosis for arteriovenous fistula.

    PubMed

    Hull, Jeffrey E; Balakin, Boris V; Kellerman, Brad M; Wrolstad, David K

    2013-07-01

    The goal of this research was to compare side-to-side (STS) and end-to-side (ETS) anastomoses in a computer model of the arteriovenous fistula with computational fluid dynamic analysis. A matrix of 17 computer arteriovenous fistula models (SolidWorks, Dassault Systèmes, France) of artery-vein pairs (3-mm-diameter artery + 3-mm-diameter vein and 4-mm-diameter artery + 6-mm-diameter vein elliptical anastomoses) in STS, 45° ETS, and 90° ETS configurations with cross-sectional areas (CSAs) of 3.5 to 18.8 mm² were evaluated with computational fluid dynamic software (STAR-CCM+; CD-adapco, Melville, NY) in simulations at defined flow rates from 600 to 1200 mL/min and mean arterial pressures of 50 to 140 mm Hg. Models and configurations were evaluated for pressure drop across the anastomosis, arterial inflow, venous outflow, arterial outflow, velocity vector, and wall shear stress (WSS) profile. Pressure drop across the anastomosis was inversely proportional to anastomotic CSA and to venous outflow and was proportional to arterial inflow. Pressure drop was greater in 3 + 3 models than in 4 + 6 STS models; 90° ETS configurations had the lowest pressure drops and were nearly identical, whereas 45° ETS configurations had the highest pressure drops. Venous outflow in the 4 + 6 model in STS configurations, evaluated at 100 mm Hg arterial inflow pressure, was 390, 592, 610, and 886 mL/min in anastomotic CSAs of 3.5, 5.3, 7.1, and 18.8 mm², respectively, and was similar in 90° ETS (609 and 908 mL/min) and lower in 45° ETS (534 and 562 mL/min) configurations at CSAs of 5.3 and 18.8 mm². The mean increase in venous outflow was 69 mL/min (range, -59 to 134) between 3 + 3 and 4 + 6 models at 100 mm Hg arterial inflow. The most uniform WSS profile occurs in STS anastomoses followed by 45° ETS and then 90° ETS anastomoses. The STS and 90° ETS anastomoses have high venous outflow and a tendency toward reversed arterial outflow. The 45° ETS anastomosis has reduced venous outflow but resists reversed arterial outflow. The STS anastomosis has more uniform WSS characteristics compared with the 45° and 90° ETS anastomoses. Copyright © 2013 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  18. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

    PubMed Central

    Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm³ brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from GitHub (https://github.com/wanderine/BROCCOLI/). PMID:24672471
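
    The second-level permutation test mentioned above can be illustrated with a plain NumPy sketch (not BROCCOLI's OpenCL code): subject contrast maps are sign-flipped, a maximum test statistic is recorded per permutation, and the resulting null distribution gives familywise-error-corrected p-values. The data shapes and permutation count below are arbitrary examples.

      # Sign-flipping permutation test with a max-statistic null distribution.
      import numpy as np

      def one_sample_t(maps):
          """t-statistic per voxel for a one-sample test across subjects."""
          n = maps.shape[0]
          return maps.mean(axis=0) / (maps.std(axis=0, ddof=1) / np.sqrt(n))

      def permutation_test(maps, n_perm=10000, seed=0):
          rng = np.random.default_rng(seed)
          n_subjects = maps.shape[0]
          observed = one_sample_t(maps)
          max_null = np.empty(n_perm)
          for p in range(n_perm):
              signs = rng.choice([-1.0, 1.0], size=n_subjects)[:, None]
              max_null[p] = one_sample_t(maps * signs).max()
          # FWE-corrected p-value per voxel from the max-statistic distribution.
          p_fwe = (1 + (max_null[None, :] >= observed[:, None]).sum(axis=1)) / (n_perm + 1)
          return observed, p_fwe

      # Example with synthetic data: 20 subjects, 5000 voxels, 1000 permutations.
      maps = np.random.default_rng(1).normal(size=(20, 5000))
      t_map, p_fwe = permutation_test(maps, n_perm=1000)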

  19. 76 FR 5503 - Airworthiness Directives; The Boeing Company Model 777-200 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-01

    ... software, as applicable; and making a change to the cabin services system (CSS) configuration database and... software, as applicable; and make a change to the cabin services system (CSS) configuration database and... 5 p.m., Monday through Friday, except Federal holidays. For service information identified in this...

  20. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems with this method, it is necessary to use high-performance computing clusters, data storage systems, and other often expensive and complex computer systems. Access to such resources is often limited and unstable and is accompanied by various administrative problems. In addition, the variety of software and settings across different computing clusters sometimes prevents researchers from using a unified program code; the code has to be adapted to each configuration of the computing complex. The practical experience of the authors has shown that the creation of a special control system for such computations, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the productivity of scientific research. In the current paper, we present the principal idea of such a system and justify its efficiency.
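
    The paper does not publish its implementation, but the kind of remote-control layer it argues for can be sketched as follows, assuming SSH access to the cluster, a SLURM scheduler, and the paramiko library; the host name, job script, and result path are hypothetical.

      # Minimal sketch: submit and monitor a PFC simulation on a remote
      # cluster over SSH. The SLURM commands and file names are assumptions.
      import re
      import time
      import paramiko

      def submit_and_watch(host, user, job_script, poll_s=60):
          client = paramiko.SSHClient()
          client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          client.connect(host, username=user)
          try:
              # Submit the job and extract the SLURM job id from the reply.
              _, out, _ = client.exec_command(f"sbatch {job_script}")
              job_id = re.search(r"\d+", out.read().decode()).group()
              # Poll the queue until the job disappears from it.
              while True:
                  _, out, _ = client.exec_command(f"squeue -h -j {job_id}")
                  if not out.read().decode().strip():
                      break
                  time.sleep(poll_s)
              # Fetch the result file produced by the simulation.
              sftp = client.open_sftp()
              sftp.get("results/density_field.dat", "density_field.dat")
              sftp.close()
          finally:
              client.close()

      # submit_and_watch("cluster.example.org", "researcher", "pfc_job.sh")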

  1. Low-Reynolds Number Aerodynamics of an 8.9 Percent Scale Semispan Swept Wing for Assessment of Icing Effects

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Woodard, Brian S.; Diebold, Jeffrey M.; Moens, Frederic

    2017-01-01

    Aerodynamic assessment of icing effects on swept wings is an important component of a larger effort to improve three-dimensional icing simulation capabilities. An understanding of ice-shape geometric fidelity and Reynolds and Mach number effects on the iced-wing aerodynamics is needed to guide the development and validation of ice-accretion simulation tools. To this end, wind-tunnel testing and computational flow simulations were carried out for an 8.9%-scale semispan wing based upon the Common Research Model airplane configuration. The wind-tunnel testing was conducted at the Wichita State University 7 ft × 10 ft Beech wind tunnel at Reynolds numbers of 0.8×10⁶ to 2.4×10⁶ and corresponding Mach numbers of 0.09 to 0.27. This paper presents the results of initial studies investigating the model mounting configuration, clean-wing aerodynamics, and effects of artificial ice roughness. Four different model mounting configurations were considered, and a circular splitter plate combined with a streamlined shroud was selected as the baseline geometry for the remainder of the experiments and computational simulations. A detailed study of the clean-wing aerodynamics and stall characteristics was made. In all cases, the flow over the outboard sections of the wing separated as the wing stalled, with the inboard sections near the root maintaining attached flow. Computational flow simulations were carried out with the ONERA elsA software, which solves the compressible, three-dimensional RANS equations. The computations were carried out either in fully turbulent mode or with natural transition. Better agreement between the experimental and computational results was obtained when considering computations with free transition compared to turbulent solutions. These results indicate that the experimental evolution of the clean-wing performance coefficients was due to the effect of three-dimensional transition location and that this must be taken into account for future data analysis. This research also confirmed that artificial ice roughness created with rapid-prototype manufacturing methods can generate aerodynamic performance effects comparable to grit roughness of equivalent size when proper care is exercised in design and installation. The conclusions of this combined experimental and computational study contributed directly to the successful implementation of follow-on test campaigns with numerous artificial ice-shape configurations for this 8.9%-scale model.

  2. Low-Reynolds Number Aerodynamics of an 8.9 Percent Scale Semispan Swept Wing for Assessment of Icing Effects

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Woodard, Brian S.; Diebold, Jeffrey M.; Moens, Frederic

    2017-01-01

    Aerodynamic assessment of icing effects on swept wings is an important component of a larger effort to improve three-dimensional icing simulation capabilities. An understanding of ice-shape geometric fidelity and Reynolds and Mach number effects on the iced-wing aerodynamics is needed to guide the development and validation of ice-accretion simulation tools. To this end, wind-tunnel testing and computational flow simulations were carried out for an 8.9 percent-scale semispan wing based upon the Common Research Model airplane configuration. The wind-tunnel testing was conducted at the Wichita State University 7 by 10 ft Beech wind tunnel at Reynolds numbers of 0.8×10⁶ to 2.4×10⁶ and corresponding Mach numbers of 0.09 to 0.27. This paper presents the results of initial studies investigating the model mounting configuration, clean-wing aerodynamics, and effects of artificial ice roughness. Four different model mounting configurations were considered, and a circular splitter plate combined with a streamlined shroud was selected as the baseline geometry for the remainder of the experiments and computational simulations. A detailed study of the clean-wing aerodynamics and stall characteristics was made. In all cases, the flow over the outboard sections of the wing separated as the wing stalled, with the inboard sections near the root maintaining attached flow. Computational flow simulations were carried out with the ONERA elsA software, which solves the compressible, three-dimensional RANS equations. The computations were carried out either in fully turbulent mode or with natural transition. Better agreement between the experimental and computational results was obtained when considering computations with free transition compared to turbulent solutions. These results indicate that the experimental evolution of the clean-wing performance coefficients was due to the effect of three-dimensional transition location and that this must be taken into account for future data analysis. This research also confirmed that artificial ice roughness created with rapid-prototype manufacturing methods can generate aerodynamic performance effects comparable to grit roughness of equivalent size when proper care is exercised in design and installation. The conclusions of this combined experimental and computational study contributed directly to the successful implementation of follow-on test campaigns with numerous artificial ice-shape configurations for this 8.9 percent-scale model.

  3. Comparison of effects of different screw materials in the triangle fixation of femoral neck fractures.

    PubMed

    Gok, Kadir; Inal, Sermet; Gok, Arif; Gulbandilar, Eyyup

    2017-05-01

    In this study, the biomechanical behaviors of three different screw materials (stainless steel, titanium, and cobalt-chromium) were analyzed for triangle fixation under axial loading in a femoral neck fracture, and the question of which material is best was investigated. A point cloud obtained by scanning a human femoral model with a three-dimensional (3D) scanner was converted to a 3D femoral model in Geomagic Studio software. The femoral neck fracture was modeled in SolidWorks software for the triangle configuration only, and computer-aided numerical analyses of the three different materials were carried out with Ansys Workbench finite element analysis (FEA) software. The loading, boundary conditions, and material properties were prepared for the FEA, and von Mises stress values at the upper and lower proximity of the femur and on the screws were calculated. At the end of the numerical analyses, titanium was found to be the most advantageous screw material because it creates the minimum stress at the upper and lower proximity of the fracture line.
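
    For reference, the von Mises (equivalent) stress reported by such FEA packages follows from the six stress-tensor components; a small NumPy version of the standard formula is sketched below (it is not taken from the study's Ansys Workbench setup).

      # Equivalent (von Mises) stress from the six stress components.
      import numpy as np

      def von_mises(sx, sy, sz, txy, tyz, tzx):
          """von Mises stress; units follow whatever units the inputs use."""
          return np.sqrt(
              0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
              + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)
          )

      # Example: a uniaxial stress of 100 MPa gives a von Mises stress of 100 MPa.
      print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))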

  4. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    NASA Technical Reports Server (NTRS)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

    A progress report is presented on a program to upgrade the existing NASA Deep Space Network in terms of a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system and network level synthesis and testing, and system verification and validation. The software has so far been implemented to a 65 percent completion level, and the methodology being used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has been found to provide effective techniques.

  5. JACOB: an enterprise framework for computational chemistry.

    PubMed

    Waller, Mark P; Dresselhaus, Thomas; Yang, Jack

    2013-06-15

    Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.

  6. [Parallel virtual reality visualization of extremely large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated, and cut interactively and conveniently through a control panel built on the virtual reality modeling language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can serve as a good assistant in making clinical diagnoses.
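
    The Maximum Intensity Projection kernel parallelized in the paper reduces, in serial form, to a maximum over the ray direction, and per-node partial projections can be merged with a second maximum; the NumPy sketch below illustrates this idea under the assumption of axis-aligned projection and an even slab decomposition, not the authors' actual cluster code.

      # Maximum Intensity Projection (MIP) and its slab-wise decomposition.
      import numpy as np

      def mip(volume, axis=0):
          """Maximum intensity projection of a 3-D volume along one axis."""
          return volume.max(axis=axis)

      def mip_distributed(slabs, axis=0):
          """Combine per-node partial projections with an element-wise maximum."""
          return np.maximum.reduce([mip(s, axis) for s in slabs])

      # Example: a 256^3 synthetic volume split into 4 slabs along one axis.
      volume = np.random.default_rng(0).random((256, 256, 256), dtype=np.float32)
      slabs = np.array_split(volume, 4, axis=0)
      assert np.array_equal(mip(volume), mip_distributed(slabs))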

  7. Pulse Code Modulation (PCM) data storage and analysis using a microcomputer

    NASA Technical Reports Server (NTRS)

    Massey, D. E.

    1986-01-01

    A PCM storage device/data analyzer is described. This instrument is a peripheral plug-in board especially built to enable a personal computer to store and analyze data from a PCM source. This board and custom-written software turn a computer into a snapshot PCM decommutator. The instrument takes in and stores many hundreds or thousands of PCM telemetry data frames and then sifts through them repeatedly. The data can be converted to any number base and displayed, examined for bit dropouts or changes in particular words or frames, graphically plotted, or statistically analyzed. This device was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing.
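
    A sketch of the kind of snapshot analysis described, written here in Python rather than the original custom software, is shown below: captured frames are scanned for bit changes in a chosen word position by XOR-ing consecutive values. The frame length and word index are assumed for illustration.

      # Scan one word position across stored PCM frames for bit changes.
      FRAME_WORDS = 128          # assumed minor-frame length in 16-bit words

      def changed_bits(frames, word_index):
          """Yield (frame number, XOR of the word with the previous frame)."""
          previous = frames[0][word_index]
          for n, frame in enumerate(frames[1:], start=1):
              current = frame[word_index]
              diff = current ^ previous          # bits that toggled
              if diff:
                  yield n, diff
              previous = current

      # Example: three captured frames with a single-bit toggle in word 5.
      frames = [[0] * FRAME_WORDS for _ in range(3)]
      frames[2][5] = 0x0010
      for frame_number, diff in changed_bits(frames, word_index=5):
          print(f"frame {frame_number}: word 5 changed, XOR mask {diff:#06x}")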

  8. Project WISH: The Emerald City

    NASA Technical Reports Server (NTRS)

    Oz, Hayrani; Slonksnes, Linda (Editor); Rogers, James W. (Editor); Sherer, Scott E. (Editor); Strosky, Michelle A. (Editor); Szmerekovsky, Andrew G. (Editor); Klupar, G. Joseph (Editor)

    1990-01-01

    The preliminary design of a permanently manned autonomous space oasis (PEMASO), including its pertinent subsystems, was performed during the 1990 Winter and Spring quarters. The purpose for the space oasis was defined and the preliminary design work was started with emphasis placed on the study of orbital mechanics, power systems and propulsion systems. A rotating torus was selected as the preliminary configuration, and overall size, mass and location of some subsystems within the station were addressed. Computer software packages were utilized to determine station transfer parameters and thus the preliminary propulsion requirements. Power and propulsion systems were researched to determine feasible configurations and many conventional schemes were ruled out. Vehicle dynamics and control, mechanical and life support systems were also studied. For each subsystem studied, the next step in the design process to be performed during the continuation of the project was also addressed.

  9. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
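
    A minimal sketch of the head-to-platform coupling described above is given below; the axis limits and the returned command dictionary are assumptions for illustration and not the VIEW project's actual control protocol.

      # Map tracked head orientation to pan/tilt/roll platform commands,
      # clamped to assumed mechanical limits of the camera platform.
      def head_to_platform(yaw_deg, pitch_deg, roll_deg,
                           pan_limit=170.0, tilt_limit=90.0, roll_limit=45.0):
          clamp = lambda value, limit: max(-limit, min(limit, value))
          return {
              "pan": clamp(yaw_deg, pan_limit),
              "tilt": clamp(pitch_deg, tilt_limit),
              "roll": clamp(roll_deg, roll_limit),
          }

      # Example: head turned 30 deg right and pitched beyond the tilt limit.
      print(head_to_platform(30.0, 100.0, 0.0))
      # {'pan': 30.0, 'tilt': 90.0, 'roll': 0.0}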

  10. Cloud Computing Value Chains: Understanding Businesses and Value Creation in the Cloud

    NASA Astrophysics Data System (ADS)

    Mohammed, Ashraf Bany; Altmann, Jörn; Hwang, Junseok

    Based on the promising developments in Cloud Computing technologies in recent years, commercial computing resource services (e.g., Amazon EC2) or software-as-a-service offerings (e.g., Salesforce.com) came into existence. However, the relatively weak business exploitation, participation, and adoption of other Cloud Computing services remain the main challenges. The vague value structures seem to be hindering business adoption and the creation of sustainable business models around its technology. Using an extensive analysis of existing Cloud business models, Cloud services, stakeholder relations, market configurations, and value structures, this chapter develops a reference model for value chains in the Cloud. Although this model is theoretically based on Porter's value chain theory, the proposed Cloud value chain model is upgraded to fit the diversity of business service scenarios in the Cloud computing markets. Using this model, different service scenarios are explained. Our findings suggest new services, business opportunities, and policy practices for realizing more adoption and value creation paths in the Cloud.

  11. Low Power, Low Mass, Modular, Multi-band Software-defined Radios

    NASA Technical Reports Server (NTRS)

    Haskins, Christopher B. (Inventor); Millard, Wesley P. (Inventor)

    2013-01-01

    Methods and systems to implement and operate software-defined radios (SDRs). An SDR may be configured to perform a combination of fractional and integer frequency synthesis and direct digital synthesis under control of a digital signal processor, which may provide a set of relatively agile, flexible, low-noise, and low spurious, timing and frequency conversion signals, and which may be used to maintain a transmit path coherent with a receive path. Frequency synthesis may include dithering to provide additional precision. The SDR may include task-specific software-configurable systems to perform tasks in accordance with software-defined parameters or personalities. The SDR may include a hardware interface system to control hardware components, and a host interface system to provide an interface to the SDR with respect to a host system. The SDR may be configured for one or more of communications, navigation, radio science, and sensors.
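
    The direct digital synthesis element mentioned above is conventionally built from a phase accumulator addressing a sine lookup table; the sketch below illustrates that standard structure with assumed bit widths and clock rate, and is not the patented implementation.

      # Direct digital synthesis (DDS): phase accumulator + sine lookup table.
      import math

      ACC_BITS = 32                  # phase accumulator width (assumed)
      LUT_BITS = 10                  # lookup-table address width (assumed)
      LUT = [math.sin(2 * math.pi * i / (1 << LUT_BITS)) for i in range(1 << LUT_BITS)]

      def dds_samples(f_out, f_clk, n_samples):
          """Generate n_samples of a sine at f_out Hz from an f_clk Hz clock."""
          tuning_word = round(f_out * (1 << ACC_BITS) / f_clk)
          phase = 0
          for _ in range(n_samples):
              yield LUT[phase >> (ACC_BITS - LUT_BITS)]   # top bits address the LUT
              phase = (phase + tuning_word) & ((1 << ACC_BITS) - 1)

      # Example: 1 kHz tone synthesized from a 1 MHz sample clock.
      tone = list(dds_samples(1e3, 1e6, 1000))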

  12. Investigation of advancing front method for generating unstructured grid

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1992-01-01

    The advancing front technique is used to generate unstructured grids about simple aerodynamic geometries. Unstructured grids are generated using the VGRID2D and VGRID3D software. Specific problems considered are a NACA 0012 airfoil, a biplane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. A structured grid is generated using the GRIDGEN software and inviscid solutions are computed using the CFL3D flow solver. The results obtained with the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a finer grid distribution was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution; but when the total number of points needed to obtain the same results was compared, the structured grid required more grid points.

  13. 48 CFR 227.7202-4 - Contract clause.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Software and Computer Software Documentation 227.7202-4 Contract clause. A specific contract clause governing the Government's rights in commercial computer software or commercial computer software..., release, perform, display, or disclose computer software or computer software documentation shall be...

  14. 48 CFR 227.7202-4 - Contract clause.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Software and Computer Software Documentation 227.7202-4 Contract clause. A specific contract clause governing the Government's rights in commercial computer software or commercial computer software..., release, perform, display, or disclose computer software or computer software documentation shall be...

  15. 48 CFR 227.7202-4 - Contract clause.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Software and Computer Software Documentation 227.7202-4 Contract clause. A specific contract clause governing the Government's rights in commercial computer software or commercial computer software..., release, perform, display, or disclose computer software or computer software documentation shall be...

  16. 48 CFR 227.7202-4 - Contract clause.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Software and Computer Software Documentation 227.7202-4 Contract clause. A specific contract clause governing the Government's rights in commercial computer software or commercial computer software..., release, perform, display, or disclose computer software or computer software documentation shall be...

  17. 48 CFR 227.7202-4 - Contract clause.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Software and Computer Software Documentation 227.7202-4 Contract clause. A specific contract clause governing the Government's rights in commercial computer software or commercial computer software..., release, perform, display, or disclose computer software or computer software documentation shall be...

  18. Improving INPE'S balloon ground facilities for operation of the protoMIRAX experiment

    NASA Astrophysics Data System (ADS)

    Mattiello-Francisco, F.; Rinke, E.; Fernandes, J. O.; Cardoso, L.; Cardoso, P.; Braga, J.

    2014-10-01

    The system requirements for reusing the scientific balloon ground facilities available at INPE posed a challenge to the ground system engineers involved in the protoMIRAX X-ray astronomy experiment. A significant software update effort was required for the balloon ground station. Since protoMIRAX is a pathfinder for the MIRAX satellite mission, a ground infrastructure compatible with INPE's satellite operation approach is useful and highly recommended for controlling and monitoring the experiment during balloon flights. This approach makes use of the SATellite Control System (SATCS), a software-based architecture developed at INPE for satellite commanding and monitoring. SATCS complies with the particular operational requirements of different satellites by using several customized object-oriented software elements and frameworks. We present the ground solution designed for protoMIRAX operation, the Control and Reception System (CRS). A new server computer, properly configured with Ethernet, extends the existing ground station facilities with a switch, converters, and new software (OPS/SERVER) so that the available uplink and downlink channels are mapped to the TCP/IP gateways required by SATCS. Currently, the CRS development is customizing SATCS for the kernel functions of protoMIRAX command and telemetry processing. Design patterns, component-based libraries, and metadata are widely used in SATCS in order to extend the frameworks to address the Packet Utilization Standard (PUS) for ground-balloon communication, in compliance with the services provided by the data handling computer on board the protoMIRAX balloon.
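
    As an illustration of the PUS-style telemetry handled by such a gateway, the sketch below decodes a CCSDS space-packet primary header plus the PUS service type and subtype bytes; real protoMIRAX packets may carry additional secondary-header fields (for example, time), so the layout and the example packet are assumptions.

      # Decode a CCSDS space-packet primary header and PUS type/subtype bytes.
      import struct

      def parse_pus_tm(packet: bytes):
          """Return the main CCSDS primary-header fields plus PUS type/subtype."""
          word0, word1, length = struct.unpack(">HHH", packet[:6])
          header = {
              "version": word0 >> 13,
              "type": (word0 >> 12) & 0x1,              # 0 = telemetry
              "sec_hdr_flag": (word0 >> 11) & 0x1,
              "apid": word0 & 0x7FF,
              "seq_flags": word1 >> 14,
              "seq_count": word1 & 0x3FFF,
              "data_length": length + 1,                # field stores length - 1
          }
          if header["sec_hdr_flag"]:
              # PUS data-field header: version byte, service type, service subtype.
              header["service_type"] = packet[7]
              header["service_subtype"] = packet[8]
          return header

      # Example: a housekeeping report (service 3, subtype 25) with no user data.
      example = bytes([0x08, 0x64, 0xC0, 0x01, 0x00, 0x03, 0x10, 0x03, 0x19, 0x00])
      print(parse_pus_tm(example))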

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germain, Shawn

    Nuclear Power Plant (NPP) refueling outages create some of the most challenging activities the utilities face in both tracking and coordinating thousands of activities in a short period of time. Other challenges, including nuclear safety concerns arising from atypical system configurations and resource allocation issues, can create delays and schedule overruns, driving up outage costs. Today the majority of the outage communication is done using processes that do not take advantage of advances in modern technologies that enable enhanced communication, collaboration and information sharing. Some of the common practices include: runners that deliver paper-based requests for approval, radios, telephones, desktop computers, daily schedule printouts, and static whiteboards that are used to display information. Many gains have been made to reduce the challenges facing outage coordinators; however, new opportunities can be realized by utilizing modern technological advancements in communication and information tools that can enhance the collective situational awareness of plant personnel, leading to improved decision-making. Ongoing research as part of the Light Water Reactor Sustainability Program (LWRS) has been targeting NPP outage improvement. As part of this research, various applications of collaborative software have been demonstrated through pilot project utility partnerships. Collaboration software can be utilized as part of the larger concept of Computer-Supported Cooperative Work (CSCW). Collaborative software can be used for emergent issue resolution, Outage Control Center (OCC) displays, and schedule monitoring. Use of collaboration software enables outage staff and subject matter experts (SMEs) to view and update critical outage information from any location on site or off.

  20. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java, and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.
