Science.gov

Sample records for computer operating systems

  1. Operating systems. [of computers]

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which manage 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
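
    The semaphore mechanism mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the paper: a binary semaphore serializes a read-modify-write on a shared counter across concurrent threads.

```python
import threading

counter = 0
sem = threading.Semaphore(1)  # binary semaphore guarding the counter

def worker(n_increments):
    global counter
    for _ in range(n_increments):
        sem.acquire()        # P operation: wait/lock
        counter += 1         # critical section
        sem.release()        # V operation: signal/unlock

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: no increments lost
```

    Without the semaphore, the unsynchronized `counter += 1` read-modify-write could lose updates; the P/V pairing guarantees mutual exclusion.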

  2. Mission operations computing systems evolution

    NASA Technical Reports Server (NTRS)

    Kurzhals, P. R.

    1981-01-01

    As part of its preparation for the operational Shuttle era, the Goddard Space Flight Center (GSFC) is currently replacing most of the mission operations computing complexes that have supported near-earth space missions since the late 1960's. Major associated systems include the Metric Data Facility (MDF) which preprocesses, stores, and forwards all near-earth satellite tracking data; the Orbit Computation System (OCS) which determines related production orbit and attitude information; the Flight Dynamics System (FDS) which formulates spacecraft attitude and orbit maneuvers; and the Command Management System (CMS) which handles mission planning, scheduling, and command generation and integration. Management issues and experiences for the resultant replacement process are driven by a wide range of possible future mission requirements, flight-critical system aspects, complex internal system interfaces, extensive existing applications software, and phasing to optimize systems evolution.

  3. Automated validation of a computer operating system

    NASA Technical Reports Server (NTRS)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  4. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events occurring) differing from the original execution, is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.
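
    The process-pair idea can be sketched as a toy primary/backup pair. Everything here (the class name, the checkpoint scheme) is an invented illustration of the concept, not Tandem's actual message-based implementation:

```python
# A primary performs the work and checkpoints its progress to a backup;
# if the primary fails (here, raises), the backup resumes from the last
# checkpoint rather than from the beginning.

class ProcessPair:
    def __init__(self):
        self.checkpoint = 0   # last state the backup has seen

    def run_primary(self, work, fail_at=None):
        done = self.checkpoint
        for item in work[done:]:
            if fail_at is not None and done == fail_at:
                raise RuntimeError("primary failed")
            done += 1
            self.checkpoint = done  # checkpoint to backup after each step

    def run(self, work, fail_at=None):
        try:
            self.run_primary(work, fail_at)
        except RuntimeError:
            # Backup takes over from the last checkpoint. Because it replays
            # from its own state rather than the primary's exact execution
            # history, a software fault tied to that history is often not
            # re-triggered -- the effect the measurements above quantify.
            self.run_primary(work, fail_at=None)
        return self.checkpoint

pair = ProcessPair()
print(pair.run(list(range(10)), fail_at=5))  # 10: all work completed
```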

  5. Personal Computer and Workstation Operating Systems Tutorial

    DTIC Science & Technology

    1994-03-01

    Surveys personal computer and workstation operating systems, comparing their: history of development, process management, file system, input and output system, user interface, network capabilities, and advantages and… Submitted by Charles E. Frame Jr. in partial fulfillment of the requirements for the degree of Master of Science in Information Technology Management, Naval Postgraduate School, March 1994.

  6. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both the autonomy of and the cooperation between nodes, are developed. The requirements for time-critical performance and for reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time-critical messages. The architecture also supports immediate recovery for the time-critical message system after a communication failure.

  7. Computer Systems Operator, A Suggested Instructor's Guide.

    ERIC Educational Resources Information Center

    Institute of Computer Technology, Washington, DC.

    School administrators, teachers, and businessmen will find this work-study guide useful in developing courses to teach the disadvantaged to be operators of automatic data processing equipment. Fourteen course units cover fundamental principles of programming, specific programming languages such as FORTRAN and COBOL, and the skills required for the position.…

  8. Demonstrating Operating System Principles via Computer Forensics Exercises

    ERIC Educational Resources Information Center

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram

    2010-01-01

    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  9. The VICKSI computer control system, concept and operating experience

    NASA Astrophysics Data System (ADS)

    Busse, W.; Kluge, H.; Ziegler, K.

    1981-05-01

    A description of the VICKSI computer-control system is given. It uses CAMAC modules as the unique interface between accelerator devices and the computer. Through a high degree of standardisation, only seven different types of CAMAC modules are needed to control the accelerator facility. The idea of having one module control one accelerator device minimizes the cabling and also the software requirements. The operation of the control system has proved to be very reliable, causing less than 2% downtime of the facility.

  10. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
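
    The load generators described can be sketched, under heavy assumptions, as a concurrent submission loop that tallies outcomes; `submit_job` below is a placeholder stub, not a real Grid workload-management API:

```python
from concurrent.futures import ThreadPoolExecutor

def submit_job(job_id):
    # stand-in for a real workload-management submission call
    return (job_id, "ok")

def run_load_test(n_jobs, concurrency):
    """Submit n_jobs concurrently and tally results, the way a load
    generator stresses a submission system."""
    results = {"ok": 0, "failed": 0}
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _, status in pool.map(submit_job, range(n_jobs)):
            results[status if status in results else "failed"] += 1
    return results

print(run_load_test(100, concurrency=8))  # {'ok': 100, 'failed': 0}
```

    A real commissioning campaign would replace the stub with actual submissions and track site-by-site success rates over time, as the contribution describes.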

  11. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  12. Computer systems for controlling blast furnace operations at Rautaruukki

    SciTech Connect

    Inkala, P.; Karppinen, A.; Seppanen, M.

    1995-08-01

    Energy accounts for a significant portion of the total blast furnace production costs and, to minimize energy consumption, both technical and economical aspects have to be considered. Thus, considerable attention has been paid to blast furnace energy consumption and productivity. The most recent furnace relines were in 1985 and 1986. At that time, the furnaces were modernized and instrumentation was increased. After the relines, operation control and monitoring of the process is done by basic automation systems (DCSs and PLCs) and a supervision system (process computer). The supervision system is the core of the control system, combining reports, special displays, trends and mathematical models describing in-furnace phenomena. Low energy consumption together with high productivity and stable blast furnace operation have been achieved due to an improvement in raw materials quality and implementation of automation and computer systems to control blast furnace operation. Currently, the fuel rate is low and productivity is in excess of 3.0 tonnes/cu meter/day, which is one of the highest values achieved anywhere for long-term operation.

  13. Computer-aided radio dispatch system streamlines operations

    SciTech Connect

    Meck, G.L.

    1985-10-01

    This paper describes a computer-aided radio dispatch system (CARDS) used by The East Ohio Gas Company to help improve customer satisfaction and the already high level of performance in customer service operations. East Ohio decided to develop its own system after establishing certain criteria. The heart of the CARDS unit is the DEC LSI-11 microcomputer, which exchanges data with the dispatch centers at 1,200 baud. The large number of job functions that the system helps fulfill are discussed in this paper.

  14. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  15. An Operational Computational Terminal Area PBL Prediction System

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1998-01-01

    There are two fundamental goals of this research project which are listed here in terms of priority, i.e., a primary and secondary goal. The first and primary goal is to develop a prognostic system which could satisfy the operational weather prediction requirements of the meteorological subsystem within the Aircraft Vortex Spacing System (AVOSS), i.e., an operational computational Terminal Area PBL Prediction System (TAPPS). The second goal is to perform in-depth diagnostic analyses of the meteorological conditions during the special wake vortex deployments at Memphis and Dallas during August 95 and September 97, respectively. These two goals are interdependent because a thorough understanding of the atmospheric dynamical processes which produced the unique meteorology during the Memphis and Dallas deployments will help us design a prognostic system for the planetary boundary layer (PBL) which could be utilized to support the meteorological subsystem within AVOSS. Concerning the primary goal, TAPPS Stage 2 was tested on the Memphis data and is about to be tested on the Dallas case studies. Furthermore, benchmark tests have been undertaken to select the appropriate platform to run TAPPS in real time in support of the DFW AVOSS system. In addition, a technique to improve the initial data over the region surrounding Dallas was also tested and modified for potential operational use in TAPPS. The secondary goal involved several sensitivity simulations and comparisons to Memphis observational data sets in an effort to diagnose what specific atmospheric phenomena were occurring which may have impacted the dynamics of atmospheric wake vortices.

  16. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.
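
    One simple way to picture the defective-node location idea (our illustration, not the patented method) is an echo test over each node's direct channels: a node that answers no echo while at least one healthy neighbor can reach it is flagged defective.

```python
def locate_defective(adjacency, responds):
    """adjacency: node -> set of directly connected nodes;
    responds: set of nodes that answer echo tests."""
    defective = set()
    for node, neighbors in adjacency.items():
        if node not in responds and any(n in responds for n in neighbors):
            # a healthy neighbor tried this node's channel and got no echo
            defective.add(node)
    return defective

# four nodes in a ring, each with two direct channels
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(locate_defective(ring, responds={0, 1, 3}))  # {2}
```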

  17. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  18. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  19. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  20. Operating Systems.

    ERIC Educational Resources Information Center

    Denning, Peter J.; Brown, Robert L.

    1984-01-01

    A computer operating system spans multiple layers of complexity, from commands entered at a keyboard to the details of electronic switching. In addition, the system is organized as a hierarchy of abstractions. Various parts of such a system and system dynamics (using the Unix operating system as an example) are described. (JN)

  1. Object migration and authentication. [in computer operating systems design]

    NASA Technical Reports Server (NTRS)

    Gligor, V. D.; Lindsay, B. G.

    1979-01-01

    The paper presents a mechanism permitting a type manager to fabricate a migrated object representation which can be entrusted to other subsystems or transmitted outside of the control of a local computer system. The migrated object representation is signed by the type manager in such a way that the type manager's signature cannot be forged and the manager is able to authenticate its own signature. Subsequently, the type manager can retrieve the migrated representation and validate its contents before reconstructing the object in its original representation. This facility allows type managers to authenticate the contents of off-line or network storage and solves problems stemming from the hierarchical structure of the system itself.
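
    The signing scheme can be sketched with a modern keyed MAC standing in for the type manager's unforgeable signature; the class and field names are illustrative, and the paper predates this particular primitive:

```python
import hmac, hashlib, json

class TypeManager:
    def __init__(self, secret):
        self._key = secret  # known only to this type manager

    def migrate(self, obj):
        """Fabricate a migrated representation that can be entrusted to
        other subsystems or sent outside the local system."""
        payload = json.dumps(obj, sort_keys=True).encode()
        tag = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        return payload, tag

    def reconstruct(self, payload, tag):
        """Validate the manager's own signature before rebuilding the
        object in its original representation."""
        expect = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("signature check failed: contents altered")
        return json.loads(payload)

tm = TypeManager(b"type-manager-secret")
payload, tag = tm.migrate({"kind": "file", "blocks": [1, 2, 3]})
print(tm.reconstruct(payload, tag))  # round-trips intact
# any tampering with payload makes reconstruct() raise ValueError
```

    Because only the type manager holds the key, other subsystems can store or forward the representation but cannot forge or undetectably alter it, which is the property the paper exploits for off-line and network storage.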

  2. Intelligent operating systems for autonomous robots: Real-time capabilities on a hypercube super-computer

    SciTech Connect

    Einstein, J.R.; Barhen, J.; Jefferson, D.

    1986-01-01

    Autonomous robots which must perform time-critical tasks in hostile environments require computers which can perform many asynchronous tasks at extremely high speeds. Certain hypercube multiprocessors have many of the required attributes, but their operating systems must be provided with special functions to improve the capability of the system to respond rapidly to unpredictable events. A 'virtual-time' shell, under design for addition to the Vertex operating system of the NCUBE hypercube computer, and having such capabilities, is described.

  3. CP/M: A Family of 8- and 16-Bit Computer Operating Systems.

    ERIC Educational Resources Information Center

    Kildall, Gary

    1982-01-01

    Traces the development of the computer CP/M (Control Program for Microcomputers) and MP/M (Multiprogramming Monitor Microcomputers) operating system by Gary Kildall of Digital Research Company. Discusses the adaptation of these operating systems to the newly emerging 16- and 32-bit microprocessors. (Author/LC)

  4. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily

    2016-02-01

    The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the algorithms developed by numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of numerical schemes for direct simulation; b) realization of representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, direct and inverse transformation in these systems. Particular attention is paid to the use of hardware and software of modern computer systems.

  5. Computer algebra and operators

    NASA Technical Reports Server (NTRS)

    Fateman, Richard; Grossman, Robert

    1989-01-01

    The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.
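
    The first capability, algebraic manipulation in an algebra of noncommuting operators, can be illustrated with a toy polynomial representation (our own, not the paper's): a term is a word of operator names, and multiplication concatenates words, so AB and BA remain distinct.

```python
from itertools import product

def multiply(p, q):
    """Multiply two operator polynomials, each a dict word -> coefficient,
    where a word is a tuple of operator names in application order."""
    out = {}
    for (w1, c1), (w2, c2) in product(p.items(), q.items()):
        w = w1 + w2  # concatenation preserves operator order
        out[w] = out.get(w, 0) + c1 * c2
    return out

A = {("A",): 1}
B = {("B",): 1}
s = dict(A)
for w, c in B.items():
    s[w] = s.get(w, 0) + c          # s = A + B

sq = multiply(s, s)                  # (A + B)^2: four distinct words
print(sq)
```

    For noncommuting A and B, the expansion keeps AB and BA as separate terms, unlike the commutative identity (A + B)^2 = A^2 + 2AB + B^2; a normal form would then sort or merge words using known commutation relations.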

  6. Computer controlled operation of a two-engine xenon ion propulsion system

    NASA Technical Reports Server (NTRS)

    Brophy, John R.

    1987-01-01

    The development and testing of a computer control system for a two-engine xenon ion propulsion module is described. The computer system controls all aspects of the propulsion module operation including: start-up, steady-state operation, throttling and shutdown of the engines; start-up, operation and shutdown of the central neutralizer subsystem; control of the gimbal system for each engine; and operation of the valves in the propellant storage and distribution system. The most important engine control algorithms are described in detail. These control algorithms provide flexibility in the operation and throttling of ion engines which has never before been possible. This flexibility is made possible in large part through the use of flow controllers which maintain the total flow rate of propellant into the engine at the proper level. Data demonstrating the throttle capabilities of the engine and control system are presented.
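
    The flow-controller role can be pictured with a toy proportional loop; the gain, units, and one-step plant response are invented for illustration, and the actual control algorithms are those detailed in the paper:

```python
def run_flow_loop(setpoint, steps=50, gain=0.5):
    """Drive measured propellant flow toward the setpoint by repeatedly
    correcting a valve command in proportion to the flow error."""
    valve = 0.0   # valve command
    flow = 0.0    # measured flow (toy plant: flow tracks the valve directly)
    for _ in range(steps):
        error = setpoint - flow
        valve += gain * error      # proportional correction
        flow = valve               # toy plant response
    return flow

print(round(run_flow_loop(setpoint=2.5), 6))  # 2.5: converged to setpoint
```

    Holding total flow at the commanded level regardless of engine operating point is what gives the throttling flexibility described above.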

  7. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mauldin, J.

    1984-01-01

    The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real time data acquisition, analysis and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL and results of the simulation model for various system configurations were obtained. A tutorial of the model is presented and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switch over from contention to priority mode under high channel loading.

  8. System Analysis for the Huntsville Operation Support Center, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Massey, D.

    1985-01-01

    HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, and possible overload conditions are predicted.
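
    The average-message-delay statistic can be illustrated with a toy single-channel FIFO simulation; the arrival and service parameters are invented and far simpler than the HYPERchannel model:

```python
import random

def average_delay(n_msgs, mean_gap, service, seed=1):
    """Messages arrive at exponentially distributed intervals and are
    transmitted first-in first-out over one shared channel; delay is the
    time from arrival to end of transmission."""
    rng = random.Random(seed)
    t = 0.0             # simulation clock (arrival times)
    channel_free = 0.0  # time at which the channel next becomes idle
    total_delay = 0.0
    for _ in range(n_msgs):
        t += rng.expovariate(1.0 / mean_gap)   # next arrival
        start = max(t, channel_free)           # wait if channel is busy
        channel_free = start + service         # fixed transmission time
        total_delay += channel_free - t
    return total_delay / n_msgs

# light load (10% utilization): delay stays close to the bare service time
print(average_delay(10000, mean_gap=10.0, service=1.0) < 1.2)  # True
```

    Raising the load (smaller mean_gap) makes queueing delay dominate, which is how a simulation like the one above exposes overload conditions.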

  9. Building a computer-aided design capability using a standard time share operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1975-01-01

    The paper describes how an integrated system of engineering computer programs can be built using a standard commercially available operating system. The discussion opens with an outline of the auxiliary functions that an operating system can perform for a team of engineers involved in a large and complex task. An example of a specific integrated system is provided to explain how the standard operating system features can be used to organize the programs into a simple and inexpensive but effective system. Applications to an aircraft structural design study are discussed to illustrate the use of an integrated system as a flexible and efficient engineering tool. The discussion concludes with an engineer's assessment of an operating system's capabilities and desirable improvements.

  10. The Relationship between Chief Information Officer Transformational Leadership and Computing Platform Operating Systems

    ERIC Educational Resources Information Center

    Anderson, George W.

    2010-01-01

    The purpose of this study was to relate the strength of Chief Information Officer (CIO) transformational leadership behaviors to 1 of 5 computing platform operating systems (OSs) that may be selected for a firm's Enterprise Resource Planning (ERP) business system. Research shows executive leader behaviors may promote innovation through the use of…

  11. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water, energy, space, and cost efficient system for growing plants in constrained spaces or land exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plants growth in order to assure optimal nutritional uptake and tomato production.
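
    The calibrate-then-measure cycle rests on the Nernst relation E = E0 + S * log10(activity): two standards fix the slope S and intercept E0, after which a measured potential maps back to a concentration. The numbers below are invented for illustration, not data from the article:

```python
import math

def calibrate(std1, std2):
    """Each standard is (concentration, measured potential in mV);
    returns the electrode's slope (mV/decade) and intercept."""
    (c1, e1), (c2, e2) = std1, std2
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))
    e0 = e1 - slope * math.log10(c1)
    return slope, e0

def to_concentration(potential, slope, e0):
    """Invert the Nernst relation for a measured potential."""
    return 10 ** ((potential - e0) / slope)

slope, e0 = calibrate((1e-3, 50.0), (1e-2, 109.2))  # ~59.2 mV/decade slope
c = to_concentration(79.6, slope, e0)
print(round(math.log10(c), 2))  # -2.5: halfway between the two standards
```

    In the platform described above, the liquid-handling hardware automates exactly this sequence: flush, present the standards, fit the calibration, then measure the nutrient solution.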

  12. Experiences with computer systems in blast furnace operation control at Rautaruukki

    SciTech Connect

    Inkala, P.; Karppinen, A. (Raahe Steel Works); Seppanen, M.

    1994-09-01

    Low energy consumption, together with high productivity and stable blast furnace operation, has been achieved at Rautaruukki's Raahe Steel Works as a result of the efficient use of computer technology in process control and improvements in raw materials quality. The blast furnace supervision system is designed to support the decision-making in medium and long-term process control. The information presenting the blast furnace operation phenomena is grouped so that little time is needed to obtain the current state of the process. Due to the complexity of the blast furnace process, an expert system to guide and diagnose the short and medium-term blast furnace operation has been developed.

  13. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that effective implementation of fault-tolerant software design techniques will impact system requirements, and they suggest that retrofitting fault-tolerant software onto existing designs will be inefficient and may require system modification.

  14. Man/terminal interaction evaluation of computer operating system command and control service concepts. [in Spacelab

    NASA Technical Reports Server (NTRS)

    Dodson, D. W.; Shields, N. L., Jr.

    1978-01-01

    The Experiment Computer Operating System (ECOS) of the Spacelab will allow the onboard Payload Specialist to command experiment devices and display information relative to the performance of experiments. Three candidate ECOS command and control service concepts were reviewed and laboratory data on operator performance was taken for each concept. The command and control service concepts evaluated included a dedicated operator's menu display from which all command inputs were issued, a dedicated command key concept with which command inputs could be issued from any display, and a multi-display concept in which command inputs were issued from several dedicated function displays. Advantages and disadvantages are discussed in terms of training, operational errors, task performance time, and subjective comments of system operators.

  15. A COMPUTER-ASSIST MATERIAL TRACKING SYSTEM AS A CRITICALITY SAFETY AID TO OPERATORS

    SciTech Connect

    Claybourn, R V; Huang, S T

    2007-03-30

In today's compliance-driven environment, fissionable material handlers are inundated with work control rules and procedures in carrying out nuclear operations. Historically, human error has been one of the key contributors to various criticality accidents. Since moving and handling fissionable materials are key components of their job functions, any means provided to assist operators in facilitating fissionable material moves will help improve operational efficiency and enhance criticality safety implementation. From the criticality safety perspective, operational issues have been encountered in Lawrence Livermore National Laboratory (LLNL) plutonium operations. Those issues included a lack of adequate historical record keeping for the fissionable material stored in containers, a need for a better way of accommodating operations in a research and development setting, and better means of helping material handlers carry out various criticality safety controls. Through the years, effective measures were implemented, including a better work control process, standardized criticality control conditions (SCCC), and relocation of criticality safety engineers to the plutonium facility. Another important measure taken was to develop a computer data acquisition system for criticality safety assessment, which is the subject of this paper. The purpose of the Criticality Special Support System (CSSS) is to integrate many of the proven operational support protocols into a software system to assist operators in assessing compliance with procedures during the handling and movement of fissionable materials. Many nuclear facilities utilize mass cards or a computer program to track fissionable material mass data in operations. Additional item-specific data, such as the presence of moderators or close-fitting reflectors, could help fissionable material handlers assess compliance with SCCCs. Computer-assisted checking of a workstation material inventory against the

  16. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Operations concept report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  17. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  18. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353

  19. Study of the operation and maintenance of computer systems to meet the requirements of 10 CFR 73. 55

    SciTech Connect

    Lewis, J.R.; Byers, K.R.; Fluckiger, J.D.; McBride, K.C.

    1986-01-01

    The Pacific Northwest Laboratory has studied the operation and maintenance of computer-managed systems that can help nuclear power plant licensees to meet the physical security requirements of 10 CFR 73.55 (for access control, alarm monitoring, and alarm recording). This report of that study describes a computer system quality assurance program that is based on a system of related internal controls. A discussion of computer system evaluation includes verification and validation mechanisms for assuring that requirements are stated and that the product fulfills these requirements. Finally, the report describes operator and security awareness training and a computer system preventive maintenance program. 24 refs.

  20. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    DOEpatents

    Tomkins, James L.; Camp, William J.

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  1. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, E. M.

    1983-01-01

A simulation model was developed and programmed in three languages: BASIC, PASCAL, and SLAM. Two of the programs are included in this report, the BASIC and PASCAL versions; SLAM is not supported by NASA/MSFC facilities and hence was not included. Statistical comparisons of simulations of the same HOSC system configurations agree well with each other and with the operational statistics obtained from HOSC. Three variations of the most recent HOSC configuration were run, and some conclusions were drawn as to system performance under these variations.

  2. The Development of a Computer-Directed Training Subsystem and Computer Operator Training Material for the Air Force Phase II Base Level System. Final Report.

    ERIC Educational Resources Information Center

    System Development Corp., Santa Monica, CA.

    The design, development, and evaluation of an integrated Computer-Directed Training Subsystem (CDTS) for the Air Force Phase II Base Level System is described in this report. The development and evaluation of a course to train computer operators of the Air Force Phase II Base Level System under CDTS control is also described. Detailed test results…

  3. Common data buffer system. [communication with computational equipment utilized in spacecraft operations

    NASA Technical Reports Server (NTRS)

    Byrne, F. (Inventor)

    1981-01-01

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
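The abstract does not specify how the validity checks on each transfer are implemented; a minimal sketch, assuming a simple additive checksum and notification of the sending computer on a mismatch (function and computer names are invented for illustration):

```python
def checksum(words):
    """Illustrative 16-bit additive checksum over a transfer's data words
    (the patent abstract does not name the actual validity-check algorithm)."""
    return sum(words) & 0xFFFF

def validate(data, ck, src):
    """Validity check on read-out from the common buffer; on a mismatch,
    appropriate error notification goes to the source computer."""
    if checksum(data) != ck:
        return {"ok": False, "notify": src}
    return {"ok": True}

msg = [1, 2, 3]
ck = checksum(msg)
print(validate(msg, ck, "cpuA"))        # transfer accepted
print(validate([1, 2, 99], ck, "cpuA")) # corrupted word -> cpuA is notified
```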

  4. Using Computer Technology in the Automation of Clinical and Operating Systems in Emergency Medicine

    PubMed Central

    Guarisco, Joseph S.

    2001-01-01

    The practical application of Emergency Medicine throughout the country has historically been viewed by healthcare workers and patients as one of inefficiency and chaos. Believing that the practice of Emergency Medicine was, to the contrary, predictable, we at Ochsner felt that tremendous improvements in efficiency could be won if the vast amount of data generated in our experience of nearly 40,000 Emergency Department visits per year could be harvested. Such improvements would require the employment of computer technology and powerful database management systems. By applying these tools to profile the practice of Emergency Medicine in our institution, we were able to harvest important clinical and operational information that was ultimately used to improve department efficiency and productivity. The ability to analyze data and manage processes within the Emergency Department allowed us to target resources much more efficiently, significantly reducing nonproductive work. The collected data were sorted and filtered by a host of variables creating the ability to profile subsets of our practice—most importantly, physician practice habits and performance. Furthermore, the development of “patient tracking” software allowed us to update, view, and trend data in real-time and tweak clinical and operational processes simultaneously. The data-driven, analytical approach to the management of the Emergency Department has yielded significant improvements in service to our patients and lower operational costs. PMID:21765721

  5. Implementation of a Real-Time, Distributed Operating System for a Multiple Computer System.

    DTIC Science & Technology

    1982-06-01

The system is based on the iSBC 86/12A single board computer (SBC), built around the Intel 8086 16-bit microprocessor. Detailed descriptions of all the components of the SBC are given, including the segmentation registers and physical address generation. The extra segment register (ES) is typically used for external or shared data, and data storage.

  6. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    ERIC Educational Resources Information Center

    He, Aiguo

    2011-01-01

Computer systems are useful for improving real-time and interactive distance education activities, especially when a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions over the virtual classrooms. The problem is that within…

  7. YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning

    ERIC Educational Resources Information Center

    Mustafa, Besim

    2013-01-01

    A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…

  8. Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.

    2000-01-01

A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantify the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
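The response-surface idea the abstract describes can be illustrated at a much smaller scale: fit a full quadratic model by ordinary least squares over a two-factor, three-level design. This is a sketch only; the study itself used six variables, 36 D-optimal runs, and CFD-derived Isp efficiencies, none of which appear here.

```python
import itertools

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def quad_terms(x1, x2):
    """Linear, curvilinear (squared), and bilinear (interaction) terms."""
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def fit_response_surface(points, ys):
    """Least-squares fit of the full quadratic model via X'X b = X'y."""
    X = [quad_terms(*p) for p in points]
    m = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(m)] for i in range(m)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(m)]
    return solve(XtX, Xty)

# Hypothetical 3^2 design in coded units and a known quadratic response;
# the fit should recover the generating coefficients.
design = list(itertools.product([-1.0, 0.0, 1.0], repeat=2))
true = [2.0, 0.5, -1.0, 0.3, 0.7, 0.25]   # b0, b1, b2, b11, b22, b12
ys = [sum(c * t for c, t in zip(true, quad_terms(x1, x2))) for x1, x2 in design]
coef = fit_response_surface(design, ys)
print([round(c, 6) for c in coef])  # recovers the generating coefficients
```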

  9. Barcodes in a Medical Office Computer System: Experience with Eight Million Data Entry Operations

    PubMed Central

    Willard, Oliver T.

    1985-01-01

    A medical office management software package has been developed which utilizes barcodes to enhance data entry. The system has been in use in our practice since 1982. Currently, there are over twenty-five installations of this system with a combined experience of some eight million data entry operations using barcodes. The barcode system design and our experience with it is described.

  10. A Computer-Based, Student-Operated Advising System for Education Majors.

    ERIC Educational Resources Information Center

    Milheim, William D.; And Others

    1989-01-01

    Kent State University's College of Education implemented a computer-controlled advising system for undergraduate education students, providing information including program descriptions, deadlines and applications, student teaching, and other topics. Preliminary evaluation shows the system able to answer most-asked questions. (MSE)

  11. Digital computer operation of a nuclear reactor

    DOEpatents

    Colley, R.W.

    1982-06-29

A method is described for the safe operation of a complex system such as a nuclear reactor using a digital computer. The computer is supplied with a data base containing a list of the safe states of the reactor and a list of operating instructions for achieving a safe state. When the actual state of the reactor does not correspond to a listed safe state, the computer selects operating instructions to return the reactor to a safe state.
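The patent's loop of comparing the actual state against a safe-state list and selecting corrective instructions might be sketched as a pair of table lookups. The state names and instruction strings below are invented for illustration; the patent does not enumerate them.

```python
# Hypothetical safe-state list and instruction table (not from the patent).
SAFE_STATES = {
    ("rods_inserted", "coolant_normal"),
    ("rods_partial", "coolant_normal"),
}

INSTRUCTIONS = {
    "coolant_high_temp": ["increase coolant flow", "reduce power"],
    "rods_withdrawn":    ["insert control rods"],
}

def check_state(rods, coolant):
    """Return [] if the state is listed as safe; otherwise select the
    operating instructions that return the reactor to a safe state."""
    if (rods, coolant) in SAFE_STATES:
        return []
    steps = []
    if coolant != "coolant_normal":
        steps += INSTRUCTIONS["coolant_high_temp"]
    if rods == "rods_withdrawn":
        steps += INSTRUCTIONS["rods_withdrawn"]
    return steps

print(check_state("rods_inserted", "coolant_normal"))      # already safe: []
print(check_state("rods_withdrawn", "coolant_high_temp"))  # corrective steps
```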

  12. Digital computer operation of a nuclear reactor

    DOEpatents

    Colley, Robert W.

    1984-01-01

A method is described for the safe operation of a complex system such as a nuclear reactor using a digital computer. The computer is supplied with a data base containing a list of the safe states of the reactor and a list of operating instructions for achieving a safe state. When the actual state of the reactor does not correspond to a listed safe state, the computer selects operating instructions to return the reactor to a safe state.

  13. Whenever You Use a Computer You Are Using a Program Called an Operating System.

    ERIC Educational Resources Information Center

    Cook, Rick

    1984-01-01

    Examines design, features, and shortcomings of eight disk-based operating systems designed for general use that are popular or most likely to affect the future of microcomputing. Included are the CP/M family, MS-DOS, Apple DOS/ProDOS, Unix, Pick, the p-System, TRSDOS, and Macintosh/Lisa. (MBR)

  14. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  15. Towards an integral computer environment supporting system operations analysis and conceptual design

    NASA Technical Reports Server (NTRS)

    Barro, E.; Delbufalo, A.; Rossi, F.

    1994-01-01

VITROCISET has developed in house a prototype tool named the System Dynamic Analysis Environment (SDAE) to support system engineering activities in the initial definition phase of a complex space system. The goal of SDAE is to provide powerful means for the definition, analysis, and trade-off of operations and design concepts for the space and ground elements involved in a mission. For this purpose SDAE implements a dedicated modeling methodology based on the integration of different modern (static and dynamic) analysis and simulation techniques. The resulting 'system model' is capable of representing all the operational, functional, and behavioral aspects of the system elements which are part of a mission. The execution of customized model simulations enables: the validation of selected concepts with respect to mission requirements; the in-depth investigation of mission-specific operational and/or architectural aspects; and the early assessment of the performance required of the system elements to cope with mission constraints and objectives. These characteristics make SDAE particularly well suited to nonconventional or highly complex systems, which require a great analysis effort in their early definition stages. SDAE runs under PC-Windows and is currently used by the VITROCISET system engineering group. This paper describes the main features of SDAE and shows some examples of tool output.

  16. Evaluation Results Report for Next Generation Computer Resources Operating Systems Interface Baseline Selection by Next Generation Computer Resources (NGCR) Operating Systems Standards Working Group (SSWG)

    DTIC Science & Technology

    1990-05-07


  17. Using the transportable, computer-operated, liquid-scintillator fast-neutron spectrometer system

    SciTech Connect

    Thorngate, J.H.

    1988-11-01

    When a detailed energy spectrum is needed for radiation-protection measurements from approximately 1 MeV up to several tens of MeV, organic-liquid scintillators make good neutron spectrometers. However, such a spectrometer requires a sophisticated electronics system and a computer to reduce the spectrum from the recorded data. Recently, we added a Nuclear Instrument Module (NIM) multichannel analyzer and a lap-top computer to the NIM electronics we have used for several years. The result is a transportable fast-neutron spectrometer system. The computer was programmed to guide the user through setting up the system, calibrating the spectrometer, measuring the spectrum, and reducing the data. Measurements can be made over three energy ranges, 0.6--2 MeV, 1.1--8 MeV, or 1.6--16 MeV, with the spectrum presented in 0.1-MeV increments. Results can be stored on a disk, presented in a table, and shown in graphical form. 5 refs., 51 figs.

  18. Operator Station Design System - A computer aided design approach to work station layout

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.

    1979-01-01

    The Operator Station Design System is resident in NASA's Johnson Space Center Spacecraft Design Division Performance Laboratory. It includes stand-alone minicomputer hardware and Panel Layout Automated Interactive Design and Crew Station Assessment of Reach software. The data base consists of the Shuttle Transportation System Orbiter Crew Compartment (in part), the Orbiter payload bay and remote manipulator (in part), and various anthropometric populations. The system is utilized to provide panel layouts, assess reach and vision, determine interference and fit problems early in the design phase, study design applications as a function of anthropometric and mission requirements, and to accomplish conceptual design to support advanced study efforts.

  19. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1979-01-01

A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
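The basic queueing setup can be made concrete with the classical M/M/1 mean-response formula, plus a rough reduced-capacity approximation for the periodic time-critical load. The second function is our simplification for illustration, not the paper's Laplace-transform result.

```python
def mm1_mean_response(lam, mu):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

def degraded_mean_response(lam, mu, duty):
    """Crude approximation: if a periodic, deterministic time-critical task
    consumes a fraction `duty` of the CPU, background jobs see effective
    capacity mu * (1 - duty). (Illustrative only, not the paper's model.)"""
    return mm1_mean_response(lam, mu * (1.0 - duty))

# 2 background jobs/s arriving, 5 jobs/s service capacity:
print(mm1_mean_response(2.0, 5.0))            # 1/3 s in system
# the time-critical process steals 40% of the CPU:
print(degraded_mean_response(2.0, 5.0, 0.4))  # 1.0 s in system
```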

  20. A Prototype System for a Computer-Based Statewide Film Library Network: A Model for Operation. Statewide Film Library Network: System-1 Specifications - Files.

    ERIC Educational Resources Information Center

    Sullivan, Todd

    Using an IBM System/360 Model 50 computer, the New York Statewide Film Library Network schedules film use, reports on materials handling and statistics, and provides for interlibrary loan of films. Communications between the film libraries and the computer are maintained by Teletype model 33 ASR Teletypewriter terminals operating on TWX…

  1. Computer controlled antenna system

    NASA Technical Reports Server (NTRS)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  2. Design of an air traffic computer simulation system to support investigation of civil tiltrotor aircraft operations

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1993-01-01

    The TATSS Project's goal was to develop a design for computer software that would support the attainment of the following objectives for the air traffic simulation model: (1) Full freedom of movement for each aircraft object in the simulation model. Each aircraft object may follow any designated flight plan or flight path necessary as required by the experiment under consideration. (2) Object position precision up to +/- 3 meters vertically and +/- 15 meters horizontally. (3) Aircraft maneuvering in three space with the object position precision identified above. (4) Air traffic control operations and procedures. (5) Radar, communication, navaid, and landing aid performance. (6) Weather. (7) Ground obstructions and terrain. (8) Detection and recording of separation violations. (9) Measures of performance including deviations from flight plans, air space violations, air traffic control messages per aircraft, and traditional temporal based measures.

  3. Operations management system

    NASA Technical Reports Server (NTRS)

    Brandli, A. E.; Eckelkamp, R. E.; Kelly, C. M.; Mccandless, W.; Rue, D. L.

    1990-01-01

    The objective of an operations management system is to provide an orderly and efficient method to operate and maintain aerospace vehicles. Concepts are described for an operations management system and the key technologies are highlighted which will be required if this capability is brought to fruition. Without this automation and decision aiding capability, the growing complexity of avionics will result in an unmanageable workload for the operator, ultimately threatening mission success or survivability of the aircraft or space system. The key technologies include expert system application to operational tasks such as replanning, equipment diagnostics and checkout, global system management, and advanced man machine interfaces. The economical development of operations management systems, which are largely software, will require advancements in other technological areas such as software engineering and computer hardware.

  4. Computational complexity issues in operative diagnosis of graph-based systems

    SciTech Connect

    Rao, N.S.V. )

    1993-04-01

Systems that can be modeled as graphs, such that nodes represent the components and edges represent the fault propagation between the components, are considered. Some components are equipped with alarms that ring in response to faulty conditions. In these systems, two types of problems are studied: (a) fault diagnosis, and (b) alarm placement. The fault diagnosis problems deal with computing the set of all potential failure sources that correspond to a set of ringing alarms A_R. First, single faults, where exactly one component can become faulty at any time, are considered. Systems are classified into zero-time and nonzero-time systems based on fault propagation times, and the latter are further classified based on knowledge of the propagation times. For each of these classes, algorithms are presented for single-fault diagnosis. The problem of detecting multiple faults is shown to be NP-complete. An alarm placement problem, which requires a single fault to be uniquely diagnosed, is examined; various versions of this problem are shown to be NP-complete. The single-fault diagnosis algorithms have been implemented and tested.
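In the zero-time case, single-fault diagnosis reduces to a reachability test: a node is a potential failure source exactly when the set of alarms it can reach equals the ringing set A_R. A minimal sketch under that reading (the graph and alarm placement are made up):

```python
from collections import deque

def reachable(graph, start):
    """BFS: all nodes a fault at `start` can propagate to (including itself)."""
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def single_fault_candidates(graph, alarms, ringing):
    """Zero-time model: a single faulty node explains the ringing alarms
    iff the alarmed nodes it reaches are exactly the ringing set."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    return {n for n in nodes if reachable(graph, n) & alarms == ringing}

# Hypothetical fault-propagation graph; alarms sit on b and d.
g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
alarms = {"b", "d"}
print(sorted(single_fault_candidates(g, alarms, {"b", "d"})))  # ['a', 'b']
print(sorted(single_fault_candidates(g, alarms, {"d"})))       # ['d']
```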

  5. Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance

    SciTech Connect

    Rosen, P.B.

    1994-06-01

    The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via a RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
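The two data-reduction steps the abstract describes, storing an analog point only when its value changed since the last scan and then averaging each point over a 10-minute window, can be sketched as follows (point names and samples are invented; this is not the RealFlex implementation):

```python
def record_changes(scans):
    """Keep only (t, point, value) samples whose value differs from the
    previously stored value for that point (change-of-value logging)."""
    last, kept = {}, []
    for t, point, value in scans:
        if last.get(point) != value:
            last[point] = value
            kept.append((t, point, value))
    return kept

def window_averages(samples, window_s=600):
    """Average each point's recorded values per window (600 s = 10 min)."""
    buckets = {}
    for t, point, value in samples:
        buckets.setdefault((t // window_s, point), []).append(value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

# Hypothetical 1 Hz scans of one analog point (time in seconds).
scans = [(0, "dc_volts", 410.0), (5, "dc_volts", 410.0),
         (10, "dc_volts", 412.0), (700, "dc_volts", 408.0)]
kept = record_changes(scans)
print(kept)                   # the unchanged t=5 sample is dropped
print(window_averages(kept))  # one average per 10-minute window
```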

  6. A web-based remote radiation treatment planning system using the remote desktop function of a computer operating system: a preliminary report.

    PubMed

    Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki

    2009-01-01

We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built into the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable.

  7. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  8. Computer Maintenance Operations Center (CMOC), showing duplexed Cyber 170-174 computers ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), showing duplexed Cyber 170-174 computers - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  9. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  10. Measurement-based analysis of error latency. [in computer operating system

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
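    The fault-injection and discovery reconstruction can be sketched as follows (a drastically simplified illustration assuming a single injected fault and a recorded memory-access trace; the trace values are invented, and the paper's actual method works from sampled physical-memory activity):

    ```python
    def error_latency(access_trace, fault_addr, fault_time):
        """Return discovery latency for a fault injected at fault_time
        into fault_addr, or None if the fault is never discovered.

        access_trace: time-ordered list of (time, address) accesses.
        A latent error is "discovered" at the first access to the
        faulty address after the fault occurs.
        """
        for t, addr in access_trace:
            if t >= fault_time and addr == fault_addr:
                return t - fault_time
        return None

    trace = [(1, 0xA), (5, 0xB), (9, 0xA), (14, 0xC)]
    lat = error_latency(trace, 0xA, 3)   # next access to 0xA is at t=9
    ```

    Repeating this over many injection times and addresses, weighted by the observed workload, yields the latency distribution; faults whose address is never accessed again in the trace are the "undiscovered errors" the paper estimates separately.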

  11. Description and theory of operation of the computer by-pass system for the NASA F-8 digital fly-by-wire control system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A triplex digital flight control system was installed in a NASA F-8C airplane to provide fail-operate, full authority control. The triplex digital computers and interface circuitry process the pilot commands and aircraft motion feedback parameters according to the selected control laws, and they output the surface commands as an analog signal to the servoelectronics for position control of the aircraft's power actuators. The system and theory of operation of the computer bypass and servoelectronics are described and an automated ground test for each axis is included.

  12. Managing computer-controlled operations

    NASA Technical Reports Server (NTRS)

    Plowden, J. B.

    1985-01-01

    A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.

  13. Operating Systems Standards Working Group (OSSWG) Next Generation Computer Resources (NGCR) Program First Annual Report - October 1990

    DTIC Science & Technology

    1991-04-01

    Plaza Hotel 22-26 Jan 1990 Mobile, AL 9th meeting: NAVSWC, White Oak, MD 6-8 Mar 1990 10th meeting: SEI, Pittsburgh, PA 17-19 Apr 1990 11th meeting...meetings, and no cost to the Navy. Hotels are suitable, but a commitment from the hotel for meeting space may be difficult to get unless attendance can be...Workshop on Operating Systems For Mission Critical Computing, which will be held September 19-20 at the Marriott in Greenbelt, Maryland. Phil, Tricia

  14. ALMA correlator computer systems

    NASA Astrophysics Data System (ADS)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high-speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware, ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.
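    The stated rates imply a concrete per-period budget, which is easy to check (the input figures come from the abstract; the derived per-period values are simple arithmetic, not quoted from the paper):

    ```python
    # Back-of-the-envelope check of the ALMA correlator deadlines.
    output_rate = 1e9        # bytes/s, aggregate correlator output
    period = 16e-3           # s, correlator dump period
    flops = 1e9              # floating point operations per second

    bytes_per_period = output_rate * period   # data to absorb per period
    flops_per_period = flops * period         # compute budget per period

    # So the 17-PC cluster must ingest ~16 MB, and has roughly
    # 16 Mflop of aggregate budget, every 16 ms.
    ```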

  15. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synth engine). Message attributes are mapped onto audio attributes based on broad classification of the message (continuous or discrete metrics) but keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
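    A mapping of message attributes onto audio attributes might look like the following sketch (the one-octave frequency band, the event "ping", and the metric names are invented choices for illustration, not the paper's actual mapping):

    ```python
    def to_audio(metric_name, value, lo, hi, discrete=False):
        """Map a monitoring metric onto audio attributes.

        Continuous metrics become a pitch within a single octave, so
        the stream stays subtle; discrete events become a short fixed
        ping. Returns (frequency_hz, duration_s).
        """
        if discrete:
            return (880.0, 0.05)           # brief notification ping
        # Normalise into [0, 1], then map onto 220-440 Hz (one octave).
        frac = max(0.0, min(1.0, (value - lo) / (hi - lo)))
        return (220.0 + 220.0 * frac, 0.5)

    freq, dur = to_audio("job_queue_depth", 50, 0, 100)
    ```

    In an architecture like the one described, such a function would sit between the message bus (e.g. ZeroMQ subscribers) and the synth engine, which renders the (frequency, duration) pairs.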

  16. An Operational System for Subject Switching between Controlled Vocabularies: A Computational Linguistics Approach.

    ERIC Educational Resources Information Center

    Silvester, June P.; And Others

    This report describes a new automated process that pioneers full-scale operational use of subject switching by the NASA (National Aeronautics and Space Administration) Scientific and Technical Information (STI) Facility. The subject switching process routinely translates machine-readable subject terms from one controlled vocabulary into the…

  17. Implementation and Evaluation of a Core Graphics System on a VAX 11/780 Computer with a UNIX Operating System.

    DTIC Science & Technology

    1983-12-01

    Thesis AFIT/MACG/83D-8, John W. Taylor, Capt USAF ... Approved for public release; distribution unlimited ... Operating System. Thesis presented to the Faculty of the School of Mathematics of the Air Force Institute of Technology, Air University, in Partial Fulfillment

  18. Operation and Service Manual for Computer System Test Console 52E270003. Volume I.

    DTIC Science & Technology

    portion of the Inertial Guidance System in the Gemini Spacecraft during system and pre-launch testing. The TCCS is manufactured by the International ... Business Machines (IBM) Corporation of Rockville, Maryland for the McDonnell Company of St. Louis, Missouri. The report presents a description of the TCCS

  19. Pyrolaser Operating System

    NASA Technical Reports Server (NTRS)

    Roberts, Floyd E., III

    1994-01-01

    Software provides for control and acquisition of data from optical pyrometer. There are six individual programs in PYROLASER package. Provides quick and easy way to set up, control, and program standard Pyrolaser. Temperature and emissivity measurements either collected as if Pyrolaser in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. Shell supplied to allow macros, which are test-specific, added to system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun SPARC-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.

  20. An Intelligent computer-aided tutoring system for diagnosing anomalies of spacecraft in operation

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark; Lauriente, Michael; Koons, Harry C.; Gorney, David

    1993-01-01

    A new rule-based, expert system for diagnosing spacecraft anomalies is under development. The knowledge base consists of over two-hundred (200) rules and provides links to historical and environmental databases. Environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose. The system's driver translates forward chaining rules into a backward chaining sequence, prompting the user for information pertinent to the causes considered. When the user selects the novice mode, the system automatically gives detailed explanations and descriptions of terms and reasoning as the session progresses, in a sense teaching the user. As such it is an effective tutoring tool. The use of heuristics frees the user from searching through large amounts of irrelevant information and allows the user to input partial information (varying degrees of confidence in an answer) or 'unknown' to any question. The system is available on-line and uses C Language Integrated Production System (CLIPS), an expert shell developed by the NASA Johnson Space Center AI Laboratory in Houston.
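    The driver's translation of forward-style rules into a backward chaining sequence can be sketched generically (a plain-Python illustration rather than CLIPS; the rules, facts, and goal names are invented, not the actual 200-rule knowledge base):

    ```python
    def backward_chain(goal, rules, facts, asked=None):
        """Try to establish `goal` by backward chaining.

        rules: list of (conclusion, [premises]) pairs written in the
        forward style "premises imply conclusion"; facts: set of known
        atoms. Returns True if the goal can be derived.
        """
        if asked is None:
            asked = set()
        if goal in facts:
            return True
        if goal in asked:          # avoid looping on cyclic rules
            return False
        asked.add(goal)
        for conclusion, premises in rules:
            if conclusion == goal and all(
                    backward_chain(p, rules, facts, asked)
                    for p in premises):
                facts.add(goal)
                return True
        return False

    rules = [
        ("surface_charging", ["eclipse_exit", "anomaly_near_midnight"]),
        ("anomaly_near_midnight", ["anomaly_time_00_06"]),
    ]
    facts = {"eclipse_exit", "anomaly_time_00_06"}
    diagnosed = backward_chain("surface_charging", rules, set(facts))
    ```

    In the real system the leaves are not pre-loaded facts but questions put to the user, who may answer with a degree of confidence or "unknown".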

  1. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user supplied codes that contain problem dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  2. Evaluation Process Report for Next Generation Computer Resources Operating Systems Interface Baseline Selection

    DTIC Science & Technology

    1990-05-07

    Command Attn: Codes OP-04 1 Attn: Code AIR-5466 (Barry Corson ) OP-22 I Washington, DC 20361 OP-35 I OP-55 1 Commander 30 OP-94 I Technical Director 1...Attn: Gary Fisher 1 Jim Hall 1 USAF Gaithersburg, MD 20899 SSC/XPT Gunter AFB Attn: Elizabeth Crouse Advanced System Technologies, Inc. AL 36114 Attn

  3. Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks.

    PubMed

    Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P; Gerstein, Mark

    2010-05-18

    The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.
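    The top-heavy versus bottom-heavy contrast can be illustrated with a toy degree count (the miniature networks below are invented, not the paper's E. coli or Linux data):

    ```python
    def degree_profile(edges):
        """Summarise a directed regulator -> target network.

        Returns (n_regulators, n_targets_only, avg_targets_per_regulator),
        where a "regulator" is any node with outgoing edges.
        """
        regulators = {src for src, _ in edges}
        nodes = regulators | {dst for _, dst in edges}
        targets_only = nodes - regulators
        out = {}
        for src, _ in edges:
            out[src] = out.get(src, 0) + 1
        avg_out = sum(out.values()) / len(regulators)
        return len(regulators), len(targets_only), avg_out

    # Toy "transcriptional" network: one global regulator, many targets.
    trn = [("crp", g) for g in ("g1", "g2", "g3", "g4", "g5", "g6")]
    # Toy "call graph": many callers all reusing one generic routine.
    call_graph = [(f, "malloc") for f in ("f1", "f2", "f3", "f4", "f5", "f6")]

    trn_profile = degree_profile(trn)
    cg_profile = degree_profile(call_graph)
    ```

    The first profile has few regulators fanning out to many targets (the regulatory-network shape); the second has many regulators converging on a few generic functions (the call-graph shape the paper describes).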

  4. Information Technology Management: Defense Information Systems Agency Controls of the Center for Computing Services Placed in Operation and Tests of Operating Effectiveness for the Period December 1, 2005, through July 31, 2006

    DTIC Science & Technology

    2006-11-15

    F Information Technology Management Department of Defense Office of Inspector General November 15, 2006 AccountabilityIntegrityQuality Defense... Information Technology Management : Defense Information Systems Agency Controls of the Center for Computing Services Placed in Operation and Tests of

  5. GT-MSOCC - A domain for research on human-computer interaction and decision aiding in supervisory control systems. [Georgia Tech - Multisatellite Operations Control Center

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1987-01-01

    The Georgia Tech-Multisatellite Operations Control Center (GT-MSOCC), a real-time interactive simulation of the operator interface to a NASA ground control system for unmanned earth-orbiting satellites, is described. The GT-MSOCC program for investigating a range of modeling, decision aiding, and workstation design issues related to the human-computer interaction is discussed. A GT-MSOCC operator function model is described in which operator actions, both cognitive and manual, are represented as the lowest level discrete control network nodes, and operator action nodes are linked to information needs or system reconfiguration commands.

  6. An operational system for subject switching between controlled vocabularies: A computational linguistics approach

    NASA Technical Reports Server (NTRS)

    Silvester, J. P.; Newton, R.; Klingbiel, P. H.

    1984-01-01

    The NASA Lexical Dictionary (NLD), a system that automatically translates input subject terms to those of NASA, was developed in four phases. Phase One provided Phrase Matching, a context sensitive word-matching process that matches input phrase words with any NASA Thesaurus posting (i.e., index) term or Use reference. Other Use references have been added to enable the matching of synonyms, variant spellings, and some words with the same root. Phase Two provided the capability of translating any individual DTIC term to one or more NASA terms having the same meaning. Phase Three provided NASA terms having equivalent concepts for two or more DTIC terms, i.e., coordinations of DTIC terms. Phase Four was concerned with indexer feedback and maintenance. Although the original NLD construction involved much manual data entry, ways were found to automate nearly all but the intellectual decision-making processes. In addition to finding improved ways to construct a lexical dictionary, applications for the NLD have been found and are being developed.
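    The Phase One context-sensitive phrase matching can be sketched as a longest-match lookup (a minimal illustration; the lexicon entries below are invented and do not reproduce actual NASA Thesaurus data):

    ```python
    def subject_switch(input_terms, lexicon):
        """Translate controlled-vocabulary terms via a lexical dictionary.

        lexicon maps a source phrase (tuple of lowercase words) to a
        list of target terms; longer phrases are matched before
        shorter ones, and unmatched words pass through unchanged.
        """
        words = [w.lower() for w in input_terms]
        result, i = [], 0
        max_len = max(len(k) for k in lexicon)
        while i < len(words):
            for n in range(min(max_len, len(words) - i), 0, -1):
                phrase = tuple(words[i:i + n])
                if phrase in lexicon:
                    result.extend(lexicon[phrase])
                    i += n
                    break
            else:
                result.append(words[i])   # no translation: pass through
                i += 1
        return result

    lexicon = {
        ("remotely", "piloted", "vehicles"): ["REMOTELY PILOTED VEHICLES"],
        ("lasers",): ["LASERS"],
    }
    out = subject_switch(["Lasers", "remotely", "piloted", "vehicles"],
                         lexicon)
    ```

    Phases Two and Three amount to extending the lexicon so that one source term can map to several target terms, and a coordination of several source terms can map to one target term.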

  7. An Interactive System of Computer Generated Graphic Displays for Motivating Meaningful Learning of Matrix Operations and Concepts of Matrix Algebra

    DTIC Science & Technology

    1990-09-01

    autonomous discovery learning, where the learner identifies and selects the information to be learned (1:7). Figure I shows how the two continua form a...using computer-generated graphics; and a fortuitous consequence of selecting matrix operations was that the operations form a family of concepts that...blinking prompts, and some animation. Although an extensive organon exists about the visual aspects of information display, this study relied solely on

  8. Microrover Operates With Minimal Computation

    NASA Technical Reports Server (NTRS)

    Miller, David P.; Loch, John L.; Gat, Erann; Desai, Rajiv S.; Angle, Colin; Bickler, Donald B.

    1992-01-01

    Small, light, highly mobile robotic vehicles called "microrovers" use sensors and artificial intelligence to perform complicated tasks autonomously. Vehicle navigates, avoids obstacles, and picks up objects using reactive control scheme selected from among few preprogrammed behaviors to respond to environment while executing assigned task. Under development for exploration and mining of other planets. Also useful in firefighting, cleaning up chemical spills, and delivering materials in factories. Reactive control scheme and principle of behavior-description language useful in reducing computational loads in prosthetic limbs and automotive collision-avoidance systems.

  9. Broadcasting collective operation contributions throughout a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
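    The two-phase pattern described in the claim can be simulated in plain Python (a sketch of the message flow only, under the assumption of equal processor counts per node; node counts and contribution values are invented):

    ```python
    def broadcast_contributions(nodes):
        """Two-phase broadcast of per-processor contributions.

        nodes: list of compute nodes, each a list of that node's
        processors' contributions. Phase 1: processors on a node
        exchange contributions intra-node. Phase 2: each processor,
        following a serial processor transmission sequence, sends its
        contribution to the other nodes inter-node. Returns the set
        each processor ends up with, indexed [node][processor].
        """
        n_nodes, n_procs = len(nodes), len(nodes[0])
        recv = [[set() for _ in range(n_procs)] for _ in range(n_nodes)]

        # Phase 1: intra-node exchange among a node's processors.
        for ni, node in enumerate(nodes):
            for pi in range(n_procs):
                recv[ni][pi].update(node)

        # Phase 2: serial transmission sequence over the designated
        # network link; each node's pi-th processor sends in turn.
        for pi in range(n_procs):
            for ni, node in enumerate(nodes):
                for other in range(n_nodes):
                    if other != ni:
                        for qi in range(n_procs):
                            recv[other][qi].add(node[pi])
        return recv

    nodes = [["a0", "a1"], ["b0", "b1"]]
    recv = broadcast_contributions(nodes)
    ```

    Serializing the inter-node phase per processor models the single designated link per node: only one processor per node drives the network at a time, while the intra-node phase is cheap shared-memory traffic.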

  10. Future Operating Concept - Joint Computer Network Operations

    DTIC Science & Technology

    2010-02-17

    of explosive vests in a central market, a beheading captured in streaming video, precise cyber/space/ missile strikes, or Know the enemy and know...cross-theater campaigns. 18 USSTRATCOM integrates space, global strike, ISR, network warfare, and missile defense into functional commands 63...These operations are "highly dynamic and maneuverable with transitions between F2T2EA phases nearly instantaneously." Integrating effects based

  11. Computer control for remote wind turbine operation

    SciTech Connect

    Manwell, J.F.; Rogers, A.L.; Abdulwahid, U.; Driscoll, J.

    1997-12-31

    Lightweight wind turbines located in harsh, remote sites require particularly capable controllers. Based on extensive operation of the original ESI-807 moved to such a location, a much more sophisticated controller than the original one has been developed. This paper describes the design, development and testing of that new controller. The complete control and monitoring system consists of sensor and control inputs, the control computer, control outputs, and additional equipment. The control code was written in Microsoft Visual Basic on a PC type computer. The control code monitors potential faults and allows the turbine to operate in one of eight states: off, start, run, freewheel, low wind shutdown, normal wind shutdown, emergency shutdown, and blade parking. The controller also incorporates two "virtual wind turbines," including a dynamic model of the machine, for code testing. The controller can handle numerous situations for which the original controller was unequipped.
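    The eight-state structure suggests a simple state machine (the state names come from the abstract, but the transition thresholds and logic below are invented for illustration, not the actual ESI-807 control law):

    ```python
    # States named in the abstract; "freewheel" and "blade_parking"
    # are listed but not exercised in this tiny example.
    STATES = ("off", "start", "run", "freewheel", "low_wind_shutdown",
              "normal_wind_shutdown", "emergency_shutdown", "blade_parking")

    def next_state(state, wind_mps, fault):
        """One step of a hypothetical turbine controller."""
        if fault:
            return "emergency_shutdown"
        if state == "off" and 4.0 <= wind_mps <= 25.0:
            return "start"
        if state == "start" and wind_mps >= 4.0:
            return "run"
        if state == "run":
            if wind_mps < 4.0:
                return "low_wind_shutdown"
            if wind_mps > 25.0:
                return "normal_wind_shutdown"
            return "run"
        if state in ("low_wind_shutdown", "normal_wind_shutdown"):
            return "off"
        return state

    s = "off"
    for wind in (5.0, 5.0, 30.0):      # start-up, then a gust above cut-out
        s = next_state(s, wind, fault=False)
    ```

    Keeping fault handling as an unconditional transition, as here, is what lets a controller of this kind react to faults from any operating state.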

  12. Computer System Maintenance and Enhancements

    DTIC Science & Technology

    1989-02-23

    Modular Computer Systems Monitor Monitor Computer MVS IBM’s Multiple Virtual Operating System PCAL Pressure CALibration PLC Programmable Logic Controller PLCI... Programmable Logic Controller #1 PLC2 Programmable Logic Controller #2 POTX Propulsion Technology Preston Analog to digital signal converter

  13. IMES-Ural: the system of the computer programs for operational analysis of power flow distribution using telemetric data

    SciTech Connect

    Bogdanov, V.A.; Bol'shchikov, A.A.; Zifferman, E.O.

    1981-02-01

    A system of computer programs was described which enabled the user to perform real-time calculation and analysis of the current flow in the 500 kV network of the Ural Regional Electric Power Plant for all possible variations of the network, based on teleinformation and correctable equivalent parameters of the 220 to 110 kV network.

  14. [STATE OF AUTONOMIC NERVOUS SYSTEM OF MEDICAL HIGHER SCHOOL STUDENTS AND ITS RELATIONS WITH THEIR PHYSICAL ACTIVITY AND OPERATION WITH THE COMPUTER].

    PubMed

    Korovina, L D; Zaporozhets, T M

    2015-01-01

    The state of the autonomic nervous system of first- and second-year students, and its relation to their physical activity and to their experience and daily duration of computer use, was explored. The observed basal vegetative tonus was mainly vagotonia and eutonia (93.2%); vegetative reactivity was mainly asympathetic or sympathetic (77.0%). Excitability of the sympathetic part of the autonomic nervous system in the orthostatic test was normal in 61.1% of the students, while reactivity of the parasympathetic part in the oculocardiac test was mainly normal or reduced (72.8% of students). Working with a computer for 3 hours or more a day enhances sympathetic influences on heart activity, but an increase in total working time promotes a relative decrease of sympathetic tonus. Vegetative reactivity grows both with increased operating time with the computer and with increased duration of sports exercise.

  15. Computer Center: CIBE Systems.

    ERIC Educational Resources Information Center

    Crovello, Theodore J.

    1982-01-01

    Differentiates between computer systems and Computers in Biological Education (CIBE) systems (computer system intended for use in biological education). Describes several CIBE stand alone systems: single-user microcomputer; single-user microcomputer/video-disc; multiuser microcomputers; multiuser maxicomputer; and local and long distance computer…

  16. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1984-01-01

    This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.

  17. Advanced Operating System Technologies

    NASA Astrophysics Data System (ADS)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and current research in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, video on demand, and distributed multimedia applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel based operating systems field and in the software engineering areas. 
The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC

  18. Computational Linguistics in Military Operations

    DTIC Science & Technology

    2010-01-01

    information dominance at the operational and tactical level of war in future warfare. Discussion: Mastering the culture and language of a foreign country is decisive for understanding the operational environment. In addition, the ability to understand and speak a foreign language is a prerequisite to achieving true comprehension of an unfamiliar culture. Lasting operations in Afghanistan and Iraq and the necessity of bridging the language gap led to progress in the field of Machine Translation and the development of technical solutions to close the gap in the past decade. This paper

  19. Planning Systems for Distributed Operations

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.

    2002-01-01

    This viewgraph representation presents an overview of the mission planning process involving distributed operations (such as the International Space Station (ISS)) and the computer hardware and software systems needed to support such an effort. Topics considered include: evolution of distributed planning systems, ISS distributed planning, the Payload Planning System (PPS), future developments in distributed planning systems, Request Oriented Scheduling Engine (ROSE) and Next Generation distributed planning systems.

  20. Implications of Using Computer-Based Training with the AN/SQQ-89(v) Sonar System: Operating and Support Costs

    DTIC Science & Technology

    2012-06-01

    Defense Science Board ECR Electronic Classroom ERNT Executive Review of Navy Training ETS Engineering and Technical Services EXCEL Excellence...Delivery Systems for Web-Based Technology In A schools, CBT is conducted in an electronic classroom ( ECR ) environment. The ECR consists of several... ECRs . The average age of the computers was approximately 6 years (Naval Inspector General, 21 2009, p. 5). The IG group found that most ECRs

  1. Computer Security for the Computer Systems Manager.

    DTIC Science & Technology

    1982-12-01

    concern of computer security is the auditing of the system in both the normal and standby modes of operation (Ref. 2: p. 2). Risk management is the...planning and auditing will be treated in Chapter six. B. COST EFFECTIVENESS DETERMINATION As discussed before, the third part of risk analysis is the...to physical security and depend upon some of the following considerations: * physical location * availability of fire and law enforcement services

  2. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based system is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  3. Payload operation television system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Payload Operation Television System is a high performance closed-circuit TV system designed to determine the feasibility of using TV to augment purely visual monitoring of operations, and to establish optimum system design of an operating unit which can ultimately be used to assist the operator of a remotely manipulated space-borne cargo loading device. The TV system assembled on this program is intended for laboratory experimentation which would develop operational techniques and lead to the design of space-borne TV equipment whose purpose would be to assist the astronaut-operator aboard a space station to load payload components. The equipment consists principally of a good quality TV camera capable of high resolving power; a TV monitor; a sync generator for driving camera and monitor; and two pan/tilt units which are remotely controlled by the operator.

  4. Operator Performance Support System (OPSS)

    NASA Technical Reports Server (NTRS)

    Conklin, Marlen Z.

    1993-01-01

    In the complex and fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the technologies that have confounded many operators today can also be used to assist him -- thus the Operator Performance Support System. In this paper we propose an operator support station that incorporates the elements of Video and Image Databases, Productivity Software, Interactive Computer-Based Training, Hypertext/Hypermedia Databases, Expert Programs, and Human Factors Engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that will guide the expert or novice to correct systems operation. Although the OPSS is being developed for the Navy, the performance of the workforce in today's competitive industry is also of major concern; the concepts presented in this paper, which address ASW systems software design issues, are directly applicable to industry. The OPSS will propose practical applications for more closely aligning the relationships between technical knowledge and equipment operator performance.

  5. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.

  6. Development of a novel computational tool for optimizing the operation of fuel cells systems: Application for phosphoric acid fuel cells

    NASA Astrophysics Data System (ADS)

    Zervas, P. L.; Tatsis, A.; Sarimveis, H.; Markatos, N. C. G.

    Fuel cells offer a significant and promising clean technology for portable, automotive and stationary applications and, thus, optimization of their performance is of particular interest. In this study, a novel optimization tool is developed that realistically describes and optimizes the performance of fuel cell systems. First, a 3D steady-state detailed model is produced based on computational fluid dynamics (CFD) techniques. Simulated results obtained from the CFD model are used in a second step, to generate a database that contains the fuel and oxidant volumetric rates and utilizations and the corresponding cell voltages. In the third step mathematical relationships are developed between the input and output variables, using the database that has been generated in the previous step. In particular, the linear regression methodology and the radial basis function (RBF) neural network architecture are utilized for producing the input-output "meta-models". Several statistical tests are used to validate the proposed models. Finally, a multi-objective hierarchical Non-Linear Programming (NLP) problem is formulated that takes into account the constraints and limitations of the system. The multi-objective hierarchical approach is built upon two steps: first, the fuel volumetric rate is minimized, recognizing the fact that our first concern is to reduce consumption of the expensive fuel. In the second step, optimization is performed with respect to the oxidant volumetric rate. The proposed method is illustrated through its application for phosphoric acid fuel cell (PAFC) systems.
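
    The three-step surrogate pipeline described above (simulation database, RBF meta-model, hierarchical optimization) can be sketched in miniature. The training data, kernel width, and voltage constraint below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Hypothetical "database" from a CFD model: fuel volumetric rate (x) vs. cell
# voltage (y). In the paper this comes from 3-D CFD runs; here it is synthetic.
x_train = np.linspace(0.5, 2.0, 8).reshape(-1, 1)
y_train = 0.8 - 0.1 * (x_train - 1.2) ** 2  # toy voltage curve

def rbf_fit(x, y, width=0.5):
    """Fit a Gaussian RBF meta-model: y ~ sum_i w_i exp(-(|x - x_i| / width)^2)."""
    phi = np.exp(-(np.abs(x - x.T) / width) ** 2)
    return np.linalg.solve(phi, y)

def rbf_predict(x_new, x, w, width=0.5):
    """Evaluate the fitted meta-model at new points."""
    phi = np.exp(-(np.abs(x_new - x.T) / width) ** 2)
    return phi @ w

# First level of the hierarchical optimization: minimize the fuel rate subject
# to a minimum-voltage constraint, via a grid search over the meta-model.
w = rbf_fit(x_train, y_train)
grid = np.linspace(0.5, 2.0, 301).reshape(-1, 1)
voltage = rbf_predict(grid, x_train, w)
feasible = grid[(voltage >= 0.75).ravel()]
best_fuel_rate = float(feasible.min())
```

    The second optimization level (oxidant rate) would repeat the same pattern with the fuel rate fixed at its optimum.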

  7. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator in the loop system. The differences between the two programs are presented. The SCIDNT program is an open loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
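
    The state-space-to-transfer-function conversion performed by the TF program amounts to evaluating H(s) = C(sI - A)^(-1)B + D. A minimal sketch; the first-order example system is illustrative, not from the report:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^(-1) B + D for a state-space model."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Illustrative first-order lag: dx/dt = -2x + u, y = x, so H(s) = 1/(s + 2).
A = np.array([[-2.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

dc_gain = transfer_function(A, B, C, D, s=0j).item()  # H(0) = 1/2
H_at_2j = transfer_function(A, B, C, D, s=2j).item()  # H(2j) = 1/(2 + 2j)
```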

  8. Computations involving differential operators and their actions on functions

    NASA Technical Reports Server (NTRS)

    Crouch, Peter E.; Grossman, Robert; Larson, Richard

    1991-01-01

    The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.

  9. Expert-System Consultant To Operating Personnel

    NASA Technical Reports Server (NTRS)

    Heard, Astrid E.; Pinkowski, Patrick P.; Adler, Richard M.; Hosken, R. Bruce

    1992-01-01

    Artificial intelligence aids engineers and technicians in controlling and monitoring complicated systems. Operations Analyst for Distributed Systems (OPERA) software is developmental suite of expert-system computer programs helping engineers and technicians operating from number of computer workstations to control and monitor spacecraft during prelaunch and launch phases of operation. OPERA designed to serve as consultant to operating engineers and technicians. It preprocesses incoming data, using expertise collected from conglomerate of specialists in design and operation of various parts of system. Driven by menus and mouse-activated commands. Modified versions of OPERA used in chemical-processing plants, factories, banks, and other enterprises in which there are distributed-computer systems including computers that monitor or control other computers.

  10. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  11. Artificial intelligence program in a computer application supporting reactor operations

    SciTech Connect

    Stratton, R.C.; Town, G.G.

    1985-01-01

    Improving nuclear reactor power plant operability is an ever-present concern for the nuclear industry. The definition of plant operability involves a complex interaction of the ideas of reliability, safety, and efficiency. This paper presents observations concerning the issues involved and the benefits derived from the implementation of a computer application which combines traditional computer applications with artificial intelligence (AI) methodologies. A system, the Component Configuration Control System (CCCS), is being installed to support nuclear reactor operations at the Experimental Breeder Reactor II.

  12. Implementation of NASTRAN on the IBM/370 CMS operating system

    NASA Technical Reports Server (NTRS)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  13. Computing Fourier integral operators with caustics

    NASA Astrophysics Data System (ADS)

    Caday, Peter

    2016-12-01

    Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards, and forward and inverse discrete Fourier transforms, which can be computed in O(N^{n+(n-1)/2} log N) time for an n-dimensional FIO. The same preprocessed data also allows computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm's output, and easily extendible MATLAB/C++ source code is available from the author.

  14. Payload operation television system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The TV system assembled is intended for laboratory experimentation which would develop operational techniques and lead to the design of space-borne TV equipment whose purpose would be to assist the astronaut-operator aboard a space station to load payload components. The TV system assembled for this program is a black and white, monocular, high performance system. The equipment consists principally of a good quality TV camera capable of high resolving power; a TV monitor; a sync generator for driving camera and monitor; and two pan/tilt units which are remotely controlled by the operator. One pan/tilt unit provides control of the pointing of the camera, the other similarly controls the position of a simulated payload.

  15. Computer graphics aid mission operations. [NASA missions

    NASA Technical Reports Server (NTRS)

    Jeletic, James F.

    1990-01-01

    The application of computer graphics techniques in NASA space missions is reviewed. Telemetric monitoring of the Space Shuttle and its components is discussed, noting the use of computer graphics for real-time visualization problems in the retrieval and repair of the Solar Maximum Mission. The use of the world map display for determining a spacecraft's location above the earth and the problem of verifying the relative position and orientation of spacecraft to celestial bodies are examined. The Flight Dynamics/STS Three-dimensional Monitoring System and the Trajectory Computations and Orbital Products System world map display are described, emphasizing Space Shuttle applications. Consideration is also given to the development of monitoring systems such as the Shuttle Payloads Mission Monitoring System and the Attitude Heads-Up Display, and to the use of the NASA-Goddard Two-dimensional Graphics Monitoring System during Shuttle missions and in support of the Hubble Space Telescope.

  16. Computer model for refinery operations with emphasis on jet fuel production. Volume 3: Detailed systems and programming documentation

    NASA Technical Reports Server (NTRS)

    Dunbar, D. N.; Tunnah, B. G.

    1978-01-01

    The FORTRAN computer program predicts flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on the production of aviation turbine fuels of varying end point and hydrogen content specifications. The program has a provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.
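
    The case-study feature, in which a dependent case respecifies only the variables that differ from the base case, can be sketched as a simple override merge. The variable names below are hypothetical, not from the actual model:

```python
def run_case(base_case, overrides):
    """Dependent-case sketch: copy the base case and change only the
    variables supplied as overrides, leaving the base case untouched."""
    case = dict(base_case)
    case.update(overrides)
    return case

# Hypothetical base-case variables (illustrative names and values only).
base = {"crude_rate_bpd": 100_000, "jet_endpoint_degF": 520, "h2_content_pct": 13.5}

# A parametric case that changes only the jet fuel end point.
lower_endpoint = run_case(base, {"jet_endpoint_degF": 480})
```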

  17. Computer-aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support the Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  18. A Prototype System for a Computer-Based Statewide Film Library Network: A Model for Operation. Number 3, Statewide Film Library Network: System Write-Up.

    ERIC Educational Resources Information Center

    Auricchio, Dominick

    An overview of materials scheduling, this write-up outlines system components, standardization, costs, limitations, and expansion capabilities of the New York Statewide Film Library Network. Interacting components include research staff; materials libraries; hardware; input/output (operation modes, input format conventions, transaction codes);…

  19. Operational computer graphics in the flight dynamics environment

    NASA Technical Reports Server (NTRS)

    Jeletic, James F.

    1989-01-01

    Over the past five years, the Flight Dynamics Division of the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center has incorporated computer graphics technology into its operational environment. In an attempt to increase the effectiveness and productivity of the Division, computer graphics software systems have been developed that display spacecraft tracking and telemetry data in 2-d and 3-d graphic formats that are more comprehensible than the alphanumeric tables of the past. These systems vary in functionality from real-time mission monitoring systems, to mission planning utilities, to system development tools. Here, the capabilities and architecture of these systems are discussed.

  20. Computational Systems Biology

    SciTech Connect

    McDermott, Jason E.; Samudrala, Ram; Bumgarner, Roger E.; Montogomery, Kristina; Ireton, Renee

    2009-05-01

    Computational systems biology is the term that we use to describe computational methods to identify, infer, model, and store relationships between the molecules, pathways, and cells (“systems”) involved in a living organism. Based on this definition, the field of computational systems biology has been in existence for some time. However, the recent confluence of high-throughput methodology for biological data gathering, genome-scale sequencing, and computational processing power has driven a reinvention and expansion of this field. The expansions include not only modeling of small metabolic (Ishii 2004; Ekins 2006; Lafaye 2005) and signaling systems (Stevenson-Paulik 2006; Lafaye 2005) but also modeling of the relationships between biological components in very large systems, including whole cells and organisms (Ideker 2001; Pe'er 2001; Pilpel 2001; Ideker 2002; Kelley 2003; Shannon 2003; Ideker 2004; Schadt 2003; Schadt 2006; McDermott 2002; McDermott 2005). Generally these models provide a general overview of one or more aspects of these systems and leave the determination of details to experimentalists focused on smaller subsystems. The promise of such approaches is that they will elucidate patterns, relationships, and general features that are not evident from examining specific components or subsystems. These predictions are either interesting in and of themselves (for example, the identification of an evolutionary pattern), or are interesting and valuable to researchers working on a particular problem (for example, highlighting a previously unknown functional pathway). Two events have occurred to bring the field of computational systems biology to the forefront. One is the advent of high throughput methods that have generated large amounts of information about particular systems in the form of genetic studies, gene expression analyses (both protein and

  1. Computer control improves ethylene plant operation

    SciTech Connect

    Whitehead, B.D.; Parnis, M.

    1987-11-01

    ICI Australia ordered a turnkey 250,000-tpy ethylene plant to be built at the Botany site, Sydney, Australia. Following a feasibility study, an additional order was placed for a process computer system for advanced process control and optimization. This article gives a broad outline of the process computer tasks, how the tasks were implemented, what problems were met, what lessons were learned, and what results were achieved.

  2. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources, resources not owned by or a priori configured for CMS, to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  3. Enabling opportunistic resources for CMS Computing Operations

    SciTech Connect

    Hufnagel, Dick

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  4. Enabling opportunistic resources for CMS Computing Operations

    NASA Astrophysics Data System (ADS)

    Hufnagel, D.; CMS Collaboration

    2015-12-01

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources, resources not owned by or a priori configured for CMS, to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  5. Enabling opportunistic resources for CMS Computing Operations

    SciTech Connect

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources, resources not owned by or a priori configured for CMS, to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  6. Brain computer interface for operating a robot

    NASA Astrophysics Data System (ADS)

    Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed

    2013-10-01

    A Brain-Computer Interface (BCI) is a hardware/software based system that translates the Electroencephalogram (EEG) signals produced by brain activity to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals from trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions that are instructed to it in real time. We have used cognitive states like Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend of the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
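
    The sensitivity and specificity figures quoted above reduce to simple confusion-matrix ratios. A sketch with synthetic command-detection labels (not the paper's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic run: 1 = "Push" command intended, 0 = rest state.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
```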

  7. Evaluation of the level of skill required of operators of a computer-assisted radiologic total lung capacity measurement system

    SciTech Connect

    Mazzeo, J.

    1985-01-01

    This research was conducted to obtain information regarding the feasibility of using non-medical personnel to obtain measurements of radiologic total lung capacity (TLC). Operators from each of four groups (general undergraduates, nursing students, medical students, radiologists) differing in the amount of medical training and/or experience reading x-rays, performed each of two tasks. The first task was the measurement of radiologic TLC for a set of twenty x-rays. The second task consisted of tracing the outline of the anatomical structures that must be identified in the execution of the radiologic TLC measurement task. Data from the radiologic TLC measurement task were used to identify possible group differences in the reliability and validity of the measures. The reliability analyses were performed within the framework of Generalizability Theory. While the results are not conclusive, due to small sample sizes, the analyses suggest that group differences in the reliability of the measures, if they exist, are small.

  8. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  9. MODELS-3 INSTALLATION PROCEDURES FOR A PERSONAL COMPUTER WITH A NT OPERATING SYSTEM (MODELS-3 VERSION 4.1)

    EPA Science Inventory

    Models-3 is a flexible system designed to simplify the development and use of air quality models and other environmental decision support tools. It is designed for applications ranging from regulatory and policy analysis to understanding the complex interactions of atmospheric...

  10. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in realtime and high quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  11. Computer Systems Technician.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This document contains 17 units to consider for use in a tech prep competency profile for the occupation of computer systems technician. All the units listed will not necessarily apply to every situation or tech prep consortium, nor will all the competencies within each unit be appropriate. Several units appear within each specific occupation and…

  12. Pacing a data transfer operation between compute nodes on a parallel computer

    DOEpatents

    Blocksome, Michael A.

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
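
    The claimed pacing sequence (transfer a chunk, issue a pacing request, wait for the pacing response, then send the next chunk) can be sketched with the DMA engines abstracted away. The class and queue below are illustrative stand-ins, not the patented implementation:

```python
import queue

class Target:
    """Hypothetical target node: buffers chunks and answers pacing requests,
    standing in for the target DMA engine."""
    def __init__(self):
        self.buffer = []

    def receive(self, chunk):
        self.buffer.append(chunk)

    def pacing_request(self, reply_queue):
        # A real target DMA engine would reply when it has drained its
        # reception FIFO; here we acknowledge immediately.
        reply_queue.put("pacing-response")

def paced_transfer(message, chunk_size, target):
    """Origin-side pacing loop: one chunk per pacing round trip."""
    responses = queue.Queue()
    sent = []
    for i in range(0, len(message), chunk_size):
        chunk = message[i:i + chunk_size]
        target.receive(chunk)             # transfer the chunk
        target.pacing_request(responses)  # remote-get-style pacing request
        responses.get()                   # block until the pacing response
        sent.append(chunk)
    return sent

t = Target()
chunks = paced_transfer(b"0123456789", 4, t)
```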

  13. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  14. Apu/hydraulic/actuator Subsystem Computer Simulation. Space Shuttle Engineering and Operation Support, Engineering Systems Analysis. [for the space shuttle

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Major developments are examined which have taken place to date in the analysis of the power and energy demands on the APU/Hydraulic/Actuator Subsystem for the space shuttle during the entry-to-touchdown (not including rollout) flight regime. These developments are given in the form of two subroutines which were written for use with the Space Shuttle Functional Simulator. The first subroutine calculates the power and energy demand on each of the three hydraulic systems due to control surface (inboard/outboard elevons, rudder, speedbrake, and body flap) activity. The second subroutine incorporates the R. I. priority rate limiting logic which limits control surface deflection rates as a function of the number of failed hydraulic systems. Typical results of this analysis are included, and listings of the subroutines are presented in appendices.

  15. SEASAT economic assessment. Volume 10: The SATIL 2 program (a program for the evaluation of the costs of an operational SEASAT system as a function of operational requirements and reliability. [computer programs for economic analysis and systems analysis of SEASAT satellite systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
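
    The probability distribution of events such as the number of launch attempts can be approximated by a simple Monte Carlo sketch. The per-launch success probability below is an assumed illustrative value, not a figure from the SEASAT study:

```python
import random

def launch_attempts_distribution(p_success, n_trials, seed=0):
    """Monte Carlo sketch: distribution of the number of launch attempts
    needed to place one spacecraft in orbit, given a per-launch success
    probability. Returns a histogram {attempts: count}."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_trials):
        attempts = 1
        while rng.random() > p_success:  # launch failed; try again
            attempts += 1
        counts[attempts] = counts.get(attempts, 0) + 1
    return counts

dist = launch_attempts_distribution(p_success=0.9, n_trials=100_000)
mean_attempts = sum(k * v for k, v in dist.items()) / 100_000
# For a geometric distribution the mean is 1 / p_success.
```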

  16. Computational Systems Chemical Biology

    PubMed Central

    Oprea, Tudor I.; May, Elebeoba E.; Leitão, Andrei; Tropsha, Alexander

    2013-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology / systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology. PMID:20838980

  17. Computational systems chemical biology.

    PubMed

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We have called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is as yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  18. Analysis of C-shaped canal systems in mandibular second molars using surgical operating microscope and cone beam computed tomography: A clinical approach

    PubMed Central

    Chhabra, Sanjay; Yadav, Seema; Talwar, Sangeeta

    2014-01-01

    Aims: This study aimed to acquire a better understanding of C-shaped canal systems in mandibular second molar teeth through a clinical approach using sophisticated techniques such as the surgical operating microscope and cone beam computed tomography (CBCT). Materials and Methods: A total of 42 extracted mandibular second molar teeth with fused roots and longitudinal grooves were collected randomly from a native Indian population. The pulp chamber floors of all specimens were examined under a surgical operating microscope and classified into four types (Min's method). Subsequently, the samples were subjected to a CBCT scan after insertion of K-files of size #10 or 15 into each canal orifice and evaluated using cross-sectional and 3-dimensional images in consultation with a dental radiologist so as to obtain more accurate results. The minimum distance between the external root surface on the groove and the initial file placed in the canal was also measured at different levels and statistically analyzed. Results: Of the 42 teeth, the largest number of samples (15) belonged to the Type-II category. A total of 100 files were inserted in 86 orifices of the various types of specimens. Evaluation of the CBCT scan images revealed that a total of 21 canals were missing completely or partially at different levels. The mean values for the minimum thickness were highest at the coronal level, followed by the middle and apical third levels, in all categories. The lowest values were obtained for Type-III teeth at all three levels. Conclusions: The present study revealed anatomical variations of the C-shaped canal system in mandibular second molars. The prognosis for such complex canal anatomies can be improved by the simultaneous employment of modern techniques such as the surgical operating microscope and CBCT. PMID:24944447

  19. Technical computing system evaluations

    SciTech Connect

    Shaw, B.R.

    1987-05-01

    The acquisition of technical computing hardware and software is an extremely personal process. Although most commercial system configurations have one of several general organizations, the individual requirements of the purchaser can have a large impact on successful implementation even though differences between products may seem small. To assure adequate evaluation and appropriate system selection, it is absolutely essential to establish written goals, create a real benchmark data set and testing procedure, and finally test and evaluate the system using the purchaser's technical staff, not the vendor's. BHP P(A) (formerly Monsanto Oil Company) was given the opportunity to acquire a technical computing system that would meet the needs of the geoscience community, provide future growth avenues, and maintain corporate hardware and software standards of stability and reliability. The system acquisition team consisted of a staff geologist, a geophysicist, and the manager of information systems. The eight-month evaluation allowed the team to develop procedures for personalizing the evaluation to BHP's needs as well as for assessing the vendors' products. The goal-driven benchmark process has become the standard procedure for system additions and expansions as well as for product acceptance evaluations.

  20. The Remote Computer Control (RCC) system

    NASA Technical Reports Server (NTRS)

    Holmes, W.

    1980-01-01

    A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system, a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition, the system can be instructed by the user to call back when a job is finished. Because of this system, every touchtone telephone becomes a conversant computer peripheral. This system, known as the Remote Computer Control (RCC) system, utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor based and currently uses the INTEL 80/30 microcomputer. Using the RCC system, a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.
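    The submit/cancel/status interaction above can be sketched as a small command dispatcher. The touchtone encoding (action digit, `*`, job identifier, `#`) and the action codes are illustrative assumptions, not the actual RCC protocol.

```python
# Hypothetical sketch of an RCC-style dispatcher: touchtone digit
# sequences are mapped to job-control actions on the host computer.
# The '1*<id>#' command format and action codes are invented here.

JOBS = {}  # job_id -> status on the simulated host

def handle_sequence(sequence):
    """Interpret a touchtone sequence like '1*1234#' (action, job id)."""
    action, _, rest = sequence.partition("*")
    job_id = rest.rstrip("#")
    if action == "1":                      # submit a job
        JOBS[job_id] = "QUEUED"
        return f"Job {job_id} submitted"
    if action == "2":                      # cancel a job
        if JOBS.pop(job_id, None) is not None:
            return f"Job {job_id} cancelled"
        return f"Job {job_id} not found"
    if action == "3":                      # query job status
        return f"Job {job_id}: {JOBS.get(job_id, 'NOT FOUND')}"
    return "Invalid command"
```

    In the real system the returned strings would be rendered as voice output, and a callback number would be stored so the host could phone the user on job completion.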

  1. Cloud Computing for Mission Design and Operations

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios

    2012-01-01

    The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired by the cloud computing model: associative, elastic, semantic, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.

  2. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in an asynchronous or synchronized manner, and is physically and logically partitionable.
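    The two collective operations named in the abstract, upstream reduction and downstream broadcast, can be illustrated in software. This is only a sketch of the data movement over an implicit binary tree; the patented system performs these steps in dedicated router hardware.

```python
# Software sketch of the two global tree operations: a reduction that
# combines values upstream from the leaves to the root, and a broadcast
# that pushes the root's value downstream to every node. The implicit
# binary-tree layout over a node list is an assumption for illustration.

def tree_reduce(values, op):
    """Combine one value per node up an implicit binary tree to node 0."""
    acc = list(values)
    for node in range(len(values) - 1, 0, -1):   # leaves first, root last
        parent = (node - 1) // 2
        acc[parent] = op(acc[parent], acc[node])
    return acc[0]                                # root holds the result

def tree_broadcast(root_value, n):
    """Every node receives the root's value via its parent."""
    received = [None] * n
    received[0] = root_value
    for node in range(1, n):
        received[node] = received[(node - 1) // 2]
    return received
```

    Any associative operator (sum, max, bitwise AND for barriers) can serve as `op`, which is what makes a single tree network usable for reductions, barriers, and interrupts alike.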

  3. APS control system operating system choice

    SciTech Connect

    Knott, M.; Kraimer, M.; Lenkszus, F.

    1990-05-01

    The purpose of this document is to set down the reasons and decisions regarding what is an important choice for the APS Control System design staff, namely the choice of an operating system for its principal computer resources. Since the choice also may affect cost estimates and the design handbook, there is a further need to document the process. The descriptions and explanations which follow are intended for reading by other APS technical area managers and will contain a minimum of buzz-words, and where buzz-words are used, they will be explained. The author hopes that it will help in understanding the current trends and developments in the volatile and fast-developing computer field.

  4. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality, in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous `valid state` was noted.
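    The strong/weak link distinction at the heart of the patent can be sketched as follows. The class and field names are invented for illustration; the patent's pointers carry this bookkeeping inside the pointer structure itself rather than in separate objects.

```python
# Hedged sketch of the "intelligent pointer" idea: each link records
# whether it is strong (keeps its target alive) or weak (does not),
# and the target is collected once its last strong link is broken.
# Names (Obj, Link, break_link) are illustrative, not from the patent.

class Obj:
    def __init__(self, name):
        self.name = name
        self.strong_refs = 0      # count of strong links pointing here
        self.alive = True

class Link:
    def __init__(self, target, strong=True):
        self.target, self.strong = target, strong
        if strong:
            target.strong_refs += 1

    def break_link(self):
        if self.strong:
            self.target.strong_refs -= 1
            if self.target.strong_refs == 0:
                self.target.alive = False   # "garbage collected"
```

    A weak link lets two objects reference each other without either keeping the other alive, which is how such schemes avoid the reference cycles that defeat plain reference counting.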

  5. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  6. The CESR computer control system

    NASA Astrophysics Data System (ADS)

    Helmke, R. G.; Rice, D. H.; Strohman, C.

    1986-06-01

    The control system for the Cornell Electron Storage Ring (CESR) has functioned satisfactorily since its implementation in 1979. Key characteristics are fast tuning response, almost exclusive use of FORTRAN as a programming language, and efficient coordinated ramping of CESR guide field elements. This original system has not, however, been able to keep pace with the increasing complexity of operation of CESR associated with performance upgrades. Limitations in address space, expandability, access to data system-wide, and program development impediments have prompted the undertaking of a major upgrade. The system under development accommodates up to 8 VAX computers for all applications programs. The database and communications semaphores reside in a shared multi-ported memory, and each hardware interface bus is controlled by a dedicated 32-bit microprocessor in a VME-based system.

  7. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
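    The pipeline described above, features from the time signal, then a trained classifier, can be sketched compactly. The two features here (energy and zero-crossing rate) are common acoustic choices but not necessarily the authors', and a nearest-centroid rule stands in for the paper's multi-layer feed-forward neural network.

```python
# Illustrative sketch of an acoustic aircraft classifier: extract
# simple time-domain features, then assign the label of the nearest
# class centroid. This is a stand-in for the paper's neural network;
# features and centroids here are assumptions for demonstration.
import math

def features(signal):
    energy = sum(x * x for x in signal) / len(signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)
    return (energy, zcr)

def classify(signal, centroids):
    """centroids: label -> feature tuple, e.g. averages from training."""
    f = features(signal)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))
```

    In the actual study the feature vector also included frequency-domain content, and training/testing on labeled takeoff and landing events yielded the reported 90-percent classification rates.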

  8. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
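    The data flow principle the abstract builds on is the token firing rule: an actor may fire only when a token is present on every input arc, consuming those tokens and producing tokens on its outputs. A naive software interpreter for this rule, with an invented graph encoding, might look like:

```python
# Minimal data flow interpreter sketch. An actor fires when every input
# arc holds a token; firing consumes the inputs and emits the result on
# each output arc. The graph encoding (node -> (inputs, fn, outputs))
# is an illustrative assumption, not a notation from the paper.

def run_dataflow(graph, tokens):
    """graph: node -> (input_arcs, fn, output_arcs); tokens: arc -> value."""
    fired = True
    while fired:                               # repeat until quiescent
        fired = False
        for node, (ins, fn, outs) in graph.items():
            if all(arc in tokens for arc in ins):
                args = [tokens.pop(arc) for arc in ins]   # consume tokens
                result = fn(*args)
                for arc in outs:
                    tokens[arc] = result                  # produce tokens
                fired = True
    return tokens
```

    Liveness and deadlock-freeness conditions of the kind derived in the paper ask, in these terms, whether every actor can eventually fire and whether the network can reach a state where no actor ever fires again.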

  9. Computing and cognition in future power-plant operations

    SciTech Connect

    Kisner, R.A.; Sheridan, T.B.

    1983-01-01

    The intent of this paper is to speculate on the nature of future interactions between people and computers in the operation of power plants. In particular, the authors offer a taxonomy for examining the differing functions of operators in interacting with the plant and its computers, and the differing functions of the computers in interacting with the plant and its operators.

  10. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  11. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew; Falkowski, Paul

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  12. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  13. Computational Aeroacoustic Analysis System Development

    NASA Technical Reports Server (NTRS)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed

  14. Open systems for plant process computers

    SciTech Connect

    Norris, D.L.; Pate, R.L.

    1995-03-01

    Arizona Public Service (APS) Company recently upgraded the Emergency Response Facility (ERF) computer at the Palo Verde Nuclear Generating Stations (PVNGS). The project was initiated to provide the ability to record and display plant data for later analysis of plant events and operational problems (one of the great oversights at nearly every nuclear plant constructed) and to resolve a commitment to correct performance problems on the display side of the system. A major objective for the project was to lay a foundation with ample capability and flexibility to provide solutions for future real-time data needs at the plants. The Halliburton NUS Corporation's Idaho Center (NUS) was selected to develop the system. Because of the constant changes occurring in the computer hardware and software industry, NUS designed and implemented a distributed Open Systems solution based on the UNIX Operating System. This Open System is highly portable across a variety of computer architectures and operating systems and is based on NUS's R*TIME(R), a mature software system successfully operating in 14 nuclear plants and over 80 fossil plants. Along with R*TIME, NUS developed two Man-Machine Interface (MMI) versions: R*TIME/WIN, a Microsoft Windows application designed for INTEL-based personal computers running either Microsoft's Windows 3.1 or Windows NT operating systems; and R*TIME/X, based on the standard X Window System utilizing the Motif Window Manager.

  15. Data security in medical computer systems.

    PubMed

    White, R

    1986-10-01

    A computer is secure if it works reliably and if problems that do arise can be corrected easily. The steps that can be taken to ensure hardware, software, procedural, physical, and legal security are outlined. Most computer systems are vulnerable because their operators do not have sufficient procedural safeguards in place.

  16. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  17. Quantitative computer simulations of extraterrestrial processing operations

    NASA Technical Reports Server (NTRS)

    Vincent, T. L.; Nikravesh, P. E.

    1989-01-01

    The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.

  18. Automating ATLAS Computing Operations using the Site Status Board

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Borrego Iglesias, C.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G. A.; Wright, M.

    2012-12-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses the SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The paper will describe how the SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in the SSB. It will demonstrate the positive impact of the use of the SSB on the overall performance of ATLAS computing activities and will overview future plans.
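    The usability calculation described, deriving a site's standing from the history of its monitoring metrics and excluding it when performance drops, can be sketched as below. The status names and the 90% threshold are illustrative assumptions, not the actual SSB policy.

```python
# Hedged sketch of an SSB-style calculation: a site's usability is the
# fraction of history samples in an acceptable state, and sites below a
# threshold are flagged for automatic exclusion from computing
# activities. Status labels and threshold value are invented here.

def usability(history, threshold=0.9):
    """history: list of (timestamp, status), status 'OK' or 'ERROR'.
    Returns (OK fraction, whether the site should be excluded)."""
    if not history:
        return 0.0, True                     # no data: treat as unusable
    ok = sum(1 for _, status in history if status == "OK")
    fraction = ok / len(history)
    return fraction, fraction < threshold
```

    In the real SSB, many such metrics (transfer efficiency, processing success rate) are aggregated per site, and the exclusion decision feeds the alarm system mentioned in the abstract.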

  19. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    ERIC Educational Resources Information Center

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  20. Computer Jet-Engine-Monitoring System

    NASA Technical Reports Server (NTRS)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  1. A multiprocessor operating system simulator

    SciTech Connect

    Johnston, G.M.; Campbell, R.H. (Dept. of Computer Science)

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the Choices family of operating systems for loosely and tightly coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  2. A Multiprocessor Operating System Simulator

    NASA Technical Reports Server (NTRS)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.
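    The co-routine task package both records describe can be approximated with generators: each task runs until it yields back to a scheduler, which round-robins the ready list. This is a sketch of the concept only, not of the simulator's C++ class hierarchy.

```python
# Sketch of a co-routine style round-robin scheduler, the kind of
# component students would refine in the simulator's homework
# assignments. Tasks are Python generators; each next() call runs a
# task to its next voluntary yield.

def scheduler(tasks):
    """Round-robin a list of generator-based tasks until all finish."""
    trace = []
    ready = list(tasks)
    while ready:
        task = ready.pop(0)                # take the next ready task
        try:
            trace.append(next(task))       # run to its next yield
            ready.append(task)             # still runnable: re-queue
        except StopIteration:
            pass                           # task finished
    return trace

def make_task(name, steps):
    def task():
        for i in range(steps):
            yield f"{name}:{i}"            # one quantum of "work"
    return task()
```

    Swapping the FIFO ready list for a priority queue is exactly the kind of policy change such a simulator lets students experiment with.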

  3. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
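    The first experiment set, an exception generator whose exit status is recorded per fault, can be sketched as a harness that feeds a target routine fault-triggering inputs and logs the outcome of each. The target function here is a stand-in for a system interface under test.

```python
# Sketch of an exception-generator harness: run a target routine over a
# battery of fault-triggering inputs and record whether each call
# succeeded or which exception it raised, analogous to recording the
# generator's exit status. The target is a placeholder for a syscall.

def probe(target, inputs):
    """Return a mapping input -> outcome ('ok' or exception class name)."""
    outcomes = {}
    for arg in inputs:
        try:
            target(arg)
            outcomes[arg] = "ok"
        except Exception as exc:
            outcomes[arg] = type(exc).__name__   # the recorded "exit status"
    return outcomes
```

    A full evaluation would also snapshot system state after each probe, since an unhandled fault that corrupts the system matters more than a clean error return.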

  4. SPECTR System Operational Test Report

    SciTech Connect

    W.H. Landman Jr.

    2011-08-01

    This report provides an overview of the installation of the Small Pressure Cycling Test Rig (SPECTR) and documents the system operational testing performed to demonstrate that it meets the requirements for operations. The system operational testing involved operating the furnace system at the design conditions and demonstrating the test article gas supply system using a simulated test article. The furnace and test article systems were demonstrated to meet the design requirements for the Next Generation Nuclear Plant. Therefore, the system is deemed acceptable and is ready for actual test article testing.

  5. Chaining direct memory access data transfer operations for compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2010-09-28

    Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, a RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
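    The chaining mechanism can be illustrated in software: a remote-get (RGET) descriptor carries further descriptors as payload, so completing one transfer injects the next into a DMA engine's injection FIFO without processor involvement. The descriptor fields below are heavily simplified relative to the patent.

```python
# Software sketch of chained DMA descriptors. A ('PUT', src, dst)
# descriptor copies data; an ('RGET', [descriptors]) descriptor injects
# its payload descriptors back into the FIFO, chaining the next
# transfers. Field layout is an illustrative simplification.
from collections import deque

def run_dma(fifo, memory):
    """Drain the injection FIFO, executing or chaining each descriptor."""
    log = []
    while fifo:
        desc = fifo.popleft()
        if desc[0] == "PUT":
            _, src, dst = desc
            memory[dst] = memory[src]          # perform the transfer
            log.append(f"PUT {src}->{dst}")
        elif desc[0] == "RGET":
            fifo.extend(desc[1])               # chain: queue next transfers
            log.append("RGET")
    return log
```

    The point of the technique is visible in the log: after the first injection, every subsequent transfer is triggered by a descriptor that arrived as DMA payload, not by the CPU.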

  6. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC (Goddard Mission Services Evolution Center) architecture, and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and it provides a framework for developing solutions with higher autonomic maturity.

  7. Transportation System Concept of Operations

    SciTech Connect

    N. Slater-Thompson

    2006-08-16

    The Nuclear Waste Policy Act of 1982 (NWPA), as amended, authorized the DOE to develop and manage a Federal system for the disposal of SNF and HLW. OCRWM was created to manage acceptance and disposal of SNF and HLW in a manner that protects public health, safety, and the environment; enhances national and energy security; and merits public confidence. This responsibility includes managing the transportation of SNF and HLW from origin sites to the Repository for disposal. The Transportation System Concept of Operations is the core high-level OCRWM document written to describe the Transportation System integrated design and present the vision, mission, and goals for Transportation System operations. By defining the functions, processes, and critical interfaces of this system early in the system development phase, programmatic risks are minimized, system costs are contained, and system operations are better managed, safer, and more secure. This document also facilitates discussions and understanding among parties responsible for the design, development, and operation of the Transportation System. Such understanding is important for the timely development of system requirements and identification of system interfaces. Information provided in the Transportation System Concept of Operations includes: the functions and key components of the Transportation System; system component interactions; flows of information within the system; the general operating sequences; and the internal and external factors affecting transportation operations. The Transportation System Concept of Operations reflects OCRWM's overall waste management system policies and mission objectives, and as such provides a description of the preferred state of system operation. 
The description of general Transportation System operating functions in the Transportation System Concept of Operations is the first step in the OCRWM systems engineering process, establishing the starting point for the lower level

  8. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and for monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia, and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing the mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  9. System monitors discrete computer inputs

    NASA Technical Reports Server (NTRS)

    Burns, J. J.

    1966-01-01

    Computer system monitors inputs from checkout devices. The comparing, addressing, and controlling functions are performed in the I/O unit. This leaves the computer main frame free to handle memory, access priority, and interrupt instructions.

  10. Threats to Computer Systems

    DTIC Science & Technology

    1973-03-01

    subjects and objects of attacks contribute to the uniqueness of computer-related crime. For example, as the cashless, checkless society approaches...advancing computer technology and security methods, and proliferation of computers in bringing about the paperless society. The universal use of...organizations do to society. Jerry Schneider, one of the known perpetrators, said that he was motivated to perform his acts to make money, for the

  11. Systemization of Secure Computation

    DTIC Science & Technology

    2015-11-01

    studied MPC paradigm. 15. SUBJECT TERMS Garbled Circuits, Secure Multiparty Computation, SMC, Multiparty Computation, MPC, Server-aided computation 16...that may well happen for non-trivial input sizes and algorithms. One way to allow mobile devices to perform 2P-SFE is to use a server-aided ...Previous cryptographic work in a 3-party model (also referred to as the commodity-based, server-assisted, server-aided model) seems to have originated in [1], with

  12. Central nervous system and computation.

    PubMed

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  13. Multiple operating system rotation environment moving target defense

    DOEpatents

    Evans, Nathaniel; Thompson, Michael

    2016-03-22

    Systems and methods for providing a multiple operating system rotation environment ("MORE") moving target defense ("MTD") computing system are described. The MORE-MTD system provides enhanced computer system security through a rotation of multiple operating systems. The MORE-MTD system increases attacker uncertainty, increases the cost of attacking the system, reduces the likelihood of an attacker locating a vulnerability, and reduces the exposure time of any located vulnerability. The MORE-MTD environment is effectuated by rotation of the operating systems at a given interval. The rotating operating systems create a consistently changing attack surface for remote attackers.
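
    Reduced to its essence, the rotation schedule amounts to indexing a pool of operating systems by elapsed time. The sketch below is a minimal illustration; the pool contents and interval are made up:

```python
# At a fixed interval, the exposed operating system advances through the
# pool, so a remote attacker never faces the same attack surface for long.

def live_os(pool, interval_s, elapsed_s):
    """Return which OS in the rotation pool is exposed at a given time."""
    return pool[(elapsed_s // interval_s) % len(pool)]

pool = ["linux-a", "linux-b", "bsd", "solaris"]  # hypothetical pool
print(live_os(pool, 300, 0))     # first 5-minute slot
print(live_os(pool, 300, 900))   # fourth slot
print(live_os(pool, 300, 1200))  # wraps back to the first OS
```

    Shortening the interval shrinks the exposure window of any located vulnerability, at the cost of more frequent rotation overhead.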

  14. Operating System Abstraction Layer (OSAL)

    NASA Technical Reports Server (NTRS)

    Yanchik, Nicholas J.

    2007-01-01

    This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small layer of software that allows programs to run on many different operating systems and hardware platforms; it runs independently of the underlying OS and hardware and is self-contained. The OSAL removes dependencies on any one operating system and promotes portable, reusable flight software, allowing Core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality and the various OSAL releases, and describes the specifications.

  15. Data-Structuring Operations in Concurrent Computations.

    DTIC Science & Technology

    1979-10-01

    (plWl)p(p2, 2). Every computation a in a job J is a prefix of a halted computation which is a permutation of a canonical computation w in J. The approach...standard state S', and halted firing sequence g starting in S, TI(S',2) is not necessarily SOE-inclusive of r(S,2). This is because even though 2 is a

  16. Automated Computer Access Request System

    NASA Technical Reports Server (NTRS)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  17. Port Operational Marine Observing System

    NASA Astrophysics Data System (ADS)

    Palazov, A.; Stefanov, A.; Slabakova, V.; Marinova, V.

    2009-04-01

    The Port Operational Marine Observing System (POMOS) is a network of distributed sensors and a centralized data collecting, processing, and distributing unit. The system is designed to allow real-time assessment of weather and marine conditions throughout the major Bulgarian ports: Varna, Burgas, and Balchik, thereby supporting the Maritime administration in securing safe navigation in bays, canals, and ports. Real-time information within harbors is obtained using various sensors placed at thirteen strategic locations to monitor the current state of the environment. The weather and sea-state parameters most important for navigation are measured: wind speed and direction, air temperature, relative humidity, atmospheric pressure, visibility, solar radiation, water temperature and salinity, sea level, current speed and direction, and mean wave parameters. The system consists of: 11 weather stations (3 with additional solar radiation and 4 with additional visibility measurement), 9 water temperature and salinity sensors, 9 sea-level stations, two sea current and wave stations, and two canal current stations. All sensors are connected to a communication system which provides direct intranet access to the instruments. Every 15 minutes, measured data are transmitted in real time to the central collecting system, where the data are collected, processed, and stored in a database. The database is triply secured to prevent data losses, and the data collection system is doubly secured. The measuring system is secured against short power failures and instability. Special software is designed to collect, store, process, and present environmental data and information on different user-friendly screens. Access to data and information is through internet/intranet with the help of browsers. Actual data from all measurements, or from a separate measuring place, can be displayed on computer screens, as well as data for the last 24 hours. Historical data are available using a report server for extracting data for selectable

  18. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  19. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.

  20. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work on a project to design an automatic system for computer program documentation aids was done to determine which existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.

  1. Computer monitors and controls all truck-shovel operations

    SciTech Connect

    Chironis, N.P.

    1985-03-01

    The intense competition in the coal industry and the advances in computer technology have led several large mines to consider computer dispatching systems as a means of optimizing production. Quintette Coal, Ltd., of Vancouver, B.C., has engaged Modular Mining Systems, Inc., of Tucson, to install a comprehensive truck-dispatch system at a new, multiseam mine northeast of Vancouver. This open-pit operation will rely on truck-shovel teams to uncover both steam and metallurgical coal. The mine is already using about 12 shovels and 50 trucks to produce 3 million tpy. By 1986, production will hit 5 million tpy of metallurgical coal and 1.3 million tpy of steam coal. The coal is under contract to be shipped to Japan. Denison Mines Ltd. owns 50% of Quintette Coal. Of the other 14 shareholders, 10 are Japanese steel companies. Although about 10 non-coal mines worldwide are using some form of computer-controlled dispatching system, Quintette is the first coal company to do so, and western US mines are reportedly studying the Quintette system carefully.

  2. GPU computing for systems biology.

    PubMed

    Dematté, Lorenzo; Prandi, Davide

    2010-05-01

    The development of detailed, coherent models of complex biological systems is recognized as a key requirement for integrating the increasing amount of experimental data. In addition, in-silico simulation of biochemical models provides an easy way to test different experimental conditions, helping in the discovery of the dynamics that regulate biological systems. However, the computational power required by these simulations often exceeds that available on common desktop computers, and thus expensive high-performance computing solutions are required. An emerging alternative is general-purpose scientific computing on graphics processing units (GPGPU), which offers the power of a small computer cluster at a cost of approximately $400. Computing with a GPU requires the development of specific algorithms, since the programming paradigm differs substantially from traditional CPU-based computing. In this paper, we review some recent efforts in exploiting the processing power of GPUs for the simulation of biological systems.

  3. Evaluating operating system vulnerability to memory errors.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  4. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
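
    With summation as the reduction operator, the two-phase scheme described above can be simulated in a few lines; the node/core layout is illustrative, and real implementations pass data around each ring rather than summing centrally:

```python
# Phase 1: one logical ring per core index, each ring containing that core
# from every node; allreduce within each ring. Phase 2: each node locally
# combines its cores' ring results.

def allreduce(contrib):
    """contrib[node][core] -> per-core contribution; returns the final sum."""
    n_nodes = len(contrib)
    n_cores = len(contrib[0])
    ring_result = [sum(contrib[node][core] for node in range(n_nodes))
                   for core in range(n_cores)]   # global allreduce per ring
    return sum(ring_result)                      # local allreduce per node

contrib = [[1, 2], [3, 4], [5, 6]]   # 3 nodes x 2 cores
print(allreduce(contrib))            # 21, matching a direct global sum
```

    Splitting the work across per-core rings lets every processing core participate in the reduction instead of leaving a single core per node to do all the communication.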

  5. A COMPUTERIZED OPERATOR SUPPORT SYSTEM PROTOTYPE

    SciTech Connect

    Thomas A. Ulrich; Roger Lew; Ronald L. Boring; Ken Thomas

    2015-03-01

    A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. A prototype COSS was developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based on four underlying elements consisting of a digital alarm system, computer-based procedures, piping and instrumentation diagram system representations, and a recommender module for mitigation actions. The initial version of the prototype is now operational at the Idaho National Laboratory using the Human System Simulation Laboratory.

  6. GODAE Systems in Operation

    DTIC Science & Technology

    2009-10-09

    NMEFC, (11) RTOFS and (12) TOPAZ . 15. SUBJECT TERMS GODAE, NLOM, ocean data assimilation systems 16. SECURITY CLASSIFICATION OF: a. REPORT...NCEP + dim RTOFS HYCOM North and Tropical Atlantic (> 25°S) 4-18 km 26 hybrid layers NCEP 3-hourly TOPAZ HYCOM Atlantic and Arctic 11-16 km 22...Along-track Jason + Envisat T/S profiles TOPAZ Ensemble Kalman filter 100 members Reynolds SST SLA MAPS from Jason + Envisat + GFO from Ssalto

  7. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1973-01-01

    The TENEX computer system, the ARPA network, and computer language design technology were applied to support the complex system programs. By combining the pragmatic and theoretical aspects of robot development, an approach is created which is grounded in realism, but which also has at its disposal the power that comes from looking at complex problems from an abstract analytical point of view.

  8. Network operating system focus technology

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An activity structured to provide specific design requirements and specifications for the Space Station Data Management System (DMS) Network Operating System (NOS) is outlined. Examples are given of the types of supporting studies and implementation tasks presently underway to realize a DMS test bed capability to develop hands-on understanding of NOS requirements as driven by actual subsystem test beds participating in the overall Johnson Space Center test bed program. Classical operating system elements and principal NOS functions are listed.

  9. Computer-Supported Co-operative Learning.

    ERIC Educational Resources Information Center

    Florea, Adina Magda

    1998-01-01

    Discusses the impact of computer-supported cooperative work tools in the creation of educational environments and the facilities such tools bring to teaching methods, and examines the relationship between new techniques and the learner-centered, active learning approach in higher education. The importance of collaborative learning in this context…

  10. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

    The OPMILL program is an operating system for a Kearney and Trecker milling machine, providing a fast, easy way to program the manufacture of machine parts with an IBM-compatible personal computer. It gives the machinist an "equation plotter" feature, which plots equations that define movements and converts the equations to a milling-machine-controlling program that moves the cutter along the defined path. The system includes tool-manager software that handles up to 25 tools and automatically adjusts to account for each tool. OPMILL was developed on an IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
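
    Reduced to its essence, the "equation plotter" idea is to sample a curve defined by an equation and emit straight-line cutter moves that approximate it. The sketch below is illustrative only and does not reproduce OPMILL's actual output format:

```python
import math

def toolpath(f, x0, x1, steps):
    """Sample y = f(x) at evenly spaced points, yielding linear-move targets."""
    dx = (x1 - x0) / steps
    return [(round(x0 + i * dx, 3), round(f(x0 + i * dx), 3))
            for i in range(steps + 1)]

# Cut one arch of a sine curve as a sequence of G-code-style linear moves.
for x, y in toolpath(math.sin, 0.0, math.pi, 4):
    print(f"G01 X{x} Y{y}")   # G01 = linear interpolation move
```

    Increasing `steps` trades program length for a smoother approximation of the defined path.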

  11. Redefining Tactical Operations for MER Using Cloud Computing

    NASA Technical Reports Server (NTRS)

    Joswig, Joseph C.; Shams, Khawaja S.

    2011-01-01

    The Mars Exploration Rover Mission (MER) includes the twin rovers, Spirit and Opportunity, which have been performing geological research and surface exploration since early 2004. The rovers' durability well beyond their original prime mission (90 sols, or Martian days) has allowed them to be a valuable platform for scientific research for well over 2000 sols, but as a by-product it has produced new challenges in providing efficient and cost-effective tactical operational planning. An early process adaptation was the move to distributed operations as mission scientists returned to their places of work in the summer of 2004, but they still came together via teleconference and connected software to plan rover activities a few times a week. This distributed model has worked well since, but it requires the purchase, operation, and maintenance of a dedicated infrastructure at the Jet Propulsion Laboratory. This server infrastructure is costly to operate, and the periodic nature of its usage (typically heavy usage for 8 hours every 2 days) has made moving to a cloud-based tactical infrastructure an extremely tempting proposition. In this paper we review both past and current implementations of the tactical planning application, focusing on remote plan saving, and discuss the unique challenges presented by long-latency, distributed operations. We then detail the motivations behind our move to cloud-based computing services as well as our system design and implementation. We also discuss security and reliability concerns and how they were addressed.

  12. Executing a gather operation on a parallel computer

    DOEpatents

    Archer, Charles J [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2012-03-20

    Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer on the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data; if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
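
    The trick described above is that OR-ing a node's data against zeros from every other node recovers that node's data exactly. A toy model (names illustrative, with the combining network replaced by a plain loop):

```python
# For each result-buffer position, only the matching-rank node contributes
# its data; every other node contributes zero; the root ORs everything.

def gather(node_data):
    ranks = range(len(node_data))
    result = []
    for position in ranks:
        contributions = [node_data[r] if r == position else 0 for r in ranks]
        combined = 0
        for c in contributions:   # stands in for the combining network's OR
            combined |= c
        result.append(combined)
    return result

data = [0b001, 0b110, 0b101]      # one value per ranked node
print(gather(data))               # [1, 6, 5]
```

    Because x | 0 == x, the OR reduction over the combining network doubles as a selective copy, letting the gather reuse hardware built for reductions.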

  13. An Overview of Two Recent Surveys of Administrative Computer Operations in Higher Education.

    ERIC Educational Resources Information Center

    Mann, Richard L.; And Others

    This document summarizes the results of two surveys about the current administrative uses of computers in higher education. Included in the document is: (1) a brief history of the development of computer operational and management information systems in higher education; (2) information on how computers are currently being used to support…

  14. Operator versus computer control of adaptive automation

    NASA Technical Reports Server (NTRS)

    Hilburn, Brian; Molloy, Robert; Wong, Dick; Parasuraman, Raja

    1993-01-01

    Adaptive automation refers to real-time allocation of functions between the human operator and automated subsystems. The article reports the results of a series of experiments whose aim is to examine the effects of adaptive automation on operator performance during multi-task flight simulation, and to provide an empirical basis for evaluations of different forms of adaptive logic. The combined results of these studies suggest several things. First, it appears that either excessively long, or excessively short, adaptation cycles can limit the effectiveness of adaptive automation in enhancing operator performance of both primary flight and monitoring tasks. Second, occasional brief reversions to manual control can counter some of the monitoring inefficiency typically associated with long cycle automation, and further, that benefits of such reversions can be sustained for some time after return to automated control. Third, no evidence was found that the benefits of such reversions depend on the adaptive logic by which long-cycle adaptive switches are triggered.

  15. Selecting a Cable System Operator.

    ERIC Educational Resources Information Center

    Cable Television Information Center, Washington, DC.

    Intended to assist franchising authorities with the process of selecting a cable television system operator from franchise applicants, this document provides a framework for analysis of individual applications. Section 1 deals with various methods which can be used to select an operator. The next section covers the application form, the vehicle a…

  16. Determining collective barrier operation skew in a parallel computer

    SciTech Connect

    Faraj, Daniel A.

    2015-11-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
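
    Once a barrier completion time has been measured for each delayed node, the skew calculation itself is a one-liner; the times below are made up for illustration:

```python
def barrier_skew(completion_times):
    """Skew = difference between the slowest and fastest barrier completion."""
    return max(completion_times) - min(completion_times)

# Hypothetical per-delayed-node barrier completion times (nanoseconds):
times = [1015, 1032, 1009, 1040]
print(barrier_skew(times))   # 31
```

    Delaying each node in turn ensures that the measured maximum reflects the slowest exit path through the barrier, not just whichever node happened to lag on one run.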

  17. Computer Network Attack: An Operational Tool?

    DTIC Science & Technology

    2007-11-02

    Spectrum of Conflict, Cyber Warfare, Preemptive Strike, Effects Based Targeting. 15. Abstract: Computer Network Attack (CNA) is defined as...great deal of attention as the world's capabilities in cyber-warfare grow. Although addressing the wide-ranging legal aspects of CNA is beyond the...the notion of cyber-warfare has not yet developed to the point that international norms have been established. These norms will be developed in

  18. The UCLA MEDLARS computer system.

    PubMed

    Garvis, F J

    1966-01-01

    Under a subcontract with UCLA, the Planning Research Corporation has changed the MEDLARS system to make it possible to use the IBM 7094/7040 direct-couple computer instead of the Honeywell 800 for demand searches. The major tasks were the rewriting of the programs in COBOL and the copying of the stored information onto the narrower tapes that IBM computers require. (In the future, NLM will copy the tapes for IBM computer users.) The differences in the software required by the two computers are noted. Major and costly revisions would be needed to adapt the large MEDLARS system to the smaller IBM 1401 and 1410 computers. In general, MEDLARS is transferable to other computers of the IBM 7000 class, the new IBM 360, and those of like size, such as the CDC 1604 or UNIVAC 1108, although additional changes are necessary. Potential future improvements are suggested.

  19. Refurbishment program of HANARO control computer system

    SciTech Connect

    Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S.

    2012-07-01

    HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller), manufactured by MOORE, has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts could have caused great problems. The first consideration of replacing the control computer dates back to 2007. The supplier no longer produced MLC components, so support for the system could not be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system, HCCS (HANARO Control Computer System), to replace MLC. The refurbishment activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system with suitable interfaces, installation and commissioning without a special outage, and no change to the well-proven operation philosophy. HCCS is a DCS (Discrete Control System) using PLCs manufactured by RTP. To enhance reliability, we adopt a triple-processor system, a double I/O system, and a hot-swapping function. This paper describes the refurbishment program of the HANARO control system, including the design requirements of HCCS. (authors)

  20. Computer Automated Ultrasonic Inspection System

    DTIC Science & Technology

    1985-02-06

    Microcomputer CRT Cathode Ray Tube SBC Single Board Computer xiii 1.0 INTRODUCTION 1.1 Background Standard ultrasonic inspection techniques used in industry...30 Microcomputer The heart of the bridge control microcomputer is an Intel single board computer using a high-speed 8085 HA-2 microprocessor chip...subsystems (bridge, bridge drive electronics, bridge control microcomputer, ultrasonic unit, and master computer system), development of bridge control and

  1. NASA's computed tomography system

    NASA Astrophysics Data System (ADS)

    Engel, H. Peter

    1989-03-01

    The computerized industrial tomographic analyzer (CITA) is designed to examine the internal structure and material integrity of a wide variety of aerospace-related objects, particularly in the NASA space program. The nondestructive examination is performed by producing a two-dimensional picture of a selected slice through an object. The penetrating sources that yield data for reconstructing the slice picture are radioactive cobalt or a high-power X-ray tube. A series of pictures and computed tomograms are presented which illustrate a few of the applications the CITA has been used for since its August 1986 initial service at the Kennedy Space Center.

  2. Lewis hybrid computing system, users manual

    NASA Technical Reports Server (NTRS)

    Bruton, W. M.; Cwynar, D. S.

    1979-01-01

    The Lewis Research Center's Hybrid Simulation Lab contains a collection of analog, digital, and hybrid (combined analog and digital) computing equipment suitable for the dynamic simulation and analysis of complex systems. This report is intended as a guide for users of these computing systems. The report describes the available equipment and outlines procedures for its use. Particular attention is given to the operation of the PACER 100 digital processor. System software to accomplish the usual digital tasks, such as compiling and editing, as well as Lewis-developed special-purpose software, is described.

  3. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    SciTech Connect

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.; Gary, Jeff D.; Hack, James J.; McNally, Stephen T.; Rogers, James H.; Smith, Brian E.; Straatsma, T. P.; Sukumar, Sreenivas Rangan; Thach, Kevin G.; Tichenor, Suzy; Vazhkudai, Sudharshan S.; Wells, Jack C.

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  4. Students "Hacking" School Computer Systems

    ERIC Educational Resources Information Center

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  5. Computation and design of autonomous intelligent systems

    NASA Astrophysics Data System (ADS)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  6. Computer controlled antenna system

    NASA Technical Reports Server (NTRS)

    Raumann, N. A.

    1972-01-01

    Digital techniques are discussed for application to the servo and control systems of large antennas. The tracking loop for an antenna at a STADAN tracking site is illustrated. The augmentation mode is also considered.

  7. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  8. User computer system pilot project

    SciTech Connect

    Eimutis, E.C.

    1989-09-06

    The User Computer System (UCS) is a general purpose unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

  9. Redesigning the District Operating System

    ERIC Educational Resources Information Center

    Hodas, Steven

    2015-01-01

    In this paper, we look at the inner workings of a school district through the lens of the "district operating system (DOS)," a set of interlocking mutually-reinforcing modules that includes functions like procurement, contracting, data and IT policy, the general counsel's office, human resources, and the systems for employee and family…

  10. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  11. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. 
This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  12. UFO (UnFold Operator) computer program abstract

    SciTech Connect

    Kissel, L.; Biggs, F.

    1982-11-01

    UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

  13. Operator's manual for the multistation offgas analysis system

    SciTech Connect

    Hayes, A.B.; Basford, J.A.

    1994-10-10

    The Multistation Offgas Analysis System (MOAS) is a fully automated instrument that can independently measure the gases emitted by up to four samples in containers heated in ovens. A mass spectrometer is used to accurately determine their flow rates. There are six interconnected high-vacuum pumping stations: one for each of the four sample containers, one for the mass spectrometer, and one for the calibrated leaks that are used to calibrate the mass spectrometer. Quadstar 421 (trademark) is the software package marketed by Balzers, the manufacturer of the mass spectrometer. The software used by MOAS is a combination of Quadstar 421, special routines (called sequences in the Balzers nomenclature), and compiled programs that control MOAS. Tests are run repeatedly on each of the four oven stations, while stations that are not ready or do not have a sample are skipped. While the computer is idle between tests, the software monitors the vacuum system and, if necessary, shuts down a pumping station that is not operating correctly. The status of pumping stations and tests, filenames for data, and oven temperatures are stored on disk, so the software can recover from a power failure. The operator can use the software to start or stop testing, load parts, perform calibrations as necessary, and start up or shut down pumping stations. The software aids in routine functions, such as changing an ion gauge, by performing the required valve actuations automatically and checking the pressure readings and turbopump speeds as necessary. The computer takes mass spectrometer sensitivity calibration data automatically every Sunday at 12:00 a.m.; an operator does not need to be present for these calibrations. The calibration data are saved and may be used or deleted by the operator at a later date.

  14. Operations Monitoring Assistant System Design

    DTIC Science & Technology

    1986-07-01

    Logic. Artificial Intelligence 25(1):75-94. January. ...Nils J. Nilsson. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill Book...operations monitoring assistant (OMA) system is designed that combines operations research, artificial intelligence, and human reasoning techniques and...KnowledgeCraft (from Carnegie Group), and S.1 (from Teknowledge). These tools incorporate the best methods of applied artificial intelligence, and

  15. Telemetry Computer System at Wallops Flight Center

    NASA Technical Reports Server (NTRS)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.
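The optional redundancy removal performed by the data compressor can be sketched as dropping samples whose value is unchanged for a given telemetry parameter, keeping only transitions. A hedged illustration; the parameter names and values are invented, and the real system operates on merged PCM/FM streams:

```python
# Drop a (parameter, value) sample when the value matches the last value
# seen for that parameter; only changed values pass through the compressor.

def compress(samples):
    """Return only the samples whose value changed for their parameter."""
    last = {}   # last value seen per parameter
    kept = []
    for param, value in samples:
        if last.get(param) != value:
            kept.append((param, value))
            last[param] = value
    return kept

stream = [("alt", 100), ("alt", 100), ("alt", 101), ("spd", 5), ("spd", 5)]
print(compress(stream))  # [('alt', 100), ('alt', 101), ('spd', 5)]
```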

  16. Design and Effectiveness of Intelligent Tutors for Operators of Complex Dynamic Systems: A Tutor Implementation for Satellite System Operators.

    ERIC Educational Resources Information Center

    Mitchell, Christine M.; Govindaraj, T.

    1990-01-01

    Discusses the use of intelligent tutoring systems as opposed to traditional on-the-job training for training operators of complex dynamic systems and describes the computer architecture for a system for operators of a NASA (National Aeronautics and Space Administration) satellite control system. An experimental evaluation with college students is…

  17. WEBtop (Operating Systems on Web)

    NASA Astrophysics Data System (ADS)

    Sharma, M. K.; Kumar, Rajeev

    2011-12-01

    WebOS (web-based operating system) is a new form of operating system: the desktop becomes a virtual desktop on the web, accessible via a browser, with multiple integrated built-in applications that allow users to easily manage and organize their data from any location. A desktop on the web can be called a WEBtop. This paper starts with an introduction to WebOS and its benefits. We review some of the most interesting WebOS platforms available today and provide a detailed description of their features, identifying a set of parameters as comparison criteria among them. A technical review, together with a research design and future goals for building better web-based operating systems, is also part of this study. The paper closes with the findings of the study.

  18. Cronus, A Distributed Operating System: Functional Definition and System Concept.

    DTIC Science & Technology

    1984-02-01

    Digital Equipment Corporation VAX computer running the VMS operating system) which are enhanced and/or modified to integrate the host into the DOS. Thus...Interprocessor communication of data. - Multi-level data security. With the exception of multi-level ... the scope of this project ... areas. The first two

  19. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    DTIC Science & Technology

    1986-01-01

    Research Note 86-06. THE COMPUTER-AIDED ANALYTIC PROCESS MODEL: OPERATIONS HANDBOOK FOR THE ANALYTIC PROCESS MODEL DEMONSTRATION PACKAGE. Ronald G...ic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle; BIFV...item 20. Abstract - continued: companion volume -- "The Analytic Process Model for

  20. Computer vision applied to vehicle operation

    SciTech Connect

    Metzler, H.G.

    1988-01-01

    Among the many concerns of car development (safety, economy, environmental benefits, and convenience), safety should have a high priority. One of the main goals is the reduction of the number of accidents. Environment and situation recognition by autonomous vehicle-electronic systems can contribute to the recognition of problems, together with information to the driver or direct intervention in the car's behaviour. This paper describes some techniques for environment recognition, the status of a present project, and the goals of some PROMETHEUS (Program for a European Traffic with Highest Efficiency and Unprecedented Safety) projects.

  1. Advanced Transport Operating Systems Program

    NASA Technical Reports Server (NTRS)

    White, John J.

    1990-01-01

    NASA-Langley's Advanced Transport Operating Systems Program employs a heavily instrumented, B 737-100 as its Transport Systems Research Vehicle (TRSV). The TRSV has been used during the demonstration trials of the Time Reference Scanning Beam Microwave Landing System (TRSB MLS), the '4D flight-management' concept, ATC data links, and airborne windshear sensors. The credibility obtainable from successful flight test experiments is often a critical factor in the granting of substantial commitments for commercial implementation by the FAA and industry. In the case of the TRSB MLS, flight test demonstrations were decisive to its selection as the standard landing system by the ICAO.

  2. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  3. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  4. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  5. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  6. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithm information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  7. Monitoring data transfer latency in CMS computing operations

    SciTech Connect

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; Sartirana, A.; Taze, Meric; Wildish, Tony

    2015-12-23

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.
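The stuck-subscription alerting described above might look like the following minimal sketch. This is not the actual PhEDEx code; the `TransferState` record and the 6-hour stall threshold are illustrative assumptions:

```python
# Flag a transfer subscription as "stuck" when it is incomplete and no file
# has completed within a stall threshold, prompting manual intervention.

from dataclasses import dataclass

@dataclass
class TransferState:
    files_done: int
    files_total: int
    seconds_since_last_completion: float

def is_stuck(state, stall_threshold_s=6 * 3600):
    """Incomplete transfer with no file completions within the threshold."""
    incomplete = state.files_done < state.files_total
    return incomplete and state.seconds_since_last_completion > stall_threshold_s

print(is_stuck(TransferState(998, 1000, 48 * 3600)))   # True: stalled tail of files
print(is_stuck(TransferState(1000, 1000, 48 * 3600)))  # False: already complete
```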

  8. Monitoring data transfer latency in CMS computing operations

    DOE PAGES

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; ...

    2015-12-23

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.

  9. The Advanced Technology Operations System: ATOS

    NASA Technical Reports Server (NTRS)

    Kaufeler, J.-F.; Laue, H. A.; Poulter, K.; Smith, H.

    1993-01-01

    Mission control systems supporting new space missions face ever-increasing requirements in terms of functionality, performance, reliability and efficiency. Modern data processing technology is providing the means to meet these requirements in new systems under development. During the past few years the European Space Operations Centre (ESOC) of the European Space Agency (ESA) has carried out a number of projects to demonstrate the feasibility of using advanced software technology, in particular, knowledge based systems, to support mission operations. A number of advances must be achieved before these techniques can be moved towards operational use in future missions, namely, integration of the applications into a single system framework and generalization of the applications so that they are mission independent. In order to achieve this goal, ESA initiated the Advanced Technology Operations System (ATOS) program, which will develop the infrastructure to support advanced software technology in mission operations, and provide applications modules to initially support: Mission Preparation, Mission Planning, Computer Assisted Operations, and Advanced Training. The first phase of the ATOS program is tasked with the goal of designing and prototyping the necessary system infrastructure to support the rest of the program. The major components of the ATOS architecture are presented. This architecture relies on the concept of a Mission Information Base (MIB) as the repository for all information and knowledge that will be used by the advanced application modules in future mission control systems. The MIB is being designed to exploit the latest in database and knowledge representation technology in an open and distributed system. In conclusion, the technological and implementation challenges expected to be encountered, as well as the future plans and time scale of the project, are presented.

  10. Adaptive Fuzzy Systems in Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1996-01-01

In recent years, interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly, and a number of their applications have been developed in government and industry. In the future, an essential element in these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.
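
The rule-based core of a fuzzy controller in the GARIC family can be sketched in a few lines. This is a minimal illustration of fuzzy inference with triangular membership functions and centroid defuzzification, not GARIC itself; the membership ranges and rule outputs are invented for a cart-pole-style balancing task.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_force(angle):
    """Map a pole angle (radians) to a corrective force using two rules:
    IF angle is NegativeSmall THEN push left (-10);
    IF angle is PositiveSmall THEN push right (+10)."""
    neg = tri(angle, -0.4, -0.2, 0.0)   # membership in "negative small"
    pos = tri(angle, 0.0, 0.2, 0.4)     # membership in "positive small"
    # Centroid (weighted-average) defuzzification over the rule outputs.
    num = neg * (-10.0) + pos * (+10.0)
    den = neg + pos
    return num / den if den else 0.0
```

In a learning system the membership parameters and rule outputs would be tuned by the neural component rather than fixed as here.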

  11. Method and apparatus of parallel computing with simultaneously operating stream prefetching and list prefetching engines

    SciTech Connect

    Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan

    2012-12-11

A prefetch system improves the performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine, and it operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine and operates both engines to prefetch data that will be needed by the processor in subsequent clock cycles in response to the passed command.
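
A rough sketch of the two engine types may clarify the division of labor. Nothing below comes from the patent; the class names, stride-detection heuristic, and prefetch depths are invented to illustrate how a stream engine (regular strides) and a list engine (a recorded irregular address sequence) can watch the same demand stream simultaneously.

```python
class StreamPrefetcher:
    """Detects a constant stride in the demand-address stream and
    prefetches the next few addresses along that stride."""
    def __init__(self, depth=2):
        self.last = None      # previous demand address
        self.stride = None    # last observed stride
        self.depth = depth    # how many addresses to prefetch ahead
    def access(self, addr):
        prefetches = []
        if self.last is not None:
            stride = addr - self.last
            if stride == self.stride and stride != 0:
                # Stride confirmed twice: prefetch ahead along it.
                prefetches = [addr + stride * i for i in range(1, self.depth + 1)]
            self.stride = stride
        self.last = addr
        return prefetches

class ListPrefetcher:
    """Replays a previously recorded address list (e.g. from an earlier
    iteration of the same loop), staying a few entries ahead of demand."""
    def __init__(self, recorded, ahead=2):
        self.recorded = recorded
        self.pos = 0
        self.ahead = ahead
    def access(self, addr):
        # Advance past the matching demand address, then issue the next
        # `ahead` recorded addresses as prefetch hints.
        if self.pos < len(self.recorded) and self.recorded[self.pos] == addr:
            self.pos += 1
        return self.recorded[self.pos:self.pos + self.ahead]

# Both engines see every demand access and run simultaneously.
stream, lst = StreamPrefetcher(), ListPrefetcher([10, 50, 20, 80])
for a in [100, 108, 116]:                 # a strided demand stream
    hints = stream.access(a) + lst.access(a)
```

The real hardware issues these hints to the memory system in parallel with demand fetches; here they are just returned for inspection.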

  12. EOS Operations Systems: EDOS Implemented Changes to Reduce Operations Costs

    NASA Technical Reports Server (NTRS)

    Cordier, Guy R.; Gomez-Rosa, Carlos; McLemore, Bruce D.

    2007-01-01

The authors describe in this paper the progress achieved to date with the reengineering of the Earth Observing System (EOS) Data and Operations System (EDOS), the experience gained in the process, and the ensuing reduction of ground systems operations costs. The reengineering effort included a major methodology change: applying a data-driven approach to an existing schedule-driven system.

  13. Basic Operational Robotics Instructional System

    NASA Technical Reports Server (NTRS)

    Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John

    2013-01-01

The Basic Operational Robotics Instructional System (BORIS) is a simulation of a six-degree-of-freedom rotational robotic manipulator, with in-line shoulder, offset elbow, and offset wrist, used for training in fundamental robotics concepts. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.
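
The forward-kinematics idea behind such a trainer can be illustrated with a planar chain. A real 6-DOF arm like BORIS composes 4x4 homogeneous transforms, but the chaining principle is the same; this toy example is not BORIS code.

```python
import math

def forward_kinematics(thetas, lengths):
    """Planar forward kinematics: accumulate each joint angle along the
    chain and sum the link vectors to get the end-effector (x, y)."""
    x = y = 0.0
    angle = 0.0
    for theta, L in zip(thetas, lengths):
        angle += theta                  # joint angles compose along the chain
        x += L * math.cos(angle)
        y += L * math.sin(angle)
    return x, y
```

Inverse kinematics runs the other way, solving for the joint angles that place the end-effector at a desired pose, and generally has multiple or no solutions.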

  14. SIRTF Science Operations System Design

    NASA Technical Reports Server (NTRS)

    Green, William

    1999-01-01

The Space Infrared Telescope Facility (SIRTF) will be launched in December 2001, and perform an extended series of science observations at wavelengths ranging from 20 to 160 microns for five years or more. The California Institute of Technology has been selected as the home for the SIRTF Science Center (SSC). The SSC will be responsible for evaluating and selecting observation proposals, providing technical support to the science community, performing mission planning and science observation scheduling activities, instrument calibration during operations and instrument health monitoring, production of archival quality data products, and management of science research grants. The science payload consists of three instruments delivered by instrument Principal Investigators located at University of Arizona, Cornell, and Harvard Smithsonian Astrophysical Observatory. The SSC is responsible for design, development, and operation of the Science Operations System (SOS) which will support the functions assigned to the SSC by NASA. The SIRTF spacecraft, mission profile, and science instrument design have undergone almost ten years of refinement. SIRTF development and operations activities are highly cost constrained. The cost constraints have impacted the design of the SOS in several ways. The Science Operations System has been designed to incorporate a set of efficient, easy to use tools which will make it possible for scientists to propose observation sequences in a rapid and automated manner. The use of highly automated tools for requesting observations will simplify the long range observatory scheduling process, and the short term scheduling of science observations. Pipeline data processing will be highly automated and data

  15. Chromosome breakage and sister chromatid exchange analysis in computer operators

    SciTech Connect

    Butler, M.G.; Yost, J.; Jenkins, B.B.

    1987-01-01

    Chromosome breakage analysis with Mitomycin C (MMC) and sister chromatid exchanges (SCE) were obtained on 10 computer operators with computer exposure for a minimum of 3 hours per day for 4 years and 10 control subjects matched for age and personal lifestyle. No difference was found between the two groups in the total number of chromatid and chromosome aberrations in cells grown at 48 and/or 96 hours in Mitomycin C (20 or 50 ng/ml-final concentration). The average number of SCE per cell in approximately 30 cells from each person was 6.4 +/- 1.1 (mean +/- standard deviation) for the computer operators and 9.2 +/- 1.6 for the controls. This difference was significant (p < .001). The replicative index was significantly higher (p < .01) in computer operators than in control subjects. The number of SCE appeared not to be influenced by the years of computer exposure. Additional studies with larger sample sizes will be needed to identify if significant differences exist in cell kinetics and sister chromatid exchanges in individuals employed as computer operators.

  16. Models and Measurements of Parallelism for a Distributed Computer System.

    DTIC Science & Technology

    1982-01-01

that parallel execution of the processes comprising an application program will defray the overhead costs of distributed computing. This... of Different Approaches to Distributed Computing", Proceedings of the 1st International Conference on Distributed Computer Systems, Huntsville, AL (Oct. 1-5, 1979), pp. 222-232. [20] Liskov, B., "Primitives for Distributed Computing", Proceedings of the 7th Symposium on Operating System

  17. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy to implement, effective, and reliable hourly building load prediction algorithm.
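
The core cost comparison in such an operational strategy can be sketched as follows. The function and all efficiencies and prices are illustrative assumptions, not values from the dissertation: for one hour, it decides whether running the power generation unit (PGU) and recovering its waste heat beats buying grid electricity and firing a boiler.

```python
def pgu_should_run(e_demand_kwh, h_demand_kwh,
                   grid_price, fuel_price,
                   eta_e=0.30, eta_rec=0.45, eta_boiler=0.85):
    """Compare hourly costs of (a) grid electricity plus boiler heat vs
    (b) running the PGU and recovering its waste heat. Efficiencies:
    eta_e = PGU electric, eta_rec = heat recovery, eta_boiler = boiler."""
    # Conventional: buy electricity; burn fuel in a boiler for heat.
    cost_conv = e_demand_kwh * grid_price + (h_demand_kwh / eta_boiler) * fuel_price
    # CHP: burn fuel in the PGU to meet electric demand; recovered
    # heat offsets boiler fuel for the thermal demand.
    fuel_in = e_demand_kwh / eta_e
    heat_rec = min(fuel_in * eta_rec, h_demand_kwh)
    cost_chp = fuel_in * fuel_price + ((h_demand_kwh - heat_rec) / eta_boiler) * fuel_price
    return cost_chp < cost_conv, cost_conv, cost_chp
```

A real strategy layers on part-load engine curves, start-up costs, and the load forecasts from item (d), but the run/do-not-run comparison above is the kernel being optimized.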

  18. Reproducibility of neuroimaging analyses across operating systems.

    PubMed

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
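
The sensitivity to single-precision arithmetic that the study identifies is easy to reproduce. This small sketch (not from the paper) emulates float32 accumulation with `struct` and shows the single-precision result drifting measurably from the double-precision one; in long pipelines, platform-dependent rounding in math libraries compounds in exactly this way.

```python
import struct

def f32(x):
    """Round a Python float to IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Sum 1e-4 ten thousand times in single vs double precision.
single, double = 0.0, 0.0
for _ in range(10000):
    single = f32(single + f32(1e-4))   # every intermediate rounded to float32
    double += 1e-4

# The single-precision sum drifts from 1.0 by tens of parts per million;
# accumulated over long pipelines, such drift produces the divergent
# classifications and thickness estimates reported above.
drift = abs(single - double)
```
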

  19. Space System Applications to Tactical Operations.

    DTIC Science & Technology

    1984-10-01

have computers in your offices -- or even at home. Over the past 25 years, we have witnessed an explosion in computer technology with processing speeds... process of placing small computers at operational units to aid in routine tasks such as mission planning. These capabilities exist today largely because of... Programs to develop very high speed integrated circuits, or VHSIC, will push computer processing speeds beyond 30 million operations per second, will

  20. Top 10 Threats to Computer Systems Include Professors and Students

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  1. Policy Information System Computer Program.

    ERIC Educational Resources Information Center

    Hamlin, Roger E.; And Others

    The concepts and methodologies outlined in "A Policy Information System for Vocational Education" are presented in a simple computer format in this booklet. It also contains a sample output representing 5-year projections of various planning needs for vocational education. Computerized figures in the eight areas corresponding to those in the…

  2. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  3. Computer-controlled radiation monitoring system

    SciTech Connect

    Homann, S.G.

    1994-09-27

A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years and has been a valuable tool for keeping personnel exposure as low as reasonably achievable.
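
The interlock logic described is essentially a threshold check over a stream of readings. A minimal sketch (the limits, units, and names are invented for illustration, not LLNL values):

```python
PHOTON_LIMIT = 2.0    # illustrative preset limit (mrem/h)
NEUTRON_LIMIT = 1.0   # illustrative preset limit (mrem/h)

def check_interlock(photon, neutron):
    """Return True when accelerator operation should be suspended,
    i.e. when either reading exceeds its preset level."""
    return photon > PHOTON_LIMIT or neutron > NEUTRON_LIMIT

# A stream of (photon, neutron) readings; the last one trips the interlock.
readings = [(0.3, 0.1), (0.5, 0.2), (2.5, 0.2)]
suspended = any(check_interlock(p, n) for p, n in readings)
```

A production system would also latch the trip, log the event, and require a deliberate operator reset before resuming operation.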

  4. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.

  5. Cronus: A Distributed Operating System.

    DTIC Science & Technology

    1983-11-01

machine interface. ... Report No. 5086, Bolt Beranek and Newman Inc. standard operating systems (e.g., a Digital Equipment Corporation VAX... One from Ungermann-Bass, Inc. o ProNet from Proteon Associates o PolyNet from Logica, Inc. o... configuration. PolyNet from Logica, Inc. PolyNet is a commercial version of the Cambridge University Ring Network that has become quite popular in the

  6. National Ignition Facility integrated computer control system

    SciTech Connect

    Van Arsdall, P.J., LLNL

    1998-06-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  7. National Ignition Facility integrated computer control system

    NASA Astrophysics Data System (ADS)

    Van Arsdall, Paul J.; Bettenhausen, R. C.; Holloway, Frederick W.; Saroyan, R. A.; Woodruff, J. P.

    1999-07-01

The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  8. Towards molecular computers that operate in a biological environment

    NASA Astrophysics Data System (ADS)

    Kahan, Maya; Gil, Binyamin; Adar, Rivka; Shapiro, Ehud

    2008-07-01

    important consequences when performed in a proper context. We envision that molecular computers that operate in a biological environment can be the basis of “smart drugs”, which are potent drugs that activate only if certain environmental conditions hold. These conditions could include abnormalities in the molecular composition of the biological environment that are indicative of a particular disease. Here we review the research direction that set this vision and attempts to realize it.

  9. Time Warp Operating System, Version 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; Younger, Herbert C.

    1993-01-01

Time Warp Operating System, TWOS, is a special-purpose computer program designed to support parallel simulation of discrete events. It is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual-time synchronization based on rollback of processes and annihilation of messages. TWOS supports simulations and other computations in which both virtual time and dynamic load balancing are used. The program utilizes the underlying resources of the operating system. Written in the C programming language.
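
The rollback mechanism at the heart of Time Warp can be sketched for a single logical process. This toy version is not TWOS code: it checkpoints state after every event and rolls back when a straggler (an event older than local virtual time) arrives; real Time Warp also sends antimessages to annihilate the outputs of rolled-back events, which is omitted here.

```python
class LogicalProcess:
    """Optimistically executes timestamped events; on a straggler it
    restores the last checkpoint at or before the straggler's time and
    re-executes all affected events in timestamp order."""
    def __init__(self):
        self.lvt = 0             # local virtual time
        self.state = 0           # toy state: a running sum
        self.log = [(0, 0)]      # saved (time, state) checkpoints
        self.processed = []      # events already executed

    def handle(self, t, value):
        if t < self.lvt:                           # straggler: roll back
            while self.log[-1][0] > t:             # discard too-new checkpoints
                self.log.pop()
            self.lvt, self.state = self.log[-1]
            redo = sorted(e for e in self.processed if e[0] >= t)
            self.processed = [e for e in self.processed if e[0] < t]
            for ev in [(t, value)] + redo:         # re-execute in order
                self._execute(*ev)
        else:
            self._execute(t, value)

    def _execute(self, t, value):
        self.state += value
        self.lvt = t
        self.processed.append((t, value))
        self.log.append((t, self.state))           # checkpoint after each event

lp = LogicalProcess()
lp.handle(10, 1)
lp.handle(30, 2)    # optimistic execution races ahead
lp.handle(20, 5)    # straggler: triggers rollback to t=10, then redo
```

After the rollback, the final state is identical to a strictly time-ordered execution, which is the correctness guarantee virtual-time synchronization provides.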

  10. A computer system for geosynchronous satellite navigation

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1980-01-01

A computer system specifically designed to estimate and predict Geostationary Operational Environmental Satellite (GOES-4) navigation parameters using Earth imagery is described. The estimates are needed for spacecraft maneuvers, while predictions provide the capability for near real-time image registration. System software is composed of four functional subsystems: (1) data base management; (2) image processing; (3) navigation; and (4) output. Hardware consists of a host minicomputer, a cathode ray tube terminal, a graphics/video display unit, and associated input/output peripherals. System validity is established through the processing of actual imagery obtained by sensors on board the Synchronous Meteorological Satellite (SMS-2). Results indicate the system is capable of operationally providing both accurate GOES-4 navigation estimates and images with a potential registration accuracy of several picture elements (pixels).

  11. Operating the Worldwide LHC Computing Grid: current and future challenges

    NASA Astrophysics Data System (ADS)

    Flix Molina, J.; Forti, A.; Girone, M.; Sciaba, A.

    2014-06-01

The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of testing and deployment of Grid middleware and services, communication with the experiments and the sites, follow-up and resolution of operational issues, and medium/long term planning. In 2012 WLCG critically reviewed all operational procedures and restructured the organisation of the operations team into a more coherent effort in order to improve its efficiency. In this paper we describe how the new organisation works, its recent successes, and the changes to be implemented during the long LHC shutdown in preparation for LHC Run 2.

  12. A Computerized Operator Support System Prototype

    SciTech Connect

    Ken Thomas; Ronald Boring; Roger Lew; Tom Ulrich; Richard Villim

    2013-11-01

    A report was published by the Idaho National Laboratory in September of 2012, entitled Design to Achieve Fault Tolerance and Resilience, which described the benefits of automating operator actions for transients. The report identified situations in which providing additional automation in lieu of operator actions would be advantageous. It recognized that managing certain plant upsets is sometimes limited by the operator’s ability to quickly diagnose the fault and to take the needed actions in the time available. Undoubtedly, technology is underutilized in the nuclear power industry for operator assistance during plant faults and operating transients. In contrast, other industry sectors have amply demonstrated that various forms of operator advisory systems can enhance operator performance while maintaining the role and responsibility of the operator as the independent and ultimate decision-maker. A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS does not supplant the role of the operator, but rather provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. This project proposes a general model for a control room COSS that addresses a sequence of general tasks required to manage any plant upset: detection, validation, diagnosis, recommendation, monitoring, and recovery. The model serves as a framework for assembling a set of technologies that can be interrelated to assist with each of these tasks. A prototype COSS has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based
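
The detection-through-recovery sequence can be sketched as a gated pipeline. The stage handlers and plant-data keys below are invented for illustration, not taken from the report; the point is that each stage gates the next, and the output is advice only, keeping the operator as the final decision-maker.

```python
# Hypothetical stage handlers; names and plant keys are illustrative.
def detect(plant):
    return plant["alarm"]

def validate(plant):
    # Cross-check redundant sensors before trusting the indication.
    return plant["sensor_a"] == plant["sensor_b"]

def diagnose(plant):
    return "feedwater_fault" if plant["flow_low"] else "unknown"

def run_coss(plant):
    """Walk the COSS task sequence (detection -> validation -> diagnosis
    -> recommendation); monitoring and recovery would follow the same
    gated pattern. Returns advisory messages, never control actions."""
    advice = []
    if not detect(plant):
        return advice                      # no upset: nothing to advise
    advice.append(("detection", "plant upset detected"))
    if not validate(plant):
        advice.append(("validation", "sensor disagreement - verify manually"))
        return advice
    fault = diagnose(plant)
    advice.append(("diagnosis", fault))
    advice.append(("recommendation", "procedure for " + fault))
    return advice
```
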

  13. A Computerized Operator Support System Prototype

    SciTech Connect

    Ken Thomas; Ronald Boring; Roger Lew; Tom Ulrich; Richard Villim

    2013-08-01

    A report was published by the Idaho National Laboratory in September of 2012, entitled Design to Achieve Fault Tolerance and Resilience, which described the benefits of automating operator actions for transients. The report identified situations in which providing additional automation in lieu of operator actions would be advantageous. It recognized that managing certain plant upsets is sometimes limited by the operator’s ability to quickly diagnose the fault and to take the needed actions in the time available. Undoubtedly, technology is underutilized in the nuclear power industry for operator assistance during plant faults and operating transients. In contrast, other industry sectors have amply demonstrated that various forms of operator advisory systems can enhance operator performance while maintaining the role and responsibility of the operator as the independent and ultimate decision-maker. A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS does not supplant the role of the operator, but rather provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. This project proposes a general model for a control room COSS that addresses a sequence of general tasks required to manage any plant upset: detection, validation, diagnosis, recommendation, monitoring, and recovery. The model serves as a framework for assembling a set of technologies that can be interrelated to assist with each of these tasks. A prototype COSS has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based

  14. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    SciTech Connect

    Carter, R.L. Jr.

    1994-11-07

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS).

  15. Computer control system of TRISTAN

    NASA Astrophysics Data System (ADS)

    Akiyama, A.; Ishii, K.; Kadokura, E.; Katoh, T.; Kikutani, E.; Kimura, Y.; Komada, I.; Kudo, K.; Kurokawa, S.; Oide, K.; Takeda, S.; Uchino, K.

The 8 GeV accumulation ring and the 30 GeV × 30 GeV main ring of TRISTAN, an accelerator-storage ring complex at KEK, are controlled by a single computer system. About twenty minicomputers (Hitachi HIDIC 80-E's) are linked to each other by optical fiber cables to form an N-to-N token-passing ring network with a 10 Mbps transmission speed. The software system is based on the NODAL interpreter developed at the CERN SPS. The KEK version of NODAL uses a compiler-interpreter method to increase its execution speed. In addition, a multi-computer file system, a screen editor, and dynamic linkage of data modules and functions are characteristics of KEK NODAL.

  16. The Impact of Computer-Based Training on Operating and Support Costs for the AN/SQQ-89 (v) Sonar System

    DTIC Science & Technology

    2013-04-01

that library, located online at www.acquisitionresearch.net, at a rate of roughly 140 reports per year. This activity has engaged researchers at over... Security Cooperation Agency; Deputy Assistant Secretary of the Navy, Research, Development, Test, & Evaluation; Program Executive Officer, Tactical... the macro level, the training demands are driven by the Required Operational Capabilities and Projected Operating Environments (ROC/POE). ROC/POE is

  17. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.

  18. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
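
The challenge-response mechanics can be made concrete with a small sketch. The matrix contents here are invented; the logic follows the description: the challenge is two unused subsets sharing neither row nor column, and the correct response is the two subsets at the opposite corners of the rectangle they define.

```python
import random

# A small illustrative 3x3 matrix of alpha-numeric character subsets.
MATRIX = [["AX", "B7", "C2"],
          ["D9", "EK", "F4"],
          ["G1", "HQ", "J8"]]

def challenge(used):
    """Pick two previously unused subsets that share neither a row
    nor a column; return their (row, col) positions."""
    cells = [(r, c) for r in range(3) for c in range(3)
             if MATRIX[r][c] not in used]
    while True:
        (r1, c1), (r2, c2) = random.sample(cells, 2)
        if r1 != r2 and c1 != c2:
            return (r1, c1), (r2, c2)

def expected_response(p1, p2):
    """The valid response completes the rectangle: the subsets at the
    two opposite corners defined by the challenge positions."""
    (r1, c1), (r2, c2) = p1, p2
    return {MATRIX[r1][c2], MATRIX[r2][c1]}
```

Marking all four subsets as used after each exchange is what defeats eavesdropping: a recorded challenge-response pair is never valid again.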

  19. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node
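
The hypercube topology underlying such a system has a compact description: label the 2^d working nodes with d-bit integers and connect nodes whose labels differ in exactly one bit. A short sketch (illustrative, not from the patent):

```python
def neighbors(node, dim):
    """In a d-dimensional hypercube, each node links to the d nodes
    whose binary labels differ from its own in exactly one bit."""
    return [node ^ (1 << b) for b in range(dim)]

def hops(a, b):
    """Minimum message hops between two nodes equals the Hamming
    distance between their labels (count of differing bits)."""
    return bin(a ^ b).count("1")
```

The separate watchdog network in the patent is what this structure lacks on its own: test and reconfiguration traffic travels out-of-band so a fault in the hypercube links cannot mask itself.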

  20. Operational Management System for Regulated Water Systems

    NASA Astrophysics Data System (ADS)

    van Loenen, A.; van Dijk, M.; van Verseveld, W.; Berger, H.

    2012-04-01

    Most of the large Dutch rivers, canals and lakes are controlled by the Dutch water authorities, mainly for reasons of safety, navigation and fresh water supply. Historically, the separate water bodies have been controlled locally. Optimizing the management of these water systems required an integrated approach. Presented is a platform which integrates data from all control objects for monitoring and control purposes. The Operational Management System for Regulated Water Systems (IWP) is an implementation of Delft-FEWS which supports operational control of water systems and actively gives advice. One of the main characteristics of IWP is that it collects, transforms and presents different types of data in real time, all of which add to the operational water management. In addition, hydrodynamic models and intelligent decision support tools are added to support the water managers during their daily control activities. An important advantage of IWP is that it uses the Delft-FEWS framework, so that processes like central data collection, transformation, data processing and presentation are simply configured. At all control locations the same information is readily available. The operational water management itself gains from this information, but it can also contribute to cost efficiency (no unnecessary pumping), better use of available storage and advice during (water pollution) calamities.

  1. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  2. Computer-aided system design

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  3. Program MASTERCALC: an interactive computer program for radioanalytical computations. Description and operating instructions

    SciTech Connect

    Goode, W.

    1980-10-01

    MASTERCALC is a computer program written to support radioanalytical computations in the Los Alamos Scientific Laboratory (LASL) Environmental Surveillance Group. Included in the program are routines for gross alpha and beta, ³H, gross gamma, ⁹⁰Sr, and alpha spectroscopic determinations. A description of MASTERCALC is presented and its source listing is included. Operating instructions and example computing sessions are given for each type of analysis.
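The gross alpha/beta determinations MASTERCALC supports reduce, at their core, to converting net count rates into activities. A hedged sketch of the standard conversion (not MASTERCALC's actual code; the function name and arguments are illustrative, and 2.22 dpm/pCi is the defining constant of the picocurie):

```python
def activity_pci(gross_counts: float, count_min: float,
                 background_cpm: float, efficiency: float) -> float:
    """Net activity in picocuries.

    gross_counts   -- total counts observed over the counting interval
    count_min      -- counting interval in minutes
    background_cpm -- detector background, counts per minute
    efficiency     -- counts registered per disintegration (0..1)

    1 pCi = 2.22 disintegrations per minute.
    """
    net_cpm = gross_counts / count_min - background_cpm
    return net_cpm / (efficiency * 2.22)
```

For example, 1110 counts in 10 minutes with a 1.0 cpm background and 50% efficiency is a net 110 cpm, i.e. about 99 pCi.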

  4. A space transportation system operations model

    NASA Technical Reports Server (NTRS)

    Morris, W. Douglas; White, Nancy H.

    1987-01-01

    Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.

  5. Mathemagical Computing: Order of Operations and New Software.

    ERIC Educational Resources Information Center

    Ecker, Michael W.

    1989-01-01

    Describes mathematical problems which occur when using the computer as a calculator. Considers errors in BASIC calculation and the order of mathematical operations. Identifies errors in spreadsheet and calculator programs. Comments on sorting programs and provides a source for Mathemagical Black Holes. (MVL)
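The order-of-operations errors the article describes are easy to reproduce today. A short illustration in Python (the spreadsheet behavior noted in the comment is the classic Excel convention, mentioned here only as a hedged contrast):

```python
# Multiplication binds tighter than addition; parentheses override.
assert 2 + 3 * 4 == 14
assert (2 + 3) * 4 == 20

# Python follows the mathematical convention: exponentiation binds
# tighter than unary minus, so -3**2 is -(3**2). Some spreadsheets
# historically parse =-3^2 as (-3)^2 = 9 instead.
assert -3 ** 2 == -9
assert (-3) ** 2 == 9   # parenthesize to force the intended meaning
```

Explicit parentheses remain the only portable defense across calculators, BASIC dialects, and spreadsheets.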

  6. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  7. Computer Pure-Tone and Operator Stress: Report III.

    ERIC Educational Resources Information Center

    Dow, Caroline; Covert, Douglas C.

    Pure-tone sound at 15,750 Hertz generated by flyback transformers in many computer and video display terminal (VDT) monitors has stress-related productivity effects in some operators, especially women. College-age women in a controlled experiment simulating half a normal work day showed responses within the first half hour of exposure to a tone…

  8. Operation plan for the data 100/LARS terminal system

    NASA Technical Reports Server (NTRS)

    Bowen, A. J., Jr.

    1980-01-01

    The Data 100/LARS terminal system provides an interface for processing on the IBM 3031 computer system at Purdue University's Laboratory for Applications of Remote Sensing. The environment in which the system is operated and supported is discussed. The general support responsibilities, procedural mechanisms, and training established for the benefit of the system users are defined.

  9. A universal computer control system for motors

    NASA Astrophysics Data System (ADS)

    Szakaly, Zoltan F.

    1991-09-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high speed data link is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors with all processors being connected to a common bus, and including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits and the sensed analog signals are converted to digital signals for storage at the controller cards, where such signals can be read during an address read/write cycle of the command processor.

  10. A universal computer control system for motors

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high speed data link is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors with all processors being connected to a common bus, and including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits and the sensed analog signals are converted to digital signals for storage at the controller cards, where such signals can be read during an address read/write cycle of the command processor.

  11. Satellite operations support expert system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Satellite Operations Support Expert System is an effort to identify aspects of satellite ground support activity which could profitably be automated with artificial intelligence (AI) and to develop a feasibility demonstration for the automation of one such area. The hydrazine propulsion subsystems (HPS) of the International Sun Earth Explorer (ISEE) and the International Ultraviolet Explorer (IUE) were used as applications domains. A demonstration fault handling system was built. The system was written in Franz Lisp and is currently hosted on a VAX 11/750-11/780 family machine. The system allows the user to select which HPS (either from ISEE or IUE) is used. Then the user chooses the fault desired for the run. The demonstration system generates telemetry corresponding to the particular fault. The completely separate fault handling module then uses this telemetry to determine what and where the fault is and how to work around it. Graphics are used to depict the structure of the HPS, and the telemetry values displayed on the screen are continually updated. The capabilities of this system and its development cycle are described.

  12. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
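Closed queuing network models of the kind described here are commonly solved by exact Mean Value Analysis, which iterates over the customer population using per-device residence times and Little's law. A minimal single-class sketch (illustrative; the report's SR-52 algorithms work on G and H matrices and are not reproduced here):

```python
def mva(demands: list[float], n_customers: int):
    """Exact Mean Value Analysis for a closed, single-class network of
    queueing centers. `demands` are service demands D_k (visits x service
    time) per device; `n_customers` must be >= 1.
    Returns (system throughput, per-device residence times)."""
    q = [0.0] * len(demands)                 # mean queue length per device
    for n in range(1, n_customers + 1):
        # arriving customer sees the (n-1)-customer mean queue lengths
        r = [d * (1 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)                       # throughput at population n
        q = [x * rk for rk in r]             # Little's law per device
    return x, r
```

With one customer, throughput is simply 1 over the total demand; with one device the network saturates at 1/D.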

  13. An Operational TWSTT Monitoring System

    DTIC Science & Technology

    1997-12-01

    29th Annual Precise Time and Time Interval (PTTI) Meeting. AN OPERATIONAL TWSTT MONITORING SYSTEM. P. Mai and J. A. DeYoung, U.S. Naval Observatory, Time Service Department, 3450 Massachusetts Avenue NW, Washington, DC 20392-5240 USA. phu@simon.usno.navy.mil, dey@herschel.usno.navy.mil. The U.S. Naval Observatory (USNO) Time Service (TS) uses the AOA TWT-100 Atlantis Modem for its most important Two-Way Satellite Time Transfer (TWSTT) applications

  14. Effectiveness evaluation of STOL transport operations (phase 2). [computer simulation program of commercial short haul aircraft operations

    NASA Technical Reports Server (NTRS)

    Welp, D. W.; Brown, R. A.; Ullman, D. G.; Kuhner, M. B.

    1974-01-01

    A computer simulation program which models a commercial short-haul aircraft operating in the civil air system was developed. The purpose of the program is to evaluate the effect of a given aircraft avionics capability on the ability of the aircraft to perform on-time carrier operations. The program outputs consist primarily of those quantities which can be used to determine direct operating costs. These include: (1) schedule reliability or delays, (2) repairs/replacements, (3) fuel consumption, and (4) cancellations. More comprehensive models of the terminal area environment were added and a simulation of an existing airline operation was conducted to obtain a form of model verification. The capability of the program to provide comparative results (sensitivity analysis) was then demonstrated by modifying the aircraft avionics capability for additional computer simulations.

  15. PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM

    NASA Technical Reports Server (NTRS)

    Roberts, F. E.

    1994-01-01

    The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to set up, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the

  16. Millimeter wave transmissometer computer system

    SciTech Connect

    Wiberg, J.D.; Widener, K.B.

    1990-04-01

    A millimeter wave transmissometer has been designed and built by the Pacific Northwest Laboratory in Richland, Washington for the US Army at the Dugway Proving Grounds in Dugway, Utah. This real-time data acquisition and control system is used to test and characterize battlefield obscurants according to the transmittance of electromagnetic radiation in the millimeter wavelengths. It is an advanced five-frequency instrumentation radar system consisting of a transceiver van and a receiver van deployed at opposite sides of a test grid. The transceiver computer system is the successful integration of a Digital Equipment Corporation (DEC) VAX 8350, multiple VME bus systems with Motorola M68020 processors (one for each radar frequency), an IEEE-488 instrumentation bus, and an Aptec IOC-24 I/O computer. The software development platforms are the VAX 8350 and an IBM PC/AT. A variety of compilers, cross-assemblers, microcode assemblers, and linkers were employed to facilitate development of the system software. Transmittance measurements from each radar are taken forty times per second under control of a VME based M68020.

  17. Man-Computer Interactive Data Access System (McIDAS). Continued development of McIDAS and operation in the GARP Atlantic tropical experiment

    NASA Technical Reports Server (NTRS)

    Suomi, V. E.

    1975-01-01

    The complete output of the Synchronous Meteorological Satellite was recorded on one inch magnetic tape. A quality control subsystem tests cloud track vectors against four sets of criteria: (1) rejection if the best match occurs on a correlation boundary; (2) rejection if the major correlation peak is not distinct and significantly greater than the secondary peak; (3) rejection if the correlation is not persistent; and (4) rejection if the acceleration is too great. A cloud height program determines cloud optical thickness from visible data and computed infrared emissivity. From the infrared data and temperature profile, cloud height is determined. A functional description and electronic schematics of equipment are given.
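The four rejection criteria form a straightforward accept/reject filter. A hedged sketch (the function, argument names, and threshold values are illustrative; the report does not state numeric thresholds):

```python
def accept_vector(on_boundary: bool, peak_ratio: float,
                  persistent: bool, accel: float, *,
                  min_peak_ratio: float = 1.2,
                  max_accel: float = 5.0) -> bool:
    """Apply the McIDAS-style quality-control criteria to one
    candidate cloud-track vector; return True iff it survives all four."""
    if on_boundary:                 # (1) best match on correlation boundary
        return False
    if peak_ratio < min_peak_ratio: # (2) major peak not distinct enough
        return False
    if not persistent:              # (3) correlation not persistent
        return False
    if accel > max_accel:           # (4) implied acceleration too great
        return False
    return True
```

Each criterion is independent, so a vector is kept only when every test passes.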

  18. Test, Control and Monitor System (TCMS) operations plan

    NASA Technical Reports Server (NTRS)

    Macfarlane, C. K.; Conroy, M. P.

    1993-01-01

    The purpose is to provide a clear understanding of the Test, Control and Monitor System (TCMS) operating environment and to describe the method of operations for TCMS. TCMS is a complex and sophisticated checkout system focused on support of the Space Station Freedom Program (SSFP) and related activities. An understanding of the TCMS operating environment is provided and operational responsibilities are defined. NASA and the Payload Ground Operations Contractor (PGOC) will use it as a guide to manage the operation of the TCMS computer systems and associated networks and workstations. All TCMS operational functions are examined. Other plans and detailed operating procedures relating to an individual operational function are referenced within this plan. This plan augments existing Technical Support Management Directives (TSMD's), Standard Practices, and other management documentation which will be followed where applicable.

  19. Non-developmental item computer systems and the malicious software threat

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L.

    1991-01-01

    The following subject areas are covered: a DOD development system - the Army Secure Operating System; non-development commercial computer systems; security, integrity, and assurance of service (SI and A); post delivery SI and A and malicious software; computer system unique attributes; positive feedback to commercial computer systems vendors; and NDI (Non-Development Item) computers and software safety.

  20. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
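The query engine's key property is that each next question, given the history, has a nearly unpredictable answer. One way to sketch that selection rule (illustrative only; the paper's engine uses statistical constraints learned from a training set, not this toy scoring):

```python
def next_question(candidates, history, predict):
    """Pick the candidate question whose predicted probability of a
    positive answer, conditioned on the answered history, is closest
    to 1/2 -- i.e., whose answer is most nearly unpredictable.

    predict(question, history) -> estimated P(answer is yes)."""
    return min(candidates, key=lambda q: abs(predict(q, history) - 0.5))
```

In this sense the test "is only about vision": a question whose answer is nearly 50/50 given the history conveys no free information.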

  1. Embedded systems for supporting computer accessibility.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.

  2. Thermoelectric property measurements with computer controlled systems

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.

  3. Checkpoint triggering in a computer system

    SciTech Connect

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
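The claimed method amounts to periodically sampling a task metric and creating a checkpoint when the value crosses a threshold. A minimal sketch of that trigger logic (illustrative; class and method names are not from the patent):

```python
class CheckpointTrigger:
    """Fire when a monitored task metric crosses a threshold from below.
    The caller creates the checkpoint (task state for restart) on True."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last = None          # previous monitor reading, if any

    def observe(self, value: float) -> bool:
        """Record one monitor reading; return True iff the metric has
        just crossed the threshold (previous reading below, this one
        at or above)."""
        crossed = self.last is not None and self.last < self.threshold <= value
        self.last = value
        return crossed
```

A crossing, rather than a simple comparison, prevents re-checkpointing on every sample while the metric stays high.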

  4. Systems engineering considerations for operational support systems

    NASA Technical Reports Server (NTRS)

    Aller, Robert O.

    1993-01-01

    Operations support as considered here is the infrastructure of people, procedures, facilities and systems that provide NASA with the capability to conduct space missions. This infrastructure involves most of the Centers but is concentrated principally at the Johnson Space Center, the Kennedy Space Center, the Goddard Space Flight Center, and the Jet Propulsion Laboratory. It includes mission training and planning, launch and recovery, mission control, tracking, communications, data retrieval and data processing.

  5. Advanced Space Surface Systems Operations

    NASA Technical Reports Server (NTRS)

    Huffaker, Zachary Lynn; Mueller, Robert P.

    2014-01-01

    The importance of advanced surface systems is becoming increasingly relevant in the modern age of space technology. Specifically, projects pursued by the Granular Mechanics and Regolith Operations (GMRO) Lab are unparalleled in the field of planetary resourcefulness. This internship opportunity involved projects that support properly utilizing natural resources from other celestial bodies. Beginning with the tele-robotic workstation, mechanical upgrades were necessary to consider for specific portions of the workstation consoles and successfully designed in concept. This would provide more means for innovation and creativity concerning advanced robotic operations. Project RASSOR is a regolith excavator robot whose primary objective is to mine, store, and dump regolith efficiently on other planetary surfaces. Mechanical adjustments were made to improve this robot's functionality, although there were some minor system changes left to perform before the opportunity ended. On the topic of excavator robots, the notes taken by the GMRO staff during the 2013 and 2014 Robotic Mining Competitions were effectively organized and analyzed for logistical purposes. Lessons learned from these annual competitions at Kennedy Space Center are greatly influential to the GMRO engineers and roboticists. Another project that GMRO staff support is Project Morpheus. Support for this project included successfully producing mathematical models of the eroded landing pad surface for the vertical testbed vehicle to predict a timeline for pad repair. And finally, the last project this opportunity contributed to was Project Neo, a project exterior to GMRO Lab projects, which focuses on rocket propulsion systems. Additions were successfully installed to the support structure of an original vertical testbed rocket engine, thus making progress towards futuristic test firings in which data will be analyzed by students affiliated with Rocket University. Each project will be explained in

  6. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    NASA Technical Reports Server (NTRS)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  7. Success with an automated computer control system

    NASA Astrophysics Data System (ADS)

    Roberts, M. L.; Moore, T. L.

    1991-05-01

    LLNL has successfully implemented a distributed computer control system for automated operation of an FN tandem accelerator. The control system software utilized is the Thaumaturgic Automated Control Logic (TACL) written by the Continuous Electron Beam Accelerator Facility and co-developed with LLNL. Using TACL, accelerator components are controlled through CAMAC using a two-tiered structure. Analog control and measurement are at 12 or 16 bit precision as appropriate. Automated operation has been implemented for several nuclear analytical techniques including hydrogen depth profiling and accelerator mass spectrometry. An additional advantage of TACL lies in its expansion capabilities. Without disturbing existing control definitions and algorithms, additional control algorithms and display functions can be implemented quickly.

  8. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
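The claimed selection step is a per-node minimization over the available implementations of one collective type, keyed by a power-consumption model. A hedged sketch (names are illustrative; the patent does not specify the model's form):

```python
def select_collective(implementations, power_model):
    """Each compute node picks, from the implementations available for
    the requested collective-operation type, the one its power model
    rates cheapest. power_model(impl) -> estimated power cost."""
    return min(implementations, key=power_model)
```

Because every node evaluates the same deterministic model over the same candidate list, all nodes independently select, and then execute, the same collective.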

  9. Software Systems for High-performance Quantum Computing

    SciTech Connect

    Humble, Travis S; Britt, Keith A

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  10. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on compiler performance, examining the performance impact of optimization in the context of our abstract-machine-based methodology for CPU performance characterization. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, along with the smaller efforts supported by this grant.

  11. Operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The major operational areas of the COSMIC center are described. Quantitative data on the software submittals, program verification, and evaluation are presented. The dissemination activities are summarized. Customer services and marketing activities of the center for the calendar year are described. Those activities devoted to the maintenance and support of selected programs are described. A Customer Information system, the COSMIC Abstract Recording System Project, and the COSMIC Microfiche Project are summarized. Operational cost data are summarized.

  12. Computer auditing of surgical operative reports written in English.

    PubMed

    Lamiell, J M; Wojcik, Z M; Isaacks, J

    1993-01-01

    We developed a script-based scheme for automated auditing of natural language surgical operative reports. Suitable operations (appendectomy and breast biopsy) were selected, then audit criteria and operation scripts conforming with our audit criteria were developed. Our LISP parser was context and expectation sensitive. Parsed sentences were represented by semigraph structures and placed in a textual database to improve efficiency. Sentence ambiguities were resolved by matching the narrative textual database to the script textual database and employing the Uniform Medical Language System (UMLS) Knowledge Sources. All audit criteria questions were successfully answered for typical operative reports by matching parsed audit questions to the textual database.

  13. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  14. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  15. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.

  16. Prognostic Analysis System and Methods of Operation

    NASA Technical Reports Server (NTRS)

    MacKey, Ryan M. E. (Inventor); Sneddon, Robert (Inventor)

    2014-01-01

    A prognostic analysis system and methods of operating the system are provided. In particular, a prognostic analysis system for the analysis of physical system health applicable to mechanical, electrical, chemical and optical systems and methods of operating the system are described herein.

  17. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  18. The computer emergency response team system (CERT-System)

    SciTech Connect

    Schultz, E.E.

    1991-10-11

    This paper describes CERT-System, an international affiliation of computer security response teams. Formed after the WANK and OILZ worms attacked numerous systems connected to the Internet, an operational charter was signed by representatives of 11 response teams. This affiliation's purpose is to provide a forum for ideas about incident response and computer security, share information, solve common problems, and develop strategies for responding to threats, incidents, etc. The achievements and advantages of participation in CERT-System are presented along with suggested growth areas for this affiliation. The views presented in this paper are the views of one member, and do not necessarily represent the views of others affiliated with CERT-System.

  19. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  20. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling in the function bodies of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  1. Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1982-06-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has...reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems...incorrect. Additionally, although fault-tolerance is usually listed as an advantage of distributed computing systems, little has been done to analyze

  2. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.
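The logical-ring allreduce at the heart of this scheme can be illustrated by a single-process sketch in which each simulated node forwards values around one ring until every node holds the global sum. This is a didactic simplification, not the patented multi-ring, multi-core method:

```python
# Didactic single-process sketch of a ring allreduce (sum).  Each "node"
# contributes one number; in every step, all nodes simultaneously forward
# the value they last received to their right-hand neighbour and add the
# value arriving from their left-hand neighbour.  After n-1 steps, every
# node holds the sum of all contributions.

def ring_allreduce(contributions):
    n = len(contributions)
    totals = list(contributions)     # each node's running sum
    in_flight = list(contributions)  # the value each node forwards next
    for _ in range(n - 1):
        # simultaneous exchange: node i receives from node (i - 1) mod n
        received = [in_flight[(i - 1) % n] for i in range(n)]
        for i in range(n):
            totals[i] += received[i]
        in_flight = received
    return totals

print(ring_allreduce([1, 2, 3]))  # every node ends with 6
```

The patented method layers several such rings across the cores of each node and finishes with a node-local reduction, but the per-ring communication pattern is the same.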

  3. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.

  4. System optimization of gasdynamic lasers, computer program user's manual

    NASA Technical Reports Server (NTRS)

    Otten, L. J., III; Saunders, R. C., III; Morris, S. J.

    1978-01-01

The user's manual for a computer program that performs system optimization of gasdynamic lasers is provided. Detailed input/output formats are described for CDC 7600/6600 computers using a dialect of FORTRAN. Sample input/output data are provided to verify correct program operation, along with a program listing.

  5. Ergonomic intervention for improving work postures during notebook computer operation.

    PubMed

    Jamjumrus, Nuchrawee; Nanthavanij, Suebsak

    2008-06-01

    This paper discusses the application of analytical algorithms to determine necessary adjustments for operating notebook computers (NBCs) and workstations so that NBC users can assume correct work postures during NBC operation. Twenty-two NBC users (eleven males and eleven females) were asked to operate their NBCs according to their normal work practice. Photographs of their work postures were taken and analyzed using the Rapid Upper Limb Assessment (RULA) technique. The algorithms were then employed to determine recommended adjustments for their NBCs and workstations. After implementing the necessary adjustments, the NBC users were then re-seated at their workstations, and photographs of their work postures were re-taken, to perform the posture analysis. The results show that the NBC users' work postures are improved when their NBCs and workstations are adjusted according to the recommendations. The effectiveness of ergonomic intervention is verified both visually and objectively.

  6. Parallelizing Sylvester-like operations on a distributed memory computer

    SciTech Connect

    Hu, D.Y.; Sorensen, D.C.

    1994-12-31

Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^(mn); B, D ∈ R^(n×n); A ∈ R^(m×m); and V ∈ R^(mn×mn); both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric; all the results in this paper extend easily to general A and B. The linear operator on R^(mn) defined above can be viewed as a generalization of the Sylvester operator S(x) = (I_m ⊗ A + B ⊗ I_n)x, and the authors therefore refer to it as a Sylvester-like operator; the schemes discussed in this paper also apply to the Sylvester operator. In this paper, the authors present a SIMD scheme for parallelizing the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
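The Kronecker structure lets M(x) be applied without ever assembling the mn×mn matrix, via the identity kron(P, Q)·vec(X) = vec(P·X·Qᵀ). The sketch below assumes the identity factor has size m so that every term is mn×mn (an assumption: the abstract's identity subscripts do not fix the dimensions unambiguously):

```python
import numpy as np

# Sketch: apply M(x) = (D kron A + B kron I_m + V) x using only small
# matrix products.  Convention: x is the row-major flattening of an
# n x m array X, so kron(P, Q) @ x corresponds to P @ X @ Q.T.
# The I_m choice is an assumption made so that all terms conform.

def apply_M(D, A, B, v, x):
    """Apply the Sylvester-like operator; v holds the diagonal of V."""
    n, m = D.shape[0], A.shape[0]
    X = x.reshape(n, m)
    Y = D @ X @ A.T + B @ X          # (D kron A) x  +  (B kron I_m) x
    return Y.reshape(-1) + v * x     # + V x, with V diagonal

# Check against the explicitly assembled dense operator on a small case.
rng = np.random.default_rng(0)
n, m = 3, 4
D = np.diag(rng.standard_normal(n))            # D diagonal, per the abstract
A = rng.standard_normal((m, m)); A = A + A.T   # symmetric A
B = rng.standard_normal((n, n)); B = B + B.T   # symmetric B
v = rng.standard_normal(n * m)                 # diagonal of V
x = rng.standard_normal(n * m)
M_dense = np.kron(D, A) + np.kron(B, np.eye(m)) + np.diag(v)
print(np.allclose(apply_M(D, A, B, v, x), M_dense @ x))
```

The structure-exploiting form costs O(nm(n + m)) flops per application instead of O((nm)²), which is what makes distributed implementations of such operators worthwhile.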

  7. 9 CFR 205.201 - System operator.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Interpretive Opinions § 205.201 System operator. The system operator can be the Secretary of State of a State, or any designee of the State pursuant to its laws. Note that the provision in subsection (c)(2) for a system refers to operation by the Secretary of State of a State, but the definition in (c)(11)...

  8. 9 CFR 205.201 - System operator.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Interpretive Opinions § 205.201 System operator. The system operator can be the Secretary of State of a State, or any designee of the State pursuant to its laws. Note that the provision in subsection (c)(2) for a system refers to operation by the Secretary of State of a State, but the definition in (c)(11)...

  9. 9 CFR 205.201 - System operator.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Interpretive Opinions § 205.201 System operator. The system operator can be the Secretary of State of a State, or any designee of the State pursuant to its laws. Note that the provision in subsection (c)(2) for a system refers to operation by the Secretary of State of a State, but the definition in (c)(11)...

  10. Radiator design system computer programs

    NASA Technical Reports Server (NTRS)

    Wiggins, C. L.; Oren, J. A.; Dietz, J. B.

    1971-01-01

    Minimum weight space radiator subsystems which can operate over heat load ranges wider than the capabilities of current subsystems are investigated according to projected trends of future long duration space vehicles. Special consideration is given to maximum heat rejection requirements of the low temperature radiators needed for environmental control systems. The set of radiator design programs that have resulted from this investigation are presented in order to provide the analyst with a capability to generate optimum weight radiator panels or sets of panels from practical design considerations, including transient performance. Modifications are also provided for existing programs to improve capability and user convenience.

  11. Computer Aided Control System Design (CACSD)

    NASA Technical Reports Server (NTRS)

    Stoner, Frank T.

    1993-01-01

    The design of modern aerospace systems relies on the efficient utilization of computational resources and the availability of computational tools to provide accurate system modeling. This research focuses on the development of a computer aided control system design application which provides a full range of stability analysis and control design capabilities for aerospace vehicles.

  12. On Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1983-04-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has...reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems

  13. The Computer-Aided Analytic Process Model. Operations Handbook for the APM (Analytic Process Model) Demonstration Package. Appendix

    DTIC Science & Technology

    1986-01-01

    The Analytic Process Model for System Design and Measurement: A Computer-Aided Tool for Analyzing Training Systems and Other Human-Machine Systems. A...separate companion volume--The Computer-Aided Analytic Process Model : Operations Handbook for the APM Demonstration Package is also available under

  14. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  15. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB, or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer, with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  16. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  17. Satellite freeze forecast system. Operating/troubleshooting manual

    NASA Technical Reports Server (NTRS)

    Martsolf, J. D. (Principal Investigator)

    1983-01-01

Examples of operational procedures are given to assist users of the satellite freeze forecast system (SFFS) in logging on to the computer, executing the programs in the menu, logging off the computer, and setting up the automatic system. Directions are also given for displaying, acquiring, and listing satellite maps; for communicating via terminal and monitor displays; and for what to do when the SFFS doesn't work. Administrative procedures are included.

  18. New Human-Computer Interface Concepts for Mission Operations

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey A.; Hoxie, Mary Sue; Gillen, Dave; Parkinson, Christopher; Breed, Julie; Nickens, Stephanie; Baitinger, Mick

    2000-01-01

The current climate of budget cuts has forced the space mission operations community to reconsider how it does business. Gone are the days of building one-of-a-kind control centers with teams of controllers working in shifts 24 hours per day, 7 days per week. Increasingly, automation is used to significantly reduce staffing needs. In some cases, missions are moving towards lights-out operations where the ground system is run semi-autonomously. On-call operators are brought in only to resolve anomalies. Some operations concepts also call for smaller operations teams to manage an entire family of spacecraft. In the not too distant future, a skeleton crew of full-time general knowledge operators will oversee the operations of large constellations of small spacecraft, while geographically distributed specialists will be assigned to emergency response teams based on their expertise. As the operations paradigms change, so too must the tools that support the mission operations team's tasks. Tools need to be built not only to automate routine tasks, but also to communicate varying types of information to the part-time, generalist, or on-call operators and specialists more effectively. Thus, the proper design of a system's user-system interface (USI) becomes even more important than before. Also, because the users will be accessing these systems from various locations (e.g., control center, home, on the road) via different devices with varying display capabilities (e.g., workstations, home PCs, PDAs, pagers) over connections with various bandwidths (e.g., dial-up 56k, wireless 9.6k), the same software must have different USIs to support the different types of users, their equipment, and their environments. In other words, the software must now adapt to the needs of the users! This paper will focus on the needs and the challenges of designing USIs for mission operations. After providing a general discussion of these challenges, the paper will focus on the current efforts of

  19. Head-mounted display systems and the special operations soldier

    NASA Astrophysics Data System (ADS)

    Loyd, Rodney B.

    1998-08-01

    In 1997, the Boeing Company, working with DARPA under the Smart Modules program and the US Army Soldier Systems Command, embarked on an advanced research and development program to develop a wearable computer system tailored for use with soldiers of the US Special Operations Command. The 'special operations combat management system' is a rugged advanced wearable tactical computer, designed to provide the special operations soldier with enhanced situation awareness and battlefield information capabilities. Many issues must be considered during the design of wearable computers for a combat soldier, including the system weight, placement on the body with respect to other equipment, user interfaces and display system characteristics. During the initial feasibility study for the system, the operational environment was examined and potential users were interviewed to establish the proper display solution for the system. Many display system requirements resulted, such as head or helmet mounting, Night Vision Goggle compatibility, minimal visible light emissions, environmental performance and even the need for handheld or other 'off the head' type display systems. This paper will address these issues and other end user requirements for display systems for applications in the harsh and demanding environment of the Special Operations soldier.

  20. Software simulator for multiple computer simulation system

    NASA Technical Reports Server (NTRS)

    Ogrady, E. P.

    1983-01-01

A description is given of the structure and use of a computer program that simulates the operation of a parallel processor simulation system. The program is part of an investigation to determine algorithms that are suitable for simulating continuous systems on a parallel processor configuration. The simulator is designed to accurately simulate the problem-solving phase of a simulation study. Care has been taken to ensure the integrity and correctness of data exchanges and to correctly sequence periods of computation and periods of data exchange. It is pointed out that the functions performed during a problem-setup phase or a reset phase are not simulated. In particular, there is no attempt to simulate the downloading process that loads object code into the local, transfer, and mapping memories of processing elements or the memories of the run control processor and the system control processor. The main program of the simulator carries out some problem-setup functions of the system control processor in that it requests the user to enter values for simulation system parameters and problem parameters. The method by which these values are transferred to the other processors, however, is not simulated.

  1. Impact of new computing systems on finite element computations

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  2. System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.

    ERIC Educational Resources Information Center

    Keeler, F. Laurence

    This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer-aided authoring system AUTHOR, provides an outline of the design architecture of the system and an overview including the component…

  3. On the computational implementation of forward and back-projection operations for cone-beam computed tomography.

    PubMed

    Karimi, Davood; Ward, Rabab

    2016-08-01

    Forward- and back-projection operations are the main computational burden in iterative image reconstruction in computed tomography. In addition, their implementation has to be accurate to ensure stable convergence to a high-quality image. This paper reviews and compares some of the variations in the implementation of these operations in cone-beam computed tomography. We compare four algorithms for computing the system matrix, including a distance-driven algorithm, an algorithm based on cubic basis functions, another based on spherically symmetric basis functions, and a voxel-driven algorithm. The focus of our study is on understanding how the choice of the implementation of the system matrix will influence the performance of iterative image reconstruction algorithms, including such factors as the noise strength and spatial resolution in the reconstructed image. Our experiments with simulated and real cone-beam data reveal the significance of the speed-accuracy trade-off in the implementation of the system matrix. Our results suggest that fast convergence of iterative image reconstruction methods requires accurate implementation of forward- and back-projection operations, involving a direct estimation of the convolution of the footprint of the voxel basis function with the surface of the detectors. The required accuracy decreases as the resolution of the projection measurements increases beyond the resolution of the reconstructed image. Moreover, reconstruction of low-contrast objects needs more accurate implementation of these operations. Our results also show that, compared with regularized reconstruction methods, the behavior of iterative reconstruction algorithms that do not use a proper regularization is influenced more significantly by the implementation of the forward- and back-projection operations.
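    As a rough illustration of the voxel-driven approach discussed above, the sketch below projects a 2-D image onto a 1-D parallel-beam detector. The function name and the crude two-bin footprint are our own simplification, far less accurate than the footprint-convolution models the paper favours:

```python
import numpy as np

# Hedged sketch: a minimal 2-D, parallel-beam, voxel-driven forward
# projector. Each voxel centre is projected onto the detector and its value
# is spread over the two nearest bins by linear interpolation.

def forward_project(image, angle, n_bins):
    ny, nx = image.shape
    sino = np.zeros(n_bins)
    c, s = np.cos(angle), np.sin(angle)
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    for iy in range(ny):
        for ix in range(nx):
            v = image[iy, ix]
            if v == 0.0:
                continue
            # detector coordinate of the projected voxel centre
            t = (ix - cx) * c + (iy - cy) * s + (n_bins - 1) / 2.0
            i0 = int(np.floor(t))
            w = t - i0
            if 0 <= i0 < n_bins:
                sino[i0] += v * (1.0 - w)       # nearer bin
            if 0 <= i0 + 1 < n_bins:
                sino[i0 + 1] += v * w           # neighbouring bin
    return sino
```

The matching back-projection would transpose this operation, accumulating detector values back into voxels with the same weights.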

  4. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
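    The certification-trail idea can be sketched with a toy sorting example (our own illustration; the report's actual programs are not reproduced here): a primary run emits the answer plus a trail, and a simpler secondary run uses the trail to check the answer cheaply.

```python
# Hedged sketch of a certification trail using time and software redundancy.

def primary_sort(data):
    # first execution: produce the answer plus a trail (the permutation)
    trail = sorted(range(len(data)), key=lambda i: data[i])
    answer = [data[i] for i in trail]
    return answer, trail

def certify(data, answer, trail):
    # second execution: cheap checks against the trail -- the output must be
    # ordered and must be exactly the input rearranged by the permutation
    ordered = all(answer[i] <= answer[i + 1] for i in range(len(answer) - 1))
    permutation = sorted(trail) == list(range(len(data)))
    consistent = answer == [data[i] for i in trail]
    return ordered and permutation and consistent
```

A transient fault that corrupts either execution makes the two disagree, so detection does not depend on tuning to a particular error model.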

  5. Multiaxis, Lightweight, Computer-Controlled Exercise System

    NASA Technical Reports Server (NTRS)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is paid out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed.
The computer can be programmed, either locally or via

  6. Dynamic Operations Wayfinding System (DOWS) for Nuclear Power Plants

    SciTech Connect

    Boring, Ronald Laurids; Ulrich, Thomas Anthony; Lew, Roger Thomas

    2015-08-01

    A novel software tool is proposed to aid reactor operators in responding to upset plant conditions. The purpose of the Dynamic Operations Wayfinding System (DOWS) is to diagnose faults, prioritize those faults, identify paths to resolve those faults, and deconflict the optimal path for the operator to follow. The objective of DOWS is to take the guesswork out of the best way to combine procedures to resolve compound faults, mitigate low threshold events, or respond to severe accidents. DOWS represents a uniquely flexible and dynamic computer-based procedure system for operators.

  7. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
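    A minimal differential-evolution loop of the kind referred to above might look as follows (population size, F, and CR are illustrative choices of ours, not the paper's):

```python
import random

# Illustrative sketch: a tiny differential-evolution optimizer that
# "competes" candidate parameter sets against each other, keeping whichever
# scores better under the cost function (e.g. a simulator's figure of merit).

def differential_evolution(cost, bounds, pop_size=20, generations=100,
                           f=0.8, cr=0.9, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three distinct other members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # crossover: per-dimension mix of mutant and current member
            trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                     if rng.random() < cr else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            s = cost(trial)
            if s <= scores[i]:            # selection: survival of the fitter
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

On a large cluster, the cost evaluations (simulator runs) would be farmed out in parallel, which is where the amplification described above comes from.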

  8. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  9. CARMENES instrument control system and operational scheduler

    NASA Astrophysics Data System (ADS)

    Garcia-Piquer, Alvaro; Guàrdia, Josep; Colomé, Josep; Ribas, Ignasi; Gesa, Lluis; Morales, Juan Carlos; Pérez-Calpena, Ana; Seifert, Walter; Quirrenbach, Andreas; Amado, Pedro J.; Caballero, José A.; Reiners, Ansgar

    2014-07-01

    The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope in the Calar Alto Observatory (Spain) and it will be equipped with two spectrographs covering from the visible to the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose peak of the spectral energy distribution falls in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for an efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which is responsible for carrying out the operations of the different subsystems providing a tool to operate the instrument in an integrated manner from low to high user interaction level. The ICS interacts with the following subsystems: the near-IR and visible channels, composed by the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES operational scheduler is the second key element in the control layer described in this contribution. It is the main actor in the translation of the survey strategy into a detailed schedule for the achievement of the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target
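    As a toy illustration of scheduling under static constraints (a greedy sketch only; the actual CARMENES scheduler applies more sophisticated Artificial Intelligence techniques), one might filter targets by their visibility windows and fill the night in priority order:

```python
# Hedged sketch: greedy night planning. Target tuples and the
# visibility-window constraint are our own simplified stand-ins for the
# survey's static constraints.

def plan_night(targets, night_start, night_end):
    """targets: list of (name, priority, window_start, window_end, duration)."""
    schedule, t = [], night_start
    pending = sorted(targets, key=lambda x: -x[1])   # high priority first
    while pending:
        for k, (name, prio, ws, we, dur) in enumerate(pending):
            start = max(t, ws)
            if start + dur <= min(we, night_end):    # static constraint holds
                schedule.append((name, start, start + dur))
                t = start + dur
                pending.pop(k)
                break
        else:
            break   # no remaining target fits in the night
    return schedule
```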

  10. Electronic Medical Business Operations System

    SciTech Connect

    Cannon, D. T.; Metcalf, J. R.; North, M. P.; Richardson, T. L.; Underwood, S. A.; Shelton, P. M.; Ray, W. B.; Morrell, M. L.; Caldwell, III, D. C.

    2012-04-16

    Electronic management of medical records has taken a back seat both in private industry and in the government. Record volumes continue to rise every day and management of these paper records is inefficient and very expensive. In 2005, the White House announced support for the development of electronic medical records across the federal government. In 2006, the DOE issued 10 CFR 851 requiring all medical records be electronically available by 2015. The Y-12 National Security Complex is currently investing funds to develop a comprehensive EMR to incorporate the requirements of an occupational health facility which are common across the Nuclear Weapons Complex (NWC). Scheduling, workflow, and data capture from medical surveillance, certification, and qualification examinations are core pieces of the system. The Electronic Medical Business Operations System (EMBOS) will provide a comprehensive health tool solution to 10 CFR 851 for Y-12 and can be leveraged across the NWC; all sites in the NWC must meet the requirements of 10 CFR 851, which states that all medical records must be electronically available by 2015. There is also potential to leverage EMBOS to the private sector. EMBOS is being developed and deployed in phases. When fully deployed the EMBOS will be a state-of-the-art web-enabled integrated electronic solution providing a complete electronic medical record (EMR). EMBOS has been deployed and provides a dynamic electronic medical history and surveillance program (e.g., Asbestos, Hearing Conservation, and Respirator Wearer) questionnaire. Table 1 below lists EMBOS capabilities and data to be tracked. Data to be tracked: Patient Demographics – Current/Historical; Physical Examination Data; Employee Medical Health History; Medical Surveillance Programs; Patient and Provider Schedules; Medical Qualification/Certifications; Laboratory Data; Standardized Abnormal Lab Notifications; Prescription Medication Tracking and Dispensing; Allergies

  11. The Secure Distributed Operating System Design Project

    DTIC Science & Technology

    1988-06-01

    of the need for specialized computers and the desire to keep up with the quickly changing hardware technology. Distributed system applications also... "Best Available Technologies for Computer Security," IEEE Computer, vol. 16, no. 7, July 1983. [Landwehr et al. 84] Landwehr, C.E., Heitmeyer, C.L... 3.6.1.2 Mandatory Security... 137; 3.6.2 The SDOS Trusted Computing Base... 139; 3.6.3 Detailed Description of the Major

  12. Computer-Assisted Education System for Psychopharmacology.

    ERIC Educational Resources Information Center

    McDougall, William Donald

    An approach to the use of computer assisted instruction (CAI) for teaching psychopharmacology is presented. A project is described in which, using the TUTOR programing language on the PLATO IV computer system, several computer programs were developed to demonstrate the concepts of aminergic transmitters in the central nervous system. Response…

  13. Laptop Computer - Based Facial Recognition System Assessment

    SciTech Connect

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses. It was the most appropriate package based on the requirements of this specific application. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master operating in

  14. Computational biology approach to uncover hepatitis C virus helicase operation

    PubMed Central

    Flechsig, Holger

    2014-01-01

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings. PMID:24707123

  15. Computational biology approach to uncover hepatitis C virus helicase operation.

    PubMed

    Flechsig, Holger

    2014-04-07

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings.

  16. Autonomous Operations System: Development and Application

    NASA Technical Reports Server (NTRS)

    Toro Medina, Jaime A.; Wilkins, Kim N.; Walker, Mark; Stahl, Gerald M.

    2016-01-01

    Autonomous control systems provide a capability for self-governance beyond that of conventional control systems. As the complexity of mechanical and electrical systems increases, there develops a natural drive for robust control systems to manage complicated operations. By bridging the gap between conventional automated systems and knowledge-based self-aware systems, nominal control of operations can evolve to rely on safety-critical mitigation processes to support any off-nominal behavior. Current research and development efforts led by the Autonomous Propellant Loading (APL) group at NASA Kennedy Space Center aim to improve cryogenic propellant transfer operations by developing an automated control and health monitoring system. As an integrated system, the center aims to produce an Autonomous Operations System (AOS) capable of integrating health management operations with automated control to produce a fully autonomous system.

  17. Computer Information System For Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Cahill, P. T.; Knowles, R. J.; Tsen, O.

    1983-12-01

    To meet the complex needs of a nuclear medicine division serving a 1100-bed hospital, a computer information system has been developed in sequential phases. This database management system is based on a time-shared minicomputer linked to a broadband communications network. The database contains information on patient histories, billing, types of procedures, doses of radiopharmaceuticals, times of study, scanning equipment used, and technician performing the procedure. These patient records are cycled through three levels of storage: (a) an active file of 100 studies for those patients currently scheduled, (b) a temporary storage level of 1000 studies, and (c) an archival level of 10,000 studies containing selected information. Merging of this information with reports and various statistical analyses is possible. This first phase has been in operation for well over a year. The second phase is an upgrade of the size of the various storage levels by a factor of ten.
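    The three-level record cycling described above could be sketched as follows (capacities are scaled down and field names are our own, not those of the deployed system):

```python
from collections import OrderedDict

# Hedged sketch: studies demote from the active file to temporary storage
# when the active file overflows, and from there to the archive, where only
# selected information is kept.

class StudyStore:
    def __init__(self, caps=(100, 1000, 10000)):
        self.active = OrderedDict()
        self.temp = OrderedDict()
        self.archive = OrderedDict()
        self.caps = caps

    def add(self, study_id, record):
        self.active[study_id] = record
        if len(self.active) > self.caps[0]:
            sid, rec = self.active.popitem(last=False)   # oldest active study
            self.temp[sid] = rec
        if len(self.temp) > self.caps[1]:
            sid, rec = self.temp.popitem(last=False)     # oldest temp study
            # archival level retains selected fields only
            self.archive[sid] = {k: rec[k]
                                 for k in ("patient", "procedure") if k in rec}
```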

  18. Operating systems and network protocols for wireless sensor networks.

    PubMed

    Dutta, Prabal; Dunkels, Adam

    2012-01-13

    Sensor network protocols exist to satisfy the communication needs of diverse applications, including data collection, event detection, target tracking and control. Network protocols to enable these services are constrained by the extreme resource scarcity of sensor nodes, including energy, computing, communications and storage, which must be carefully managed and multiplexed by the operating system. These challenges have led to new protocols and operating systems that are efficient in their energy consumption, careful in their computational needs and miserly in their memory footprints, all while discovering neighbours, forming networks, delivering data and correcting failures.

  19. The engineering design integration (EDIN) system. [digital computer program complex

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  20. Computerized Operator Support System – Phase II Development

    SciTech Connect

    Ulrich, Thomas A.; Boring, Ronald L.; Lew, Roger T.; Thomas, Kenneth D.

    2015-02-01

    A computerized operator support system (COSS) prototype for nuclear control room process control is proposed and discussed. The COSS aids operators in addressing rapid plant upsets that would otherwise result in the shutdown of the power plant and interrupt electrical power generation, representing significant costs to the owning utility. In its current stage of development the prototype demonstrates four advanced functions operators can use to more efficiently monitor and control the plant. These advanced functions consist of: (1) a synthesized and intuitive high level overview display of system components and interrelations, (2) an enthalpy-based mathematical chemical and volume control system (CVCS) model to detect and diagnose component failures, (3) recommended strategies to mitigate component failure effects and return the plant back to pre-fault status, and (4) computer-based procedures to walk the operator through the recommended mitigation actions. The COSS was demonstrated to a group of operators and their feedback was collected. The operators responded positively to the COSS capabilities and features and indicated the system would be an effective operator aid. The operators also suggested several additional features and capabilities for the next iteration of development. Future versions of the COSS prototype will include additional plant systems, flexible computer-based procedure presentation formats, and support for simultaneous component fault diagnosis and dual fault synergistic mitigation action strategies to more efficiently arrest any plant upsets.

  1. Efficient simulation of open quantum system in duality quantum computing

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-11-01

    Practical quantum systems are open systems due to interactions with their environment. Understanding the evolution of open-system dynamics is important for understanding quantum noise processes, designing quantum error-correcting codes, and performing simulations of open quantum systems. Here we propose an efficient quantum algorithm for simulating the evolution of an open quantum system on a duality quantum computer. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally realized in duality quantum computing. Compared to Lloyd's quantum algorithm [Science 273, 1073 (1996)], the dependence on the dimension of the open quantum system in our algorithm is decreased. Moreover, our algorithm uses a truncated Taylor series of the evolution operators, exponentially improving the performance on precision compared with existing quantum simulation algorithms with unitary evolution operations.
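    The Kraus-operator evolution such an algorithm realizes, rho' = sum_k E_k rho E_k†, can be illustrated classically with NumPy (the amplitude-damping channel here is our own example, not one from the paper):

```python
import numpy as np

# Hedged sketch: apply a quantum channel to a density matrix via its Kraus
# operators. The completeness relation sum_k E_k^dagger E_k = I guarantees
# the trace of the density matrix is preserved.

def apply_kraus(rho, kraus_ops):
    return sum(E @ rho @ E.conj().T for E in kraus_ops)

gamma = 0.3                                      # damping probability
E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

rho = np.array([[0.0, 0.0], [0.0, 1.0]])         # excited state |1><1|
rho_out = apply_kraus(rho, [E0, E1])
# the excited-state population decays from 1 to 1 - gamma
```

This non-unitary map is exactly what cannot be written as a single unitary, and what the duality computer expresses as a linear combination of unitaries.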

  2. Evaluation of computer-based ultrasonic inservice inspection systems

    SciTech Connect

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T.

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  3. Non-Discretionary Access Control for Decentralized Computing Systems

    DTIC Science & Technology

    1977-05-01

    Analysis and Enhancements of Computer Operating Systems, The RISOS Project, Lawrence Livermore Laboratory, Livermore, Ca., NBSIR 76-1041, National Bureau... 301-307. 138 <Walter74> Walter, K. G., et al., Primitive Models for Computer Security, Case Western Reserve University, ESD-TR-74-117, HQ... Electronic Systems Division, Hanscom AFB, Ma., 23 January 1974. (NTIS# AD 778467) <Walter75> Walter, K. G., et al., Initial Structured Specifications for

  4. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  5. Middleware in Modern High Performance Computing System Architectures

    SciTech Connect

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2007-01-01

    A recent trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC "middleware" software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes.

  6. Spatial Operator Algebra for multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Jain, A.; Kreutz-Delgado, K.

    1992-01-01

    The Spatial Operator Algebra framework for the dynamics of general multibody systems is described. The use of a spatial operator-based methodology permits the formulation of the dynamical equations of motion of multibody systems in a concise and systematic way. The dynamical equations of progressively more complex rigid multibody systems are developed in an evolutionary manner beginning with a serial chain system, followed by a tree topology system and, finally, systems with arbitrary closed loops. Operator factorizations and identities are used to develop novel recursive algorithms for the forward dynamics of systems with closed loops. Extensions required to deal with flexible elements are also discussed.

  7. Anaesthetists' role in computer keyboard contamination in an operating room.

    PubMed

    Fukada, T; Iwakiri, H; Ozaki, M

    2008-10-01

    To store anaesthetic records in computers, anaesthetists usually input data while still wearing dirty wet gloves. No studies have explored computer contamination in the operating room (OR) or anaesthetists' awareness of the importance of handwashing or hand hygiene. We investigated four components of keyboard contamination: (1) degree of contamination, (2) effect of cleaning with ethyl alcohol, (3) bacterial transmission between gloves and keyboards by tapping keys, and (4) frequency of anaesthetists' performing hand hygiene. Most of the bacteria on keyboards were coagulase-negative staphylococci and Bacillus spp.; however, meticillin-resistant Staphylococcus aureus was also found. Cleaning keyboards with ethyl alcohol effectively reduced bacterial counts. Wet contaminated gloves and keyboards transmitted meticillin-susceptible Staphylococcus epidermidis from one to the other more readily than dry contaminated gloves and keyboards. Only 17% of anaesthetists performed hand hygiene before anaesthesia, although 64% and 69% did so after anaesthesia and before lunch, respectively. To prevent cross-contamination, keyboards should be routinely cleaned according to the manufacturer's instructions and disinfected once daily, or when visibly soiled with blood or secretions. Moreover, anaesthetists should be aware that they could spread microbes that might cause healthcare-associated infection in the OR. Anaesthetists should perform hand hygiene before and after anaesthesia and remove gloves after each procedure and before using the computer.

  8. Determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-12-20

    Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
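    The counter-based barrier described above can be illustrated with a minimal sketch, here using Python threads and a condition variable in place of the patent's DMA engines and barrier control packets. All names are illustrative, not taken from the patent:

```python
import threading

class CountingBarrier:
    """Toy sketch of the counter-based barrier idea: the counter is
    configured from the number of participants, each arrival (standing in
    for a received 'barrier control packet') modifies the counter, and
    everyone exits once the counter reaches the exit value."""

    def __init__(self, n):
        self.count = n                      # value set from set size
        self.cond = threading.Condition()

    def arrive_and_wait(self):
        with self.cond:
            self.count -= 1                 # one 'control packet' received
            if self.count == 0:
                self.cond.notify_all()      # counter hit exit value: exit
            else:
                while self.count > 0:
                    self.cond.wait()
```

In the patent's scheme each node decrements its own hardware counter once per broadcast packet received, so no shared lock is needed; the condition variable here merely emulates that convergence in a single address space.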

  9. Electrochemical cell operation and system

    DOEpatents

    Maru, Hansraj C.

    1980-03-11

    Thermal control in fuel cell operation is effected through the sensible heat of process gas by providing common input manifolding of the cell gas flow passage in communication with the cell electrolyte and an additional gas flow passage which is isolated from the cell electrolyte and in thermal communication with a heat-generating surface of the cell. The flow level in the cell gas flow passage is selected based on the desired output electrical energy, and the flow level in the additional gas flow passage is selected in accordance with the desired cell operating temperature.

  10. Unmanned Surface Vehicle Human-Computer Interface for Amphibious Operations

    DTIC Science & Technology

    2013-08-01

    Only figure captions and acronym-list fragments survive: the baseline MOCU human-computer interface (HCI) is a tiled display that uses both aerial photographs and Digital Nautical Chart (DNC) maps to control and monitor land, sea, and air vehicles.

  11. Computer controlled vent and pressurization system

    NASA Technical Reports Server (NTRS)

    Cieslewicz, E. J.

    1975-01-01

    The Centaur space launch vehicle airborne computer, which was primarily used to perform guidance, navigation, and sequencing tasks, was further used to monitor and control inflight pressurization and venting of the cryogenic propellant tanks. Computer software flexibility also provided a failure detection and correction capability necessary to adopt and operate redundant hardware techniques and enhance the overall vehicle reliability.

  12. Protected quantum computing: interleaving gate operations with dynamical decoupling sequences.

    PubMed

    Zhang, Jingfu; Souza, Alexandre M; Brandao, Frederico Dias; Suter, Dieter

    2014-02-07

    Implementing precise operations on quantum systems is one of the biggest challenges for building quantum devices in a noisy environment. Dynamical decoupling attenuates the destructive effect of the environmental noise, but so far, it has been used primarily in the context of quantum memories. Here, we experimentally demonstrate a general scheme for combining dynamical decoupling with quantum logical gate operations using the example of an electron-spin qubit of a single nitrogen-vacancy center in diamond. We achieve process fidelities >98% for gate times that are 2 orders of magnitude longer than the unprotected dephasing time T2.

  13. An Augmented Reality System for Military Operations in Urban Terrain

    DTIC Science & Technology

    2002-12-05

    Only fragments survive: many future military operations are expected to occur in urban environments, and these complex, 3D battlefields introduce many challenges. The system's shared information database holds a detailed 3D model and helps filter the information needed, reducing it to a minimum during high-stress situations.

  14. Overreaction to External Attacks on Computer Systems Could Be More Harmful than the Viruses Themselves.

    ERIC Educational Resources Information Center

    King, Kenneth M.

    1988-01-01

    Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)

  15. REAL TIME SYSTEM OPERATIONS 2006-2007

    SciTech Connect

    Eto, Joseph H.; Parashar, Manu; Lewis, Nancy Jo

    2008-08-15

    The Real Time System Operations (RTSO) 2006-2007 project focused on two parallel technical tasks: (1) Real-Time Applications of Phasors for Monitoring, Alarming and Control; and (2) Real-Time Voltage Security Assessment (RTVSA) Prototype Tool. The overall goal of the phasor applications project was to accelerate adoption and foster greater use of new, more accurate, time-synchronized phasor measurements by conducting research and prototyping applications on California ISO's phasor platform, the Real-Time Dynamics Monitoring System (RTDMS), that provide previously unavailable information on the dynamic stability of the grid. Feasibility assessment studies were conducted on potential application of this technology for small-signal stability monitoring, validating/improving existing stability nomograms, conducting frequency response analysis, and obtaining real-time sensitivity information on key metrics to assess grid stress. Based on study findings, prototype applications for real-time visualization and alarming, small-signal stability monitoring, measurement-based sensitivity analysis, and frequency response assessment were developed, factory- and field-tested at the California ISO and at BPA. The goal of the RTVSA project was to provide California ISO with a prototype voltage security assessment tool that runs in real time within California ISO's new reliability and congestion management system. CERTS conducted a technical assessment of appropriate algorithms and developed a prototype incorporating state-of-the-art algorithms (such as the continuation power flow, direct method, boundary orbiting method, and hyperplanes) into a framework most suitable for an operations environment. Based on study findings, a functional specification was prepared, which the California ISO has since used to procure a production-quality tool that is now part of a suite of advanced computational tools used by the California ISO for reliability and congestion management.

  16. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  17. SD-CAS: Spin Dynamics by Computer Algebra System.

    PubMed

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped into analytical expressions in terms of spin operator products. For the spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for symbolic spin dynamics computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples.
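    The non-matrix idea can be illustrated with a minimal sketch that encodes only the su(2) commutation relations [I_a, I_b] = i ε_abc I_c as structure constants, with no matrix representation anywhere. This is a hedged illustration, not SD-CAS's actual API or data layout:

```python
# Symbolic spin-1/2 commutators from structure constants alone.
# Results are dicts {axis: coefficient}, i.e. linear combinations of
# spin operators, never matrices. All names here are illustrative.
AXES = ("x", "y", "z")

def levi_civita(a, b, c):
    """Totally antisymmetric symbol eps_abc on axis labels."""
    perm = (AXES.index(a), AXES.index(b), AXES.index(c))
    if perm in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if perm in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def commutator(a, b):
    """[I_a, I_b] = i * eps_abc * I_c, returned symbolically."""
    return {c: 1j * levi_civita(a, b, c)
            for c in AXES if levi_civita(a, b, c) != 0}
```

The point, as in the abstract, is that storage stays linear in the number of operator labels, whereas a matrix representation would grow exponentially with the number of spins.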

  18. Deepen the Teaching Reform of Operating System, Cultivate the Comprehensive Quality of Students

    ERIC Educational Resources Information Center

    Liu, Jianjun

    2010-01-01

    The operating system course is the core course of the specialty of computer science and technology. Understanding and mastering the operating system directly affects students' further study of other courses. The course focuses more on theory; its contents are abstract and its knowledge system is complicated. Therefore,…

  19. A Management System for Computer Performance Evaluation.

    DTIC Science & Technology

    1981-12-01

    Only front-matter fragments survive: chapter headings covering the design of a CPE management system and the SEAFAC workload, computer hardware, and computer software, plus a passage noting that a CPE team can either use or learn to use the tools and techniques of computer performance evaluation.

  20. An information management and communications system for emergency operations

    SciTech Connect

    Gladden, C.A.; Doyle, J.F.

    1995-09-01

    In the mid-1980s the US Department of Energy (DOE) recognized the need to dramatically expand its Emergency Operations Centers to deal with the large variety of emergencies that DOE has an obligation to manage or to provide technical support for. This paper describes information management, display, and communications systems that have been implemented at the DOE Headquarters Forrestal Operations Center (OC), DOE Operations Offices, and critical laboratory locations. Major elements of the system at the DOE Headquarters facility include computer control, information storage and retrieval, processing, Local Area Networks (LANs), videoconferencing, video display, and audio systems. These Headquarters systems are linked by Wide Area Networks (WANs) to similar systems at the Operations Office and critical laboratory locations.

  1. Resource requirements for digital computations on electrooptical systems

    NASA Astrophysics Data System (ADS)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free-space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution, is examined. Irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.
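    The operation being bounded, convolving a w x w kernel with an n x n image, can be sketched directly. For brevity this computes the 'valid' region and omits the kernel flip (strictly making it a cross-correlation); it is an illustration of the workload, not of the optical architecture:

```python
def convolve2d(image, kernel):
    """Direct 2-D sliding-window product-sum of an n x n image with a
    w x w kernel, 'valid' region only. Kernel flip omitted for brevity,
    so strictly this is cross-correlation."""
    n, w = len(image), len(kernel)
    out = []
    for i in range(n - w + 1):
        row = []
        for j in range(n - w + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(w) for b in range(w)))
        out.append(row)
    return out
```

Each output pixel needs w*w multiply-adds, which is the communication- and computation-intensive pattern whose optical volume the paper bounds from below.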

  2. Self-pacing direct memory access data transfer operations for compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2015-02-17

    Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, a RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.

  3. Representation of feedback operators for hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1995-01-01

    We consider the problem of obtaining integral representation of feedback operators for damped hyperbolic control systems. We show that for the wave equation with Kelvin-Voigt damping and non-compact input operator, the feedback gain operator is Hilbert-Schmidt. This result is then used to provide an explicit integral representation for the feedback operator in terms of functional gains. Numerical results are given to illustrate the role that damping plays in the smoothness of these gains.
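    The Hilbert-Schmidt property is what underwrites the integral representation: on L²(Ω), Hilbert-Schmidt operators are exactly the integral operators with square-integrable kernels. Schematically, with notation chosen here for illustration rather than taken from the paper:

```latex
% A Hilbert-Schmidt feedback gain operator K on L^2(\Omega) admits a
% kernel ("functional gain") k \in L^2(\Omega \times \Omega):
(Kz)(x) = \int_\Omega k(x,\xi)\, z(\xi)\, d\xi ,
\qquad
\int_\Omega \int_\Omega |k(x,\xi)|^2 \, dx \, d\xi < \infty .
```

The "functional gains" of the abstract play the role of k, and the numerical results concern how the damping model affects the smoothness of k.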

  4. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
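    One simple static-allocation strategy of the kind surveyed here is greedy list scheduling: sort tasks by cost and repeatedly assign the next task to the currently least-loaded processor. A minimal sketch, with task names and the cost model chosen for illustration rather than taken from the paper:

```python
import heapq

def greedy_allocate(task_costs, n_procs):
    """Static task allocation sketch: assign each task (largest cost
    first) to the least-loaded processor, one classic way to equalize
    computing load among the processing elements."""
    heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for tid, cost in sorted(enumerate(task_costs),
                            key=lambda t: t[1], reverse=True):
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[tid] = p
        heapq.heappush(heap, (load + cost, p))
    return assignment
```

A dynamic scheme would instead re-run the least-loaded choice at task arrival time, using measured rather than predicted loads.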

  5. B190 computer controlled radiation monitoring and safety interlock system

    SciTech Connect

    Espinosa, D L; Fields, W F; Gittins, D E; Roberts, M L

    1998-08-01

    The Center for Accelerator Mass Spectrometry (CAMS) in the Earth and Environmental Sciences Directorate at Lawrence Livermore National Laboratory (LLNL) operates two accelerators and is in the process of installing two new additional accelerators in support of a variety of basic and applied measurement programs. To monitor the radiation environment in the facility in which these accelerators are located and to terminate accelerator operations if predetermined radiation levels are exceeded, an updated computer controlled radiation monitoring system has been installed. This new system also monitors various machine safety interlocks and again terminates accelerator operations if machine interlocks are broken. This new system replaces an older system that was originally installed in 1988. This paper describes the updated B190 computer controlled radiation monitoring and safety interlock system.

  6. LHCb Conditions database operation assistance systems

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. The second is an automated distribution system for the SQLite-based CondDB, also providing smart backup and checkout mechanisms for the CondDB managers and LHCb users, respectively. The third is a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving to the implementation stage.

  7. Controlled unknown quantum operations on hybrid systems

    NASA Astrophysics Data System (ADS)

    He, Yong; Luo, Ming-Xing

    2016-12-01

    Any unknown unitary operations conditioned on a control system can be deterministically performed if ancillary subspaces are available for the target systems [Zhou X Q, et al. 2011 Nat. Commun. 2 413]. In this paper, we show that previous optical schemes may be extended to general hybrid systems if unknown operations are provided by optical instruments. Moreover, a probabilistic scheme is proposed when the unknown operation may be performed on the subspaces of ancillary high-dimensional systems. Furthermore, the unknown operations conditioned on the multi-control system may be reduced to the case with a control system using additional linear circuit complexity. The new schemes may be more flexible for different systems or hybrid systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 61303039 and 61201253), Chunying Fellowship, and Fundamental Research Funds for the Central Universities, China (Grant No. 2682014CX095).

  8. Systemic Operational Design: An Introduction

    DTIC Science & Technology

    2005-05-26

    Only fragments survive: systems adapt over time to try to take advantage of the changing environment, a view the monograph draws from Nobel Laureate Murray Gell-Mann's essay "The Simple and the Complex."

  9. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  10. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  11. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  12. 47 CFR 32.2220 - Operator systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Operator systems. 32.2220 Section 32.2220 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2220 Operator...

  13. Multitasking Operating Systems for the IBM PC.

    ERIC Educational Resources Information Center

    Owen, G. Scott

    1985-01-01

    The ability of a microcomputer to execute several programs at the same time is called "multitasking." The nature and use of one multitasking operating system, Concurrent PC-DOS from Digital Research (the developers of the CP/M operating system), are discussed. (JN)

  14. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  15. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  16. Windows XP Operating System Security Analysis

    DTIC Science & Technology

    2002-09-01

    The purpose of this research is to determine whether Windows XP, when used as a workstation operating system in domain-based networks, provides adequate security policy.

  17. A Framework for Adaptable Operating and Runtime Systems

    SciTech Connect

    Sterling, Thomas

    2014-03-04

    The emergence of new classes of HPC systems, where performance improvement is enabled by Moore's Law, is manifest in multi-core-based architectures, including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s, multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model, with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in the number of cores used to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken under the leadership of Sandia National Laboratories, in partnership with the University of New Mexico, to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. Composable operating systems are intended to provide a minimalist set of services specifically required by a given application, to preclude the overheads and operational uncertainties ("OS noise") that have been demonstrated to degrade efficiency and operational consistency. The project was an exploration of possible strategies and methods for composable lightweight kernel operating systems in support of extreme-scale systems.

  18. Computer Microvision for Microelectromechanical Systems (MEMS)

    DTIC Science & Technology

    2003-11-01

    Final technical report AFRL-IF-RS-TR-2003-270 (November 2003), covering May 1997 to June 2003; author Dennis M. Freeman. The effort developed a patented multi-beam interferometric method for imaging MEMS and launched a collaborative Computer Microvision Remote Test Facility.

  19. Small Aircraft Transportation System, Higher Volume Operations Concept: Normal Operations

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Jones, Kenneth M.; Consiglio, Maria C.; Williams, Daniel M.; Adams, Catherine A.

    2004-01-01

    This document defines the Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) concept for normal conditions. In this concept, a block of airspace would be established around designated non-towered, non-radar airports during periods of poor weather. Within this new airspace, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. Using onboard equipment and procedures, they would then approach and land at the airport. Departures would be handled in a similar fashion. The details for this operational concept are provided in this document.

  20. Computer Literacy in a Distance Education System

    ERIC Educational Resources Information Center

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  1. Computer-Controlled, Motorized Positioning System

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    Computer-controlled, motorized positioning system developed for use in robotic manipulation of samples in custom-built secondary-ion mass spectrometry (SIMS) system. Positions sample repeatably and accurately, even during analysis in three linear orthogonal coordinates and one angular coordinate under manual local control, or microprocessor-based local control or remote control by computer via general-purpose interface bus (GPIB).

  2. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems.
Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data
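    The object-plus-thread model described above can be sketched in ordinary code. The sketch below is illustrative only (the names `PersistentObject` and `invoke` are invented, and plain process memory stands in for the persistent single-level store), but it shows the key idea: threads carry computation through shared objects rather than exchanging files or messages.

```python
import threading

class PersistentObject:
    """Toy stand-in for a Clouds object (invented name): a long-lived
    address space holding data together with the code that operates on
    it. Ordinary process memory stands in for the persistent store."""
    def __init__(self, name):
        self.name = name
        self.state = {}                  # the object's persistent data
        self._lock = threading.Lock()

    def invoke(self, operation, *args):
        # a thread "enters" the object and runs code against its state
        with self._lock:
            return operation(self.state, *args)

counter = PersistentObject("counter")

def increment(state, amount):
    state["count"] = state.get("count", 0) + amount

# lightweight threads carry computation through the shared object; there
# is no separate file or message-passing API
workers = [threading.Thread(target=counter.invoke, args=(increment, 1))
           for _ in range(10)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.state["count"])  # 10
```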

  3. Biomolecular computing systems: principles, progress and potential.

    PubMed

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  4. Knowledge systems support for mission operations automation

    NASA Astrophysics Data System (ADS)

    Atkinson, David J.

    1990-10-01

    A knowledge system which utilizes artificial intelligence technology to automate a subset of real-time mission operations functions is described. An overview of spacecraft telecommunications operations at the Jet Propulsion Laboratory (JPL) highlights requirements for automation. The knowledge system, called the Spacecraft Health Automated Reasoning Prototype (SHARP), developed to explore methods for automated health and status analysis, is outlined. The advantages of the system were demonstrated during the Voyager 2 spacecraft's encounter with the planet Neptune. The design of the fault detection and diagnosis portions of SHARP is discussed, along with SHARP's performance during the encounter and the issues and benefits arising from applying knowledge systems to mission operations automation.

  5. Advanced valve motor operator diagnostic system

    SciTech Connect

    Thibault, C.

    1989-01-01

    A brief summary of the current use of diagnostic applications to motor-operated valves (MOVs) to satisfy the requirements of IE Bulletin 85-03, IE 85-03 (Supplement 1), and preventive maintenance applications is presented in this paper. This paper explains a new system for diagnostics, signature analysis, and direct measurement of actual load on the MOV in the closed direction. The advanced valve motor operator diagnostic system (AVMODS) comprises two complementary segments: (1) the valve motor operator diagnostic system (V-MODS) and (2) motor current signature analysis (MCSA). AVMODS technical considerations regarding V-MODS and MCSA are discussed.

  6. Solid Waste Operations Complex (SWOC) Facilities Sprinkler System Hydraulic Calculations

    SciTech Connect

    KERSTEN, J.K.

    2003-07-11

    The attached calculations demonstrate sprinkler system operational water requirements as determined by hydraulic analysis. Hydraulic calculations for the waste storage buildings of the Central Waste Complex (CWC), T Plant, and the Waste Receiving and Packaging (WRAP) facility are based upon flow testing performed by Fire Protection Engineers from the Hanford Fire Marshal's office. The calculations received peer review and approval prior to release. The hydraulic analysis program 'HASS Computer Program' (under license number 1609051210) is used to perform all analyses contained in this document. Hydraulic calculations demonstrate sprinkler system operability based upon each individual system design and available water supply under the most restrictive conditions.
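    The abstract does not reproduce the calculations themselves, but sprinkler hydraulic analyses of this kind are conventionally built on the Hazen-Williams friction-loss formula (as in NFPA 13). A minimal sketch, with purely illustrative pipe values:

```python
def hazen_williams_psi_per_ft(q_gpm, c_factor, d_inches):
    """Friction loss per foot of pipe (psi/ft) from the Hazen-Williams
    formula used in sprinkler hydraulic calculations (NFPA 13):
        p = 4.52 * Q^1.85 / (C^1.85 * d^4.87)
    with Q in gpm and d the internal diameter in inches."""
    return 4.52 * q_gpm**1.85 / (c_factor**1.85 * d_inches**4.87)

# Illustrative values: 150 gpm through 100 ft of 2-inch Schedule 40
# steel pipe (internal diameter 2.067 in, C-factor 120)
loss_psi = 100 * hazen_williams_psi_per_ft(150, 120, 2.067)
print(round(loss_psi, 1))
```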

  7. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The NASA structural analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third-generation computers are its hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are identified as candidates for vector-processing optimization.

  8. Computers as Augmentative Communication Systems.

    ERIC Educational Resources Information Center

    Vanderheiden, Gregg C.

    The paper describes concepts and principles resulting in successful applications of computer technology to the needs of the disabled. The first part describes what a microcomputer is and is not, emphasizing the microcomputer as a machine that simply carries out instructions, the role of programming, and the use of prepared application programs.…

  9. Portability of Operating System Software.

    DTIC Science & Technology

    1981-06-01

    Pascal and ports nicely to P-code type machines such as the Western Digital Pascal Microengine. Four feasibility studies were carried out under this...experiment involved studying the Concurrent Pascal compiler to ascertain the effort to generate P-code for the Western Digital. This took about four (4) months...systems could then be measured on both high-level language machines (such as the INTEL 432 (ADA) machine and the Western Digital ADA microengine) and on

  10. Surface Operations Systems Improve Airport Efficiency

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With Small Business Innovation Research (SBIR) contracts from Ames Research Center, Mosaic ATM of Leesburg, Virginia created software to analyze surface operations at airports. Surface surveillance systems, which report locations every second for thousands of air and ground vehicles, generate massive amounts of data, making gathering and analyzing this information difficult. Mosaic's Surface Operations Data Analysis and Adaptation (SODAA) tool is an off-line support tool that can analyze how well the airport surface operation is working and can help redesign procedures to improve operations. SODAA helps researchers pinpoint trends and correlations in vast amounts of recorded airport operations data.

  11. Architectural requirements for the Red Storm computing system.

    SciTech Connect

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed-memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development, and production operation of similar machines at Sandia. Red Storm will have an unusually high-bandwidth, low-latency interconnect; specially designed hardware and software reliability features; a lightweight-kernel compute node operating system; and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  12. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
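    The computation-to-communication comparison made in the abstract can be sketched for a 5-point stencil. The formulas below are the standard area-versus-perimeter argument, not taken from the paper: assuming an n x n mesh divided among p processors, square blocks trade communication along four short edges against strips' two full-length edges.

```python
import math

def block_partition_ratio(n, p):
    """n x n mesh split into p square blocks (side n/sqrt(p)).
    5-point stencil: per step each block computes side^2 points and
    exchanges its 4 edges of length side."""
    side = n / math.sqrt(p)
    return (side * side) / (4 * side)

def strip_partition_ratio(n, p):
    """n x n mesh split into p horizontal strips (width n/p).
    Each interior strip exchanges its 2 long edges of length n."""
    return (n * (n / p)) / (2 * n)

n, p = 1024, 16
print(block_partition_ratio(n, p))   # 64.0
print(strip_partition_ratio(n, p))   # 32.0 -> blocks communicate relatively less here
```

Setting the two ratios equal shows strips only win for very small processor counts (p < 4 under these assumptions); for larger p the square blocks' higher computation-to-communication ratio dominates.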

  13. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung F.

    1990-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.

  14. Partitioning of regular computation on multiprocessor systems

    SciTech Connect

    Lee, F. (Computer Systems Lab.)

    1990-07-01

    Problem partitioning of regular computation over two-dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.

  15. Efficient O(N) recursive computation of the operational space inertial matrix

    SciTech Connect

    Lilly, K.W.; Orin, D.E.

    1993-09-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N^3) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.

  16. NASA Customer Data and Operations System

    NASA Technical Reports Server (NTRS)

    Butler, Madeline J.; Stallings, William H.

    1991-01-01

    In addition to the currently provided NASA services such as Communications and Tracking and Data Relay Satellite System services, the NASA's Customer Data and Operations System (CDOS) will provide the following services to the user: Data Delivery Service, Data Archive Service, and CDOS Operations Management Service. This paper describes these services in detail and presents respective block diagrams. The CDOS services will support a variety of multipurpose missions simultaneously with centralized and common hardware and software data-driven systems.

  17. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    NASA Technical Reports Server (NTRS)

    Stoughton, John W. (Inventor); Mielke, Roland V. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computationally marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
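    The assignment-manager idea can be sketched as a small dataflow dispatcher. This is an illustrative sketch with invented names (`run_dataflow`, the operation labels), not the patented implementation: each primitive operation is handed to a functional unit only once all of its input data are available.

```python
from collections import deque

def run_dataflow(ops, deps, execute):
    """Toy assignment manager: dispatch each primitive operation to a
    functional unit as soon as all of its input data are available,
    i.e. all of its predecessor operations have completed."""
    waiting = {op: set(deps.get(op, ())) for op in ops}
    ready = deque(op for op in ops if not waiting[op])
    order = []
    while ready:
        op = ready.popleft()
        execute(op)                 # a functional unit performs the work
        order.append(op)
        for succ in ops:            # completion makes successors' inputs available
            if op in waiting[succ]:
                waiting[succ].discard(op)
                if not waiting[succ]:
                    ready.append(succ)
    return order

# result = (a + b) - (c * d): "sub" needs the outputs of "add" and "mul"
order = run_dataflow(["add", "mul", "sub"], {"sub": {"add", "mul"}},
                     execute=lambda op: None)
print(order)  # ['add', 'mul', 'sub']
```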

  18. Computer Bits: The Ideal Computer System for Your Center.

    ERIC Educational Resources Information Center

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  19. A System for Monitoring and Management of Computational Grids

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Biegel, Bryan (Technical Monitor)

    2002-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Users are also interested in the operation of resources and services so that they can choose the most appropriate ones to use. In this paper we describe a prototype system to monitor and manage computational grids and describe the general software framework for control and observation in distributed environments that it is based on.

  20. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.

  1. Study of operational parameters impacting helicopter fuel consumption. [using computer techniques (computer programs)

    NASA Technical Reports Server (NTRS)

    Cross, J. L.; Stevens, D. D.

    1976-01-01

    A computerized study of operational parameters affecting helicopter fuel consumption was conducted as an integral part of the NASA Civil Helicopter Technology Program. The study utilized the Helicopter Sizing and Performance Computer Program (HESCOMP) developed by the Boeing-Vertol Company and NASA Ames Research Center. An introduction to HESCOMP is incorporated in this report. The results presented were calculated using the NASA CH-53 civil helicopter research aircraft specifications. Plots from which optimum flight conditions for minimum fuel use can be obtained are presented for this aircraft. The results of the study are considered to be generally indicative of trends for all helicopters.

  2. The Operating System Jungle: Finding a Common Path Keeps Getting More Difficult.

    ERIC Educational Resources Information Center

    Pournelle, Jerry

    1984-01-01

    Describes the computer field before the advent of CP/M (Control Program for Microcomputers), an operating system which facilitated compatibility between different computers. CP/M's functions and flaws and the advent of Apple DOS and UCSD Pascal, two additional widely used operating systems, and the significance of their development are also…

  3. Industrial information database service by personal computer network 'Saitamaken Industrial Information System'

    NASA Astrophysics Data System (ADS)

    Sugahara, Keiji

    The Saitamaken Industrial Information System provides online database services. Rather than relying on computers for every operation, it uses computers, optical disk files, or facsimile machines as appropriate for each task. Information is delivered through a variety of outputs: image information is sent from optical disk files to facsimiles, while other information is delivered from computers to terminals as well as to facsimiles. With computers at its core, the system supports integrated operations. The terminal-side system was developed separately, with functions such as turnkey-style operation, downloading of statistical information, and retrieval of the latest menu.

  4. Three computer codes to read, plot and tabulate operational test-site recorded solar data

    NASA Technical Reports Server (NTRS)

    Stewart, S. D.; Sampson, R. S., Jr.; Stonemetz, R. E.; Rouse, S. L.

    1980-01-01

    Computer programs used to process data that will be used in the evaluation of collector efficiency and solar system performance are described. The program, TAPFIL, reads data from an IBM 360 tape containing information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites. Two other programs, CHPLOT and WRTCNL, plot and tabulate the data from the direct access, unformatted TAPFIL file. The methodology of the programs, their inputs, and their outputs are described.

  5. Establishing performance requirements of computer based systems subject to uncertainty

    SciTech Connect

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.
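    As an illustration of the probabilistic-requirement idea (not the paper's OBD model; the detector, noise level, and drift rate below are all assumptions), a Monte Carlo sketch can turn parameter and noise variation into a percentile-based performance requirement:

```python
import random

def simulate_detection_delay(threshold, noise_sd, drift, trials=500, seed=1):
    """Monte Carlo sketch: steps until a simple threshold detector flags
    a slowly drifting (degrading) signal observed through Gaussian
    sensor noise."""
    rng = random.Random(seed)
    delays = []
    for _ in range(trials):
        level, t = 0.0, 0
        while True:
            t += 1
            level += drift                        # latent degradation
            if level + rng.gauss(0, noise_sd) > threshold:
                delays.append(t)                  # fault flagged at step t
                break
    return delays

delays = sorted(simulate_detection_delay(threshold=1.0, noise_sd=0.2, drift=0.01))
# a probabilistic requirement: 95% of faults must be flagged within n95 steps
n95 = delays[int(0.95 * len(delays)) - 1]
print(n95)
```

The requirement is then stated over the distribution ("95% of faults detected within n95 steps") rather than for a single nominal parameter set.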

  6. Performing a global barrier operation in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
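    The claimed scheme can be sketched with ordinary threads. This is a simplified illustration, not the patented implementation: every task on a simulated compute node joins a local barrier, and only each node's master task then joins the global barrier on behalf of its node.

```python
import threading

def hierarchical_barrier_demo(n_nodes=3, tasks_per_node=4):
    """Simplified sketch: all tasks on a simulated compute node
    synchronize at a local barrier, and only the node's master task
    (task 0) then joins the global barrier."""
    global_barrier = threading.Barrier(n_nodes)   # one party per master
    results = []
    lock = threading.Lock()

    def run_node(node_id):
        local_barrier = threading.Barrier(tasks_per_node)
        arrived = []

        def task(task_id):
            local_barrier.wait()       # every task joins the local barrier
            if task_id == 0:           # only the master joins the global one
                global_barrier.wait()
            arrived.append(task_id)

        workers = [threading.Thread(target=task, args=(t,))
                   for t in range(tasks_per_node)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        with lock:
            results.append((node_id, sorted(arrived)))

    nodes = [threading.Thread(target=run_node, args=(n,))
             for n in range(n_nodes)]
    for nd in nodes:
        nd.start()
    for nd in nodes:
        nd.join()
    return results

results = hierarchical_barrier_demo()
print(len(results))  # one entry per node
```

Only the masters participate in the expensive global synchronization, which is the point of the two-level design.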

  7. Category-theoretic models of algebraic computer systems

    NASA Astrophysics Data System (ADS)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  8. Operation of staged membrane oxidation reactor systems

    SciTech Connect

    Repasky, John Michael

    2012-10-16

    A method of operating a multi-stage ion transport membrane oxidation system. The method comprises providing a multi-stage ion transport membrane oxidation system with at least a first membrane oxidation stage and a second membrane oxidation stage, operating the ion transport membrane oxidation system at operating conditions including a characteristic temperature of the first membrane oxidation stage and a characteristic temperature of the second membrane oxidation stage; and controlling the production capacity and/or the product quality by changing the characteristic temperature of the first membrane oxidation stage and/or changing the characteristic temperature of the second membrane oxidation stage.

  9. System security in the space flight operations center

    NASA Technical Reports Server (NTRS)

    Wagner, David A.

    1988-01-01

    The Space Flight Operations Center is a networked system of workstation-class computers that will provide ground support for NASA's next generation of deep-space missions. The author recounts the development of the SFOC system security policy and discusses the various management and technology issues involved. Particular attention is given to risk assessment, security plan development, security implications of design requirements, automatic safeguards, and procedural safeguards.

  10. Utilization of Computer Technology in the Third World: An Evaluation of Computer Operations at the University of Honduras.

    ERIC Educational Resources Information Center

    Shermis, Mark D.

    This report of the results of an evaluation of computer operations at the University of Honduras (Universidad Nacional Autonoma de Honduras) begins by discussing the problem--i.e., poor utilization of the campus mainframe computer--and listing the hardware and software available in the computer center. Data collection methods are summarized,…

  11. Rule-based approach to operating system selection: RMS vs. UNIX

    SciTech Connect

    Phifer, M.S.; Sadlowe, A.R.; Emrich, M.L.; Gadagkar, H.P.

    1988-10-01

    A rule-based system is under development for choosing computer operating systems. Following a brief historical account, this paper compares and contrasts the essential features of two operating systems, highlighting particular applications. AT&T's UNIX system and Datapoint Corporation's Resource Management System (RMS) are used as illustrative examples. 11 refs., 3 figs.

  12. Analyzing the security of an existing computer system

    NASA Technical Reports Server (NTRS)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  13. MTA Computer Based Evaluation System.

    ERIC Educational Resources Information Center

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  14. VLT Data Flow System Begins Operation

    NASA Astrophysics Data System (ADS)

    1999-06-01

    their proposed observations and provide accurate estimates of the amount of telescope time they will need to complete their particular scientific programme. Once the proposals have been reviewed by the OPC and telescope time is awarded by the ESO management according to the recommendation by this Committee, the successful astronomers begin to assemble detailed descriptions of their intended observations (e.g. position in the sky, time and duration of the observation, the instrument mode, etc.) in the form of computer files called Observation Blocks (OBs). The software to make OBs is distributed by ESO and used by the astronomers at their home institutions to design their observing programs well before the observations are scheduled at the telescope. The OBs can then be directly executed by the VLT, resulting in increased efficiency in the collection of raw data (images, spectra) from the science instruments on the VLT. The activation (execution) of OBs can be done by the astronomer at the telescope on a particular set of dates (visitor mode operation), or it can be done by ESO science operations astronomers at times which are optimally suited for the particular scientific programme (service mode operation). An enormous VLT Data Archive. [ESO PR Photo 25b/99 caption: The first of several DVD storage robots at the VLT Data Archive at the ESO headquarters holds 1100 DVDs (with a total capacity of about 16 Terabytes) that may be rapidly accessed by the archive software system, ensuring fast availability of the requested data.] The raw data generated at the telescope are stored by an archive system that sends these data regularly back to ESO headquarters in Garching (Germany) in the form of CD and DVD ROM disks.
While the well-known Compact Disks (CD ROMs) store about 600 Megabytes (600,000,000 bytes) each, the

  15. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.
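    The core decision in the claim is that the processor makes the power storage act as either a generator or a load based on a sensed condition. The sketch below is purely illustrative (grid frequency as the sensed condition, and the deadband thresholds, are assumptions, not from the patent):

```python
def storage_mode(sensed_frequency_hz, nominal_hz=60.0, deadband_hz=0.02):
    """Illustrative decision rule only: choose whether the power storage
    acts as a generator or a load from one sensed condition (here, grid
    frequency; these thresholds are assumptions)."""
    error = sensed_frequency_hz - nominal_hz
    if error > deadband_hz:
        return "load"        # absorb surplus power from the grid
    if error < -deadband_hz:
        return "generator"   # inject stored power into the grid
    return "idle"            # within the deadband: do nothing

print(storage_mode(60.05))   # high frequency -> act as a load
print(storage_mode(59.90))   # low frequency  -> act as a generator
```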

  16. ESPC Computational Efficiency of Earth System Models

    DTIC Science & Technology

    2014-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ...optimization in this system. Figure 1 is a plot showing wallclock seconds per forecast day for a T639L64 (~21 km at the equator) NAVGEM run.

  17. Transition Operators Assigned to Physical Systems

    NASA Astrophysics Data System (ADS)

    Chajda, Ivan; Paseka, Jan

    2016-12-01

    By a physical system we mean a set of propositions about a given system whose truth values depend on the states of the system. Since every physical system can go from one state to another, there exists a binary relation on the set of states describing these transitions. Our aim is to assign to every such system an operator on the set of propositions that is fully determined by this relation. We establish conditions under which the given relation can be recovered by means of this transition operator.
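    The construction can be made concrete on a finite system. In the sketch below (names invented, and the particular operator is one natural choice, not necessarily the paper's), a proposition is the set of states where it holds, and the operator maps p to the states that can move into p in one step:

```python
def transition_operator(relation):
    """Assign to a state-transition relation R the operator T on
    propositions: a proposition p is the set of states where it is
    true, and T(p) = { s : (s, s') in R for some s' in p }, i.e. the
    states from which the system can enter p in one step."""
    def T(p):
        return {s for (s, t) in relation if t in p}
    return T

# three states; s0 -> s1 -> s2, and s2 loops on itself
R = {("s0", "s1"), ("s1", "s2"), ("s2", "s2")}
T = transition_operator(R)
print(sorted(T({"s2"})))   # ['s1', 's2']  (states that can reach s2)
print(sorted(T({"s1"})))   # ['s0']
```

Recovering the relation from T amounts to noting that (s, t) is in R exactly when s is in T({t}).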

  18. A distributed telerobotics system for space operations

    NASA Technical Reports Server (NTRS)

    Wise, James D.; Ciscon, Lawrence A.; Graves, Sean

    1992-01-01

    Robotic systems for space operations will require a combination of teleoperation, closely supervised autonomy, and loosely supervised autonomy. They may involve multiple robots, multiple controlling sites, and long communication delays. We have constructed a distributed telerobotics system as a framework for studying these problems. Our system is based on a modular interconnection scheme which allows the components of either manual or autonomous control systems to communicate and share information. It uses a wide area network to connect robots and operators at several different sites. This presentation will describe the structure of our system, the components used in our configurations, and results of some of our teleoperation experiments.

  19. Computer-Based Medical System

    NASA Technical Reports Server (NTRS)

    1998-01-01

    SYMED, Inc., developed a unique electronic medical records and information management system. The S2000 Medical Interactive Care System (MICS) incorporates both a comprehensive and interactive medical care support capability and an extensive array of digital medical reference materials in either text or high-resolution graphic form. The system was designed, in cooperation with NASA, to improve the effectiveness and efficiency of physician practices. The S2000 is a Microsoft Windows-based software product which combines electronic forms, medical documents, and records management, and features a comprehensive medical information system for medical diagnostic support and treatment. SYMED, Inc. offers access to its medical systems to all companies seeking competitive advantages.

  20. Computer program determines chemical composition of physical system at equilibrium

    NASA Technical Reports Server (NTRS)

    Kwong, S. S.

    1966-01-01

    A FORTRAN 4 digital computer program calculates the equilibrium composition of complex, multiphase chemical systems. The method is free-energy minimization, with the solution of the problem reduced to mathematical operations, without concern for the chemistry involved. Certain thermodynamic properties are also determined as byproducts of the main calculations.
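The free-energy-minimization approach can be sketched for a single toy isomerization A <-> B with ideal mixing; the Gibbs energies and the golden-section minimizer below are our invented illustration, not the NASA program's actual method.

```python
import math

# Toy illustration of the free-energy-minimization idea for a single
# isomerization A <-> B with ideal mixing (made-up Gibbs energies).
R, T = 8.314, 298.15
g_A, g_B = 0.0, -1000.0   # standard molar Gibbs energies, J/mol (invented)

def gibbs(x):
    """Total Gibbs energy per mole of mixture at conversion x (0 < x < 1)."""
    return (1 - x) * g_A + x * g_B + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def minimize(f, lo, hi, iters=200):
    """Golden-section search: purely mathematical, no chemistry knowledge needed."""
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if f(a) < f(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

x_eq = minimize(gibbs, 1e-9, 1 - 1e-9)
K = math.exp(-(g_B - g_A) / (R * T))      # analytic equilibrium ratio x/(1-x)
print(round(x_eq, 4), round(K / (1 + K), 4))  # the two values should agree
```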

  1. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  2. Selecting and Implementing the Right Computer System.

    ERIC Educational Resources Information Center

    Evancoe, Donna Clark

    1985-01-01

    Steps that should be followed in choosing and implementing an administrative computer system are discussed. Three stages are involved: institutional assessment, system selection, and implementation. The first step is to define the current status of the data processing systems and the management information systems at the institutions. Future…

  3. Public Address Systems. Specifications - Installation - Operation.

    ERIC Educational Resources Information Center

    Palmer, Fred M.

    Provisions for public address in new campus building construction (specifications, installation, and operation of public address systems) are discussed in non-technical terms. Consideration is given to microphones, amplifiers, and loudspeakers, and to the placement and operation of various combinations. (FS)

  4. [Computer-assisted monitoring systems. Use of computer networks and internet technologies].

    PubMed

    Flerov, E V; Sablin, I N; Broĭtman, O G; Tolmachev, V A; Batchaev, Sh S

    2005-01-01

    The automated workplace (AWP) of the anesthesiologist, developed by the early 1990s, provided data collection and processing, viewing of all monitors, and printing of the anesthesiological chart (AC). The AWP is subject to continuous modification and adaptation to varying conditions. Computer monitoring, including various measuring devices equipped with the RS-232 serial interface, was implemented at the Russian Research Center for Surgery. Rapid progress in computer network technologies made it necessary to adapt the AWP to operation in computer networks. Since 1999 the computer network has been connected to the Internet. The use of computer technologies, including the Internet, provides remote access to the AC, thereby creating conditions for remote monitoring. The AWP of the anesthesiologist can be regarded as an automated control system for the patient's state operated by the anesthesiologist. Specific features of data processing in the AWP are described. The AWP system is planned to be multiprocessor with distributed data flow. The suggested structure of a computer network system for operating rooms, meeting the requirements of WWW technology and connected to the Internet, is a promising approach to remote monitoring in medicine.

  5. Intelligent Command and Control Systems for Satellite Ground Operations

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1999-01-01

    This grant, Intelligent Command and Control Systems for Satellite Ground Operations, funded by NASA Goddard Space Flight Center, has spanned almost a decade. During this time, it has supported a broad range of research addressing the changing needs of NASA operations. It is important to note that many of NASA's evolving needs, for example, use of automation to drastically reduce (e.g., 70%) operations costs, are similar requirements in both government and private sectors. Initially the research addressed the appropriate use of emerging and inexpensive computational technologies, such as X Windows, graphics, and color, together with COTS (commercial-off-the-shelf) hardware and software such as standard Unix workstations to re-engineer satellite operations centers. The first phase of research supported by this grant explored the development of principled design methodologies to make effective use of emerging and inexpensive technologies. The ultimate performance measures for new designs were whether or not they increased system effectiveness while decreasing costs. GT-MOCA (The Georgia Tech Mission Operations Cooperative Associate) and GT-VITA (Georgia Tech Visual and Inspectable Tutor and Assistant), whose latter stages were supported by this research, explored model-based design of collaborative operations teams and the design of intelligent tutoring systems, respectively. Implemented in proof-of-concept form for satellite operations, empirical evaluations of both, using satellite operators for the former and personnel involved in satellite control operations for the latter, demonstrated unequivocally the feasibility and effectiveness of the proposed modeling and design strategy underlying both research efforts. The proof-of-concept implementation of GT-MOCA showed that the methodology could specify software requirements that enabled a human-computer operations team to perform without any significant performance differences from the standard two-person satellite

  6. Hybrid Systems: Computation and Control.

    DTIC Science & Technology

    2007-11-02

    Technische Universität Berlin, Forschungsgruppe Softwaretechnik (Sekr. FR 5-6), Franklinstr. 28/29, D-10587 Berlin, Germany, e-mail: friesen@cs.tu-berlin.de Abstract. The...University of California at Berkeley, Department of Electrical Engineering and Computer Sciences, Berkeley, CA 94720, USA, e-mail: {tah,sastry

  7. Computation of Weapons Systems Effectiveness

    DTIC Science & Technology

    2013-09-01

    Deflection Compute Adjusted REP/DEP and CEP Obtain Ballistic Partials from Zero-Drag Trajectory Program σx-Harp Angle, σx-Slant Range, σVx-aircraft...The last method is to take the harp angle of the weapon as the impact angle to cater for the scenario where the weapon flies directly to the...target upon weapon release, as laser guidance is available throughout its flight. The harp angle is the line-of-sight (LOS) angle between the aircraft and
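For context, the quantities named in the snippet relate through standard textbook approximations: for a normal error distribution the probable error is about 0.6745 sigma, and for roughly circular dispersion CEP is about 0.5887(sigma_x + sigma_y). A sketch under those assumptions (not the report's actual procedure):

```python
# Textbook-style sketch (not the report's computation): recover
# range/deflection sigmas from REP/DEP and approximate CEP.
# Probable error of a 1-D normal ~ 0.6745 * sigma; for roughly circular
# dispersion, CEP ~ 0.5887 * (sigma_x + sigma_y).

def cep_from_rep_dep(rep, dep):
    sigma_range = rep / 0.6745
    sigma_defl = dep / 0.6745
    return 0.5887 * (sigma_range + sigma_defl)

print(round(cep_from_rep_dep(10.0, 10.0), 2))  # meters, illustrative inputs
```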

  8. Smart Grid - Transforming Power System Operations

    SciTech Connect

    Widergren, Steven E.; Kirkham, Harold

    2010-04-28

    Abstract—Electric power systems are entering a new realm of operations. Large amounts of variable generation tax our ability to reliably operate the system. Couple this with a greater reliance on the electricity network to serve consumer demand that is likely to rise significantly even as we drive for greater efficiency. Trade-offs between energy and environmental needs will be constantly negotiated, while a reliable supply of electricity needs even greater assurance in a world where threats of disruption have risen. Smart grid capabilities are being proposed to help address the challenges confronting system operations. This paper reviews the impact of smart grid functionality on transforming power system operations. It explores models for distributed energy resources (DER – generation, storage, and load) that are appearing on the system. It reviews the evolving nature of electricity markets to deal with this complexity and a change of emphasis on signals from these markets to affect power system control. Smart grid capabilities will also impact reliable operations, while cyber security issues must be addressed as a culture change that influences all system design, implementation, and maintenance. Lastly, the paper explores significant questions for further research and the need for a simulation environment that supports such investigation and informs deployments to mitigate operational issues as they arise.

  9. Nuclear Materials Identification System Operational Manual

    SciTech Connect

    Chiang, L.G.

    2001-04-10

    This report describes the operation and setup of the Nuclear Materials Identification System (NMIS) with a {sup 252}Cf neutron source at the Oak Ridge Y-12 Plant. The components of the system are described with a description of the setup of the system along with an overview of the NMIS measurements for scanning, calibration, and confirmation of inventory items.

  10. Computer simulation of breathing systems for divers

    SciTech Connect

    Sexton, P.G.; Nuckols, M.L.

    1983-02-01

    A powerful new tool for the analysis and design of underwater breathing gas systems is being developed. A versatile computer simulator is described which makes possible the modular "construction" of any conceivable breathing gas system from computer memory-resident components. The analysis of a typical breathing gas system is demonstrated using this simulation technique, and the effects of system modifications on performance of the breathing system are shown. This modeling technique will ultimately serve as the foundation for a proposed breathing system simulator under development by the Navy. The marriage of this computer modeling technique with an interactive graphics system will provide the designer with an efficient, cost-effective tool for the development of new and improved diving systems.

  11. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
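The patented idea (a fixed benchmarking interval, a scalable task set, progress as the rating) can be sketched as follows; the unit of work and the interval are placeholders of ours, not the patent's.

```python
import time

# Sketch of the patent's idea: give each machine a fixed wall-clock
# interval, let it work through a scalable task set, and rate it by
# how far it progressed (the unit of work here is a trivial placeholder).

def benchmark(interval_s=0.05):
    done = 0
    deadline = time.perf_counter() + interval_s
    while time.perf_counter() < deadline:
        sum(i * i for i in range(1000))  # one unit of scalable work
        done += 1
    return done  # higher = more progress in the fixed interval

rating = benchmark()
print(rating)
```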

  12. PCOS - An operating system for modular applications

    NASA Technical Reports Server (NTRS)

    Tharp, V. P.

    1986-01-01

    This paper is an introduction to the PCOS operating system for the MC68000 family of processors. Topics covered are: development history; development support; rationale for the development of PCOS and its salient characteristics; architecture; and a brief comparison of PCOS to UNIX.

  13. Operational reliability of standby safety systems

    SciTech Connect

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.
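The flavor of such an operational-unreliability estimate can be sketched with a simple failures-per-demands point estimate and a normal-approximation confidence interval; the counts below are invented, and the INEL study's statistical methods were more careful than this.

```python
import math

# Hedged sketch of the study's statistical flavor: estimate a
# failure-on-demand probability from counts and attach a simple
# normal-approximation confidence interval (invented numbers).

def unreliability(failures, demands, z=1.96):
    p = failures / demands
    half = z * math.sqrt(p * (1 - p) / demands)
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = unreliability(4, 200)
print(round(p, 3), round(lo, 3), round(hi, 3))
```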

  14. Parametric Identification of Systems Via Linear Operators.

    DTIC Science & Technology

    1978-09-01

    A general parametric identification/approximation model is developed for the black box identification of linear time invariant systems in terms of...parametric identification techniques derive from the general model as special cases associated with a particular linear operator. Some possible

  15. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  16. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
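Importance sampling, which the review introduces as a way to accelerate Monte Carlo simulation of rare failures, can be sketched on a toy problem: estimating the small tail probability P(X > 4) of a standard normal by sampling from a distribution shifted to the threshold and reweighting.

```python
import math, random

# Importance sampling: estimate the rare tail probability P(X > 4) for a
# standard normal by sampling from N(4, 1) and weighting each sample by
# the likelihood ratio N(0,1)/N(4,1).

def tail_prob_is(threshold=4.0, n=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)          # proposal centered at the threshold
        if y > threshold:
            # likelihood ratio exp(-threshold*y + threshold^2/2)
            total += math.exp(-threshold * y + threshold**2 / 2)
    return total / n

est = tail_prob_is()
print(est)  # close to 1 - Phi(4), about 3.17e-5
```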

  17. Valve operating system for an automotive engine

    SciTech Connect

    Maeda, S.

    1988-03-15

    In a valve operating system for an automotive engine having two or more intake valves for one cylinder, a camshaft having cams for operating the intake valves, the system is described comprising: one of the cams being rotatably and axially slidably mounted on the camshaft; clutch means for engaging the slidable cam with the camshaft at a predetermined angular position; shifting means for axially shifting the slidable cam to engage the cam with the camshaft by the clutch means.

  18. Determining the optimal operator allocation in SME's food manufacturing company using computer simulation and data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Rahman, Asmahanim Ab

    2014-09-01

    In a labor-intensive manufacturing system, optimal operator allocation is one of the most important decisions in determining the efficiency of the system. In this paper, ten operator allocation alternatives are identified using the ARENA computer simulation package. Two inputs (average wait time and average cycle time) and two outputs (average operator utilization and total packet values) are generated for each alternative. Four Data Envelopment Analysis (DEA) models (CCR, BCC, MCDEA, and AHP/DEA) are used to determine the optimal operator allocation at one of the SME food manufacturing companies in Selangor. The results of all four DEA models showed that the optimal operator allocation is six operators at the peeling process, three operators at the washing and slicing process, three operators at the frying process, and two operators at the packaging process.

  19. Computer Programs for Library Operations; Results of a Survey Conducted Between Fall 1971 and Spring 1972.

    ERIC Educational Resources Information Center

    Liberman, Eva; And Others

    Many library operations involving large data banks lend themselves readily to computer operation. In setting up library computer programs, in changing or expanding programs, cost in programming and time delays could be substantially reduced if the programmers had access to library computer programs being used by other libraries, providing similar…

  20. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer offers excellent universality and expansibility and has volume and weight advantages for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the images and data. Peripheral devices can be connected through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiments show that the system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  1. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.

  2. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A Central Control Element (CCE) module which controls the Automatically Reconfigurable Modular System (ARMS) and allows both redundant processing and multi-computing in the same computer with real time mode switching, is discussed. The same hardware is used for either reliability enhancement, speed enhancement, or for a combination of both.

  3. Uninterruptible Power Systems: Operational and Cost Considerations.

    DTIC Science & Technology

    1977-03-01

    earn a profit. It is therefore inappropriate to express costs of computer disruption in terms of business lost, spoilage of in-process materials, or...the direct cost-assignment approach is inappropriate since the Department of Defense is not in business to earn a profit. Disruptions of service and...York Stock Exchange, insurance companies, petroleum chemical plants, rubber and plastics industries, whose operations (and profits) depend upon

  4. Performing four basic arithmetic operations with spiking neural P systems.

    PubMed

    Zeng, Xiangxiang; Song, Tao; Zhang, Xingyi; Pan, Linqiang

    2012-12-01

    Recently, Gutiérrez-Naranjo and Leporati considered performing basic arithmetic operations on a new class of bio-inspired computing devices: spiking neural P systems (for short, SN P systems). However, the binary encoding mechanism used in their research resembles the encoding approach of electronic circuits rather than the style of spiking neurons (in usual SN P systems, information is encoded as the time interval between spikes). In this work, four SN P systems are constructed as adder, subtracter, multiplier, and divider, respectively. In these systems, a number is input to the system as the interval of time elapsed between two spikes received by the input neuron, and the result of a computation is the time between the moments when the output neuron spikes.
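The interval encoding described above can be mimicked in a plain simulation (this is not a formal SN P system, just the encoding idea): a number enters as the time between two input spikes, and the adder's output neuron emits two spikes separated by the sum.

```python
# Toy spike-interval encoding in the spirit of SN P systems (a plain
# simulation, not a formal SN P system): numbers enter as the time
# between two input spikes and the sum leaves as the time between
# two output spikes.

def encode(n, start=0):
    return (start, start + n)          # two spike times, interval = n

def add(spikes_a, spikes_b):
    n1 = spikes_a[1] - spikes_a[0]
    n2 = spikes_b[1] - spikes_b[0]
    return encode(n1 + n2)             # output neuron spikes n1+n2 apart

out = add(encode(3), encode(5))
print(out[1] - out[0])  # 8
```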

  5. Computer Programmed Milling Machine Operations. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Leonard, Dennis

    This learning module for a high school metals and manufacturing course is designed to introduce the concept of computer-assisted machining (CAM). Through it, students learn how to set up and put data into the controller to machine a part. They also become familiar with computer-aided manufacturing and learn the advantages of computer numerical…

  6. Some experiences with BEPCII SRF system operation

    NASA Astrophysics Data System (ADS)

    Huang, Tong-ming; Lin, Hai-ying; Sun, Yi; Dai, Jian-ping; Wang, Guang-wei; Pan, Wei-min; Li, Zhong-quan; Ma, Qiang; Wang, Qun-yao; Zhao, Guang-yuan; Mi, Zheng-hui; Sha, Peng

    2016-06-01

    The Superconducting Radio Frequency (SRF) system of the upgrade project of the Beijing Electron Positron Collider (BEPCII) has been in operation for almost 8 years. During operation, many problems have been encountered, such as excessive heating of the power couplers, frequent beam trips during high-intensity colliding, and false arc interlock triggers. Some of them have been solved successfully; others have been alleviated. This paper describes some experiences with BEPCII SRF system operation, including the symptoms, causes, and solutions of these problems.

  7. Data systems and computer science programs: Overview

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  8. Method and Apparatus Providing Deception and/or Altered Operation in an Information System Operating System

    DOEpatents

    Cohen, Fred; Rogers, Deanna T.; Neagoe, Vicentiu

    2008-10-14

    A method and/or system and/or apparatus providing deception and/or execution alteration in an information system. In specific embodiments, deceptions and/or protections are provided by intercepting and/or modifying operation of one or more system calls of an operating system.

  9. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  10. Interactive orbital proximity operations planning system instruction and training guide

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1994-01-01

    This guide instructs users in the operation of a Proximity Operations Planning System. This system uses an interactive graphical method for planning fuel-efficient rendezvous trajectories in the multi-spacecraft environment of the space station and allows the operator to compose a multi-burn transfer trajectory between the initial chaser and target trajectories. The available task time (window) of the mission is predetermined and the maneuver is subject to various operational constraints, such as departure, arrival, spatial, plume impingement, and en route passage constraints. The maneuvers are described in terms of the relative motion experienced in a space-station-centered coordinate system. Both in-orbital-plane and out-of-orbital-plane maneuvering are considered. A number of visual optimization aids are used to assist the operator in reaching fuel-efficient solutions. These optimization aids are based on Primer Vector theory. The visual feedback of trajectory shapes, operational constraints, and optimization functions, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool is an example of operator-assisted optimization of nonlinear cost functions.
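The station-centered relative motion mentioned above is commonly modeled with the Clohessy-Wiltshire (Hill) equations; below is a sketch of their in-plane closed-form solution (our illustration, not the planning tool's code), with x the radial offset, y the along-track offset, and n the orbital rate.

```python
import math

# Clohessy-Wiltshire (Hill) closed-form in-plane relative motion in a
# station-centered frame: x = radial offset, y = along-track offset,
# n = orbital rate of the station. Sketch only, not the tool's code.

def cw_state(x0, y0, vx0, vy0, n, t):
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t)) * x0 + y0 + (2 / n) * (c - 1) * vx0 \
        + (1 / n) * (4 * s - 3 * n * t) * vy0
    return x, y

n = 0.0011  # rad/s, roughly the orbital rate of a low-Earth-orbit station
x, y = cw_state(100.0, 0.0, 0.0, 0.0, n, 0.0)
print(x, y)  # at t = 0 the state is just the initial offsets
```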

  11. Computer-Assisted Instruction Authoring Systems

    ERIC Educational Resources Information Center

    Dean, Peter M.

    1978-01-01

    Authoring systems are defined as tools used by an educator to translate intents and purposes from his head into a computer program. Alternate ways of preparing code are examined and charts of these coding formats are displayed. (Author/RAO)

  12. Computed Tomography of the Musculoskeletal System.

    PubMed

    Ballegeer, Elizabeth A

    2016-05-01

    Computed tomography (CT) has specific uses in veterinary species' appendicular musculoskeletal system. Parameters for acquisition of images, interpretation limitations, as well as published information regarding its use in small animals is reviewed.

  13. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  14. Automated Operations Development for Advanced Exploration Systems

    NASA Technical Reports Server (NTRS)

    Haddock, Angie T.; Stetson, Howard

    2012-01-01

    Automated space operations command and control software development and its implementation must be an integral part of the vehicle design effort. The software design must encompass autonomous fault detection, isolation, recovery capabilities and also provide "single button" intelligent functions for the crew. Development, operations and safety approval experience with the Timeliner system onboard the International Space Station (ISS), which provided autonomous monitoring with response and single command functionality of payload systems, can be built upon for future automated operations as the ISS Payload effort was the first and only autonomous command and control system to be in continuous execution (6 years), 24 hours a day, 7 days a week within a crewed spacecraft environment. Utilizing proven capabilities from the ISS Higher Active Logic (HAL) System, along with the execution component design from within the HAL 9000 Space Operating System, this design paper will detail the initial HAL System software architecture and interfaces as applied to NASA's Habitat Demonstration Unit (HDU) in support of the Advanced Exploration Systems, Autonomous Mission Operations project. The development and implementation of integrated simulators within this development effort will also be detailed and is the first step in verifying the effectiveness of the HAL 9000 Integrated Test-Bed Component [2] designs. This design paper will conclude with a summary of the current development status and future development goals as it pertains to automated command and control for the HDU.

  15. Computer Systems and Services in Hospitals—1979

    PubMed Central

    Veazie, Stephen M.

    1979-01-01

    Starting at the end of 1978 and continuing through the first six months of 1979, the American Hospital Association (AHA) collected information on computer systems and services used in/by hospitals. The information has been compiled into the most comprehensive data base of hospital computer systems and services in existence today. Summaries of the findings of this project will be presented in this paper.

  16. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  17. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
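The "cell"-based gaze metrics the abstract describes can be made concrete with a small sketch: the screen is divided into a grid of cells, each gaze sample is binned into a cell, and per-cell fixation counts plus a dispersion (entropy) feature are produced for a classifier. The grid size, screen dimensions, and entropy feature below are illustrative assumptions, not the metrics from the paper.

```python
import math

def gaze_cell_features(gaze_points, screen_w, screen_h, rows, cols):
    """Bin (x, y) gaze samples into a rows x cols grid of screen cells."""
    counts = [[0] * cols for _ in range(rows)]
    for x, y in gaze_points:
        c = min(int(x / screen_w * cols), cols - 1)
        r = min(int(y / screen_h * rows), rows - 1)
        counts[r][c] += 1
    total = len(gaze_points)
    # Shannon entropy of the cell-visit distribution: higher values mean
    # gaze spread over many cells (one candidate workload index).
    entropy = 0.0
    for row in counts:
        for n in row:
            if n:
                p = n / total
                entropy -= p * math.log2(p)
    return counts, entropy
```

A classifier would then be trained on such per-cell features across the three mission difficulty levels.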

  18. Operating health analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Fotuhi-Firuzabad, Mahmud

    The required level of operating reserve to be maintained by an electric power system can be determined using both deterministic and probabilistic techniques. Despite the obvious disadvantages of deterministic approaches, there is still considerable reluctance to apply probabilistic techniques due to the difficulty of interpreting a single numerical risk index and the lack of sufficient information provided by a single index. A practical way to overcome these difficulties is to embed deterministic considerations in the probabilistic indices in order to monitor the system well-being. The system well-being can be designated as healthy, marginal, or at risk. The concept of system well-being is examined and extended in this thesis to cover the overall area of operating reserve assessment. Operating reserve evaluation involves the two distinctly different aspects of unit commitment and the dispatch of the committed units. Unit commitment health analysis involves the determination of which units should be committed to satisfy the operating criteria. The concepts developed for unit commitment health, margin, and risk are extended in this thesis to evaluate the response well-being of a generating system. A procedure is presented to determine the optimum dispatch of the committed units to satisfy the response criteria. The impact on the response well-being of variations in the margin time, required regulating margin, and load forecast uncertainty is illustrated. The effects on the response well-being of rapid start units, interruptible loads, and postponable outages are also illustrated. System well-being is, in general, greatly improved by interconnection with other power systems. The well-being concepts are extended to evaluate the spinning reserve requirements in interconnected systems. The interconnected system unit commitment problem is decomposed into two subproblems in which unit scheduling is performed in each isolated system followed by interconnected system evaluation.
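The healthy/marginal/at-risk designation embeds a deterministic criterion inside a probabilistic framework. A minimal sketch of the classification step, using the common deterministic criterion of withstanding the loss of the largest committed unit (the criterion and thresholds are the usual textbook ones, assumed here rather than taken from the thesis):

```python
def well_being(committed_units, load):
    """Classify the operating state as 'healthy', 'marginal', or 'at risk'."""
    capacity = sum(committed_units)
    if capacity < load:
        return "at risk"            # load cannot be met
    largest = max(committed_units)
    if capacity - largest >= load:
        return "healthy"            # survives loss of the largest unit
    return "marginal"               # operating, but criterion not met
```

In the full well-being framework these states are weighted by the probability of each generation configuration, yielding probabilistic health, margin, and risk indices rather than a single risk number.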

  19. Computer analyses for the design, operation and safety of new isotope production reactors: A technology status review

    SciTech Connect

    Wulff, W.

    1990-01-01

    A review is presented on the currently available technologies for nuclear reactor analyses by computer. The important distinction is made between traditional computer calculation and advanced computer simulation. Simulation needs are defined to support the design, operation, maintenance and safety of isotope production reactors. Existing methods of computer analyses are categorized in accordance with the type of computer involved in their execution: micro, mini, mainframe and supercomputers. Both general and special-purpose computers are discussed. Major computer codes are described, with regard to their use in analyzing isotope production reactors. It has been determined in this review that conventional systems codes (TRAC, RELAP5, RETRAN, etc.) cannot meet four essential conditions for viable reactor simulation: simulation fidelity, on-line interactive operation with convenient graphics, high simulation speed, and low cost. These conditions can be met by special-purpose computers (such as the AD100 of ADI), which are specifically designed for high-speed simulation of complex systems. The greatest shortcoming of existing systems codes (TRAC, RELAP5) is their mismatch between very high computational efforts and low simulation fidelity. The drift flux formulation (HIPA) is the viable alternative to the complicated two-fluid model. No existing computer code has the capability of accommodating all important processes in the core geometry of isotope production reactors. Experiments are needed (heat transfer measurements) to provide necessary correlations. It is important for the nuclear community in government, industry, and universities to begin to take advantage of modern simulation technologies and equipment. 41 refs.

  20. Recursive dynamics for flexible multibody systems using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1990-01-01

    Due to their structural flexibility, spacecraft and space manipulators are multibody systems with complex dynamics and possess a large number of degrees of freedom. Here the spatial operator algebra methodology is used to develop a new dynamics formulation and spatially recursive algorithms for such flexible multibody systems. A key feature of the formulation is that the operator description of the flexible system dynamics is identical in form to the corresponding operator description of the dynamics of rigid multibody systems. A significant advantage of this unifying approach is that it allows ideas and techniques for rigid multibody systems to be easily applied to flexible multibody systems. The algorithms use standard finite-element and assumed modes models for the individual body deformation. A Newton-Euler Operator Factorization of the mass matrix of the multibody system is first developed. It forms the basis for recursive algorithms such as for the inverse dynamics, the computation of the mass matrix, and the composite body forward dynamics for the system. Subsequently, an alternative Innovations Operator Factorization of the mass matrix, each of whose factors is invertible, is developed. It leads to an operator expression for the inverse of the mass matrix, and forms the basis for the recursive articulated body forward dynamics algorithm for the flexible multibody system. For simplicity, most of the development here focuses on serial chain multibody systems. However, extensions of the algorithms to general topology flexible multibody systems are described. While the computational cost of the algorithms depends on factors such as the topology and the amount of flexibility in the multibody system, in general, it appears that in contrast to the rigid multibody case, the articulated body forward dynamics algorithm is the more efficient algorithm for flexible multibody systems containing even a small number of flexible bodies. 
The variety of algorithms described
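The two mass-matrix factorizations named in this abstract have compact forms in the Rodriguez-Jain spatial operator algebra. The expressions below are the standard rigid-body versions from that literature, given only to orient the notation (the flexible-body factors in the paper carry additional modal deformation terms), not reproduced from this record:

```latex
% Newton--Euler operator factorization of the multibody mass matrix
\mathcal{M} = H \,\phi\, M \,\phi^{*} H^{*}
% Innovations operator factorization, each factor invertible
\mathcal{M} = \left( I + H\phi K \right) D \left( I + H\phi K \right)^{*}
% which yields an operator expression for the inverse of the mass matrix
\mathcal{M}^{-1} = \left( I - H\psi K \right)^{*} D^{-1} \left( I - H\psi K \right)
```

Here $H$ projects spatial quantities onto the joint axes, $\phi$ and $\psi$ are spatial propagation operators along the chain, $M$ is the block-diagonal spatial inertia, and $K$ and $D$ arise from the articulated-body recursion; the invertibility of each innovations factor is what enables the recursive articulated-body forward dynamics algorithm.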

  1. Selected computer system controls at the Energy Information Administration

    SciTech Connect

    Not Available

    1991-09-01

    The purpose of our review of the Energy Information Administration's (EIA) computer system was to evaluate disk and tape information storage and the adequacy of internal controls in the operating system programs. We used a set of computer-assisted audit techniques called CAATS, developed by the US Department of Transportation, Office of Inspector General, in performing the review at the EIA Forrestal Computer Facility. Improved procedures are needed to assure more efficient use of disk space. By transferring data sets from disk to tape, deleting invalid data, releasing unused reserve space and blocking data efficiently, disk space with an estimated value of $1.1 million a year could be recovered for current use. Also, procedures governing the maximum times for storage of information on tapes should be enforced to help ensure that data is not lost. In addition, improved internal controls are needed over granting users system-wide privileges and over authorized program library names to prevent unauthorized access to the system and possible destruction or manipulation of data. Automated Data Processing (ADP) Services Staff officials indicated that software maintenance was not current, due to contractual difficulties with the operating contractor for the Forrestal Facility. Our review confirmed that improvements were needed to help prevent malfunctions of the operating system, which could cause performance degradations, system failures, or loss of either system or user data. Management generally concurred with the recommendations in the report.

  2. Computer-Aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1996-05-03

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off-the-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provides back-up capabilities for the Plutonium Processing Facility.

  3. Interactive orbital proximity operations planning system

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1989-01-01

    An interactive, graphical proximity operations planning system was developed which allows on-site design of efficient, complex, multiburn maneuvers in the dynamic multispacecraft environment about the space station. Maneuvering takes place in, as well as out of, the orbital plane. The difficulty in planning such missions results from the unusual and counterintuitive character of relative orbital motion trajectories and complex operational constraints, which are both time varying and highly dependent on the mission scenario. This difficulty is greatly overcome by visualizing the relative trajectories and the relative constraints in an easily interpretable, graphical format, which provides the operator with immediate feedback on design actions. The display shows a perspective bird's-eye view of the space station and co-orbiting spacecraft on the background of the station's orbital plane. The operator has control over two modes of operation: (1) a viewing system mode, which enables him or her to explore the spatial situation about the space station and thus choose and frame in on areas of interest; and (2) a trajectory design mode, which allows the interactive editing of a series of way-points and maneuvering burns to obtain a trajectory which complies with all operational constraints. Through a graphical interactive process, the operator will continue to modify the trajectory design until all operational constraints are met. The effectiveness of this display format in complex trajectory design is presently being evaluated in an ongoing experimental program.

  4. The Initial Development of a Computerized Operator Support System

    SciTech Connect

    Roger Lew; Ronald L Boring; Thomas A Ulrich; Ken Thomas

    2014-08-01

    A computerized operator support system (COSS) is a collection of resilient software technologies to assist operators in monitoring overall nuclear power plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. A prototype COSS for a chemical volume control system at a nuclear power plant has been developed in order to demonstrate the concept and provide a test bed for further research. The development process identified four underlying elements necessary for the prototype, which consist of a digital alarm system, computer-based procedures, piping and instrumentation diagram system representations, and a recommender module for mitigation actions. An operational prototype resides at the Idaho National Laboratory (INL) using the U.S. Department of Energy’s (DOE) Light Water Reactor Sustainability (LWRS) Human Systems Simulation Laboratory (HSSL). Several human-machine interface (HMI) considerations are identified and incorporated in the prototype during this initial round of development.

  5. OCCUPATIONS IN ELECTRONIC COMPUTING SYSTEMS.

    ERIC Educational Resources Information Center

    Bureau of Employment Security (DOL), Washington, DC.

    OCCUPATIONAL INFORMATION FOR USE IN THE PLACEMENT AND COUNSELING SERVICES OF THE AFFILIATED STATE EMPLOYMENT SERVICES IS PRESENTED IN THIS BROCHURE, ESSENTIALLY AN UPDATING OF "OCCUPATIONS IN ELECTRONIC DATA-PROCESSING SYSTEMS," PUBLISHED IN 1959. JOB ANALYSES PROVIDED THE PRIMARY SOURCE OF DATA, BUT ADDITIONAL INFORMATION AND DATA WERE OBTAINED…

  6. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    SciTech Connect

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC) under construction at Brookhaven National Laboratory requires an extensive Access Control System to protect personnel from radiation, oxygen deficiency, and electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnection technology for safety-critical applications, while preserving and enhancing tried-and-proven protection methods. In addition, a set of guidelines regarding required performance for accelerator safety systems and a handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  7. Current and Future Flight Operating Systems

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan

    2007-01-01

    This viewgraph presentation reviews the current real time operating system (RTOS) type in use with current flight systems. A new RTOS model is described, i.e. the process model. Included is a review of the challenges of migrating from the classic RTOS to the Process Model type.

  8. The Launch Systems Operations Cost Model

    NASA Technical Reports Server (NTRS)

    Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)

    2001-01-01

    One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT) as expressed by a set of response surface model equations gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor the LSOCM team is part of an Agency wide effort to combine resources with other cost and operations professionals to

  9. A Look at Computer-Assisted Testing Operations. The Illinois Series on Educational Application of Computers, No. 12e.

    ERIC Educational Resources Information Center

    Muiznieks, Viktors; Dennis, J. Richard

    In computer assisted test construction (CATC) systems, the computer is used to perform the mechanical aspects of testing while the teacher retains control over question content. Advantages of CATC systems include question banks, decreased importance of test item security, computer analysis and response to student test answers, item analysis…

  10. Advances in coiled-tubing operating systems

    SciTech Connect

    Sas-Jaworsky, A. II

    1997-06-01

    The expansion of coiled tubing (CT) applications into spooled flowlines, spooled completions, and CT drilling continues to grow at an accelerated rate. For many users within the oil and gas industry, the CT industry appears to be poised on the threshold of the next logical step in its evolution, the creation of a fully integrated operating system. However, for CT to evolve into such an operating system, the associated services must be robust and sufficiently reliable to support the needs of exploration, development drilling, completion, production management, and wellbore-retirement operations both technically and economically. The most critical hurdle to overcome in creating a CT-based operating system is a fundamental understanding of the operating scope and physical limitations of CT technology. The complete list of mechanisms required to advance CT into an operating system is large and complex. However, a few key issues (such as formal education, training, standardization, and increased levels of experience) can accelerate the transition. These factors are discussed.

  11. An operator's views on Fermilab's control system

    NASA Astrophysics Data System (ADS)

    Baddorf, Debra S.

    1986-06-01

    A Fermilab accelerator operator presents views and personal opinions on the control system there. The paper covers features contributing to ease of use and comprehension, as well as a few things that could be improved. Included are such hardware as the trackball and interrupt button, the touch sensitive TV screen, the color Lexidata display, and black and white and color hardcopy capabilities. It also covers the software such as the generic parameter page, the generic plot package, and prepared displays. The alarm system is discussed from an operations standpoint, and also the datalogging system.

  12. Smart Operations in Distributed Energy Resources System

    NASA Astrophysics Data System (ADS)

    Wei, Li; Jie, Shu; Zhang-XianYong; Qing, Zhou

    Smart grid capabilities are being proposed to help solve the challenges of system operations: trade-offs between energy and environmental needs will be constantly negotiated, while a reliable supply of electricity requires even greater assurance as threats of disruption rise. This paper mainly explores models for the distributed energy resources system (DG, storage, and load), and also reviews the evolving nature of electricity markets to deal with this complexity, with a change of emphasis on signals from these markets to affect power system control. Smart grid capabilities will also impact reliable operations, while cyber security issues must be solved through a culture change that influences all system design, implementation, and maintenance. Lastly, the paper explores significant questions for further research and the need for a simulation environment that supports such investigation and informs deployments to mitigate operational issues as they arise.

  13. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  14. Towards real-time decentralized operating systems for ballistic missile defense

    SciTech Connect

    Van Tilborg, A.M.

    1982-01-01

    To satisfy the data processing needs of future ballistic missile defense systems, the US Army's ballistic missile defense advanced technology center is sponsoring extensive research on the subject of parallel computers. Both loosely coupled and tightly coupled machines consisting of numerous microcomputer processing elements are being evaluated for use in endoatmospheric, exoatmospheric, and space-based BMD systems. For various reasons, it is important that these parallel computers operate under the control of decentralized operating systems. This paper reports on the current status of research to develop decentralized operating systems for parallel computers used in ballistic missile defense. 24 references.

  15. Optimizing System Compute and Bandwidth Density for Deployed HPEC Applications

    DTIC Science & Technology

    2007-11-02

    Optimizing System Compute and Bandwidth Density for Deployed HPEC Applications. Randy Banton (Director, Defense Electronics Engineering) and Richard Jaenicke, Mercury Computer Systems, Inc.

  16. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built up for comparison. Experimental results demonstrate that the proposed system can achieve the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
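The template-matching step behind such a displacement measurement can be sketched in a few lines: slide a small template over an image, pick the offset with the lowest sum-of-squared differences, and difference the matched positions between two frames. This is only an illustration of the matching idea; the paper's actual algorithm, optics, and sub-micron calibration are not reproduced, and all names below are illustrative.

```python
def match_template(image, template):
    """Return (row, col) of the best template position by SSD."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def displacement(frame_t0, frame_t1, template):
    """Pixel displacement of the template between two frames."""
    r0, c0 = match_template(frame_t0, template)
    r1, c1 = match_template(frame_t1, template)
    return r1 - r0, c1 - c0
```

A pixel-to-micron calibration factor would then convert the integer (or, with sub-pixel interpolation, fractional) offset into a physical displacement.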

  17. Interactive orbital proximity operations planning system

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1988-01-01

    An interactive graphical proximity operations planning system was developed, which allows on-site design of efficient, complex, multiburn maneuvers in a dynamic multispacecraft environment. Maneuvering takes place in and out of the orbital plane. The difficulty in planning such missions results from the unusual and counterintuitive character of orbital dynamics and complex time-varying operational constraints. This difficulty is greatly overcome by visualizing the relative trajectories and the relevant constraints in an easily interpretable graphical format, which provides the operator with immediate feedback on design actions. The display shows a perspective bird's-eye view of a Space Station and co-orbiting spacecraft on the background of the Station's orbital plane. The operator has control over the two modes of operation: a viewing system mode, which enables the exploration of the spatial situation about the Space Station and thus the ability to choose and zoom in on areas of interest; and a trajectory design mode, which allows the interactive editing of a series of way points and maneuvering burns to obtain a trajectory that complies with all operational constraints. A first version of this display was completed. An experimental program is planned in which operators will carry out a series of design missions which vary in complexity and constraints.

  18. Space transportation system biomedical operations support study

    NASA Technical Reports Server (NTRS)

    White, S. C.

    1983-01-01

    The shift of the Space Transportation System (STS) from flight tests of the orbiter vehicle to the preparation and flight of payloads is discussed. Part of this change is the transition of the medical and life sciences aspects of the STS flight operations to reflect the new state. The medical operations, the life sciences flight experiments support requirements, and the intramural research program expected to be at KSC during the operational flight period of the STS and a future space station are analyzed. The adequacy of available facilities, plans, and resources is compared against these future needs, and revisions and/or alternatives are proposed where appropriate.

  19. Differential Characteristics and Methods of Operation Underlying CAI/CMI Drill and Practice Systems.

    ERIC Educational Resources Information Center

    Hativa, Nira

    1988-01-01

    Describes computer systems that combine drill and practice instruction with computer-managed instruction (CMI) and identifies system characteristics in four categories: (1) hardware, (2) software, (3) management systems, and (4) methods of daily operation. Topics discussed include microcomputer networks, graphics, feedback, degree of learner…

  20. A development environment for operational concepts and systems engineering analysis.

    SciTech Connect

    Raybourn, Elaine Marie; Senglaub, Michael E.

    2004-03-01

    The work reported in this document involves a development effort to provide combat commanders and systems engineers with a capability to explore and optimize system concepts that include operational concepts as part of the design effort. An infrastructure and analytic framework has been designed and partially developed that meets a gap in systems engineering design for combat related complex systems. The system consists of three major components: The first component consists of a design environment that permits the combat commander to perform 'what-if' types of analyses in which parts of a course of action (COA) can be automated by generic system constructs. The second component consists of suites of optimization tools designed to integrate into the analytical architecture to explore the massive design space of an integrated design and operational space. These optimization tools have been selected for their utility in requirements development and operational concept development. The third component involves the design of a modeling paradigm for the complex system that takes advantage of functional definitions and the coupled state space representations, generic measures of effectiveness and performance, and a number of modeling constructs to maximize the efficiency of computer simulations. The system architecture has been developed to allow for a future extension in which the operational concept development aspects can be performed in a co-evolutionary process to ensure the most robust designs may be gleaned from the design space(s).

  1. Modelling of the District Heating System's Operation

    NASA Astrophysics Data System (ADS)

    Vīgants, Ģirts; Blumberga, Dagnija

    2011-01-01

    Developing a calculation model for district heating systems improves the energy efficiency of a district heating system, making it possible to reduce heat losses and thus positively affecting the tariffs on thermal energy. In this paper, a universal approach is considered, based on which the optimal flow and temperature conditions in a district heating network can be calculated. Optimality is determined by the least operational costs. The developed calculation model has been tested on the Ludza district heating system, based on the technical parameters of that system.
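The flow/temperature trade-off such a model optimizes can be illustrated with a toy least-cost search: a required heat load Q = c·m·ΔT can be delivered either at high supply temperature (raising network heat losses) or at high mass flow (raising pumping cost). The cost coefficients and grid search below are illustrative assumptions, not the Ludza model itself.

```python
C_WATER = 4.19  # kJ/(kg*K), specific heat of water

def operating_cost(flow, supply_temp, loss_coeff=0.5, pump_coeff=0.01):
    """Illustrative hourly cost: losses grow with supply temperature
    (relative to ~10 C ground), pumping power grows with flow cubed."""
    heat_loss = loss_coeff * (supply_temp - 10.0)
    pumping = pump_coeff * flow ** 3
    return heat_loss + pumping

def best_operating_point(q_required, return_temp=45.0):
    """Grid-search supply temperature; flow follows from Q = c*m*dT."""
    best = None
    for supply_temp in range(60, 121, 5):
        flow = q_required / (C_WATER * (supply_temp - return_temp))
        cost = operating_cost(flow, supply_temp)
        if best is None or cost < best[0]:
            best = (cost, supply_temp, flow)
    return best
```

With these coefficients the minimum falls at an interior supply temperature, reflecting the balance between loss-driven and pumping-driven cost.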

  2. Some Steps towards Intelligent Computer Tutoring Systems.

    ERIC Educational Resources Information Center

    Tchogovadze, Gotcha G.

    1986-01-01

    Describes one way of structuring an intelligent tutoring system (ITS) in light of developments in artificial intelligence. A specialized intelligent operating system (SIOS) is proposed for software for a network of microcomputers, and it is postulated that a general learning system must be used as a basic framework for the SIOS. (Author/LRW)

  3. Dynamic self-assembly in living systems as computation.

    SciTech Connect

    Bouchard, Ann Marie; Osbourn, Gordon Cecil

    2004-06-01

    Biochemical reactions taking place in living systems that map different inputs to specific outputs are intuitively recognized as performing information processing. Conventional wisdom distinguishes such proteins, whose primary function is to transfer and process information, from proteins that perform the vast majority of the construction, maintenance, and actuation tasks of the cell (assembling and disassembling macromolecular structures, producing movement, and synthesizing and degrading molecules). In this paper, we examine the computing capabilities of biological processes in the context of the formal model of computing known as the random access machine (RAM) [Dewdney AK (1993) The New Turing Omnibus. Computer Science Press, New York], which is equivalent to a Turing machine [Minsky ML (1967) Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ]. When viewed from the RAM perspective, we observe that many of these dynamic self-assembly processes - synthesis, degradation, assembly, movement - do carry out computational operations. We also show that the same computing model is applicable at other hierarchical levels of biological systems (e.g., cellular or organism networks as well as molecular networks). We present stochastic simulations of idealized protein networks designed explicitly to carry out a numeric calculation. We explore the reliability of such computations and discuss error-correction strategies (algorithms) employed by living systems. Finally, we discuss some real examples of dynamic self-assembly processes that occur in living systems, and describe the RAM computer programs they implement. Thus, by viewing the processes of living systems from the RAM perspective, a far greater fraction of these processes can be understood as computing than has been previously recognized.
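A toy random-access-machine interpreter makes the abstract's computing model concrete. The instruction set below (LOAD, STORE, ADD, SUB, JGTZ, HALT) is a common textbook RAM repertoire and an assumption of this sketch, not the formulation used in the cited paper.

```python
def run_ram(program, memory):
    """Execute a RAM program; each instruction is an (op, operand) pair."""
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":      # acc <- memory[arg]
            acc = memory[arg]
        elif op == "STORE":   # memory[arg] <- acc
            memory[arg] = acc
        elif op == "ADD":     # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "SUB":     # acc <- acc - memory[arg]
            acc -= memory[arg]
        elif op == "JGTZ":    # jump to instruction arg if acc > 0
            if acc > 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return memory

# Example program: memory[2] = memory[0] * memory[1] by repeated
# addition, with memory[3] holding the constant 1.
MULTIPLY = [
    ("LOAD", 1), ("JGTZ", 3), ("HALT", 0),   # loop while counter > 0
    ("LOAD", 2), ("ADD", 0), ("STORE", 2),   # accumulator += memory[0]
    ("LOAD", 1), ("SUB", 3), ("STORE", 1),   # counter -= 1
    ("JGTZ", 3), ("HALT", 0),
]
```

The paper's point is that synthesis, degradation, assembly, and movement in protein networks can each play the role of such register operations.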

  4. Theoretical kinetic computations in complex reacting systems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1986-01-01

    NASA Lewis's studies of complex reacting systems at high temperature are discussed. The changes which occur are the result of many different chemical reactions occurring at the same time. Both an experimental and a theoretical approach are needed to fully understand what happens in these systems. The latter approach is discussed. The differential equations which describe the chemical and thermodynamic changes are given. Their solution by numerical techniques using a detailed chemical mechanism is described. Several different comparisons of computed results with experimental measurements are also given. These include the computation of (1) species concentration profiles in batch and flow reactions, (2) rocket performance in nozzle expansions, and (3) pressure versus time profiles in hydrocarbon ignition processes. The examples illustrate the use of detailed kinetic computations to elucidate a chemical mechanism and to compute practical quantities such as rocket performance, ignition delay times, and ignition lengths in flow processes.
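    The numerical approach described — integrating the coupled rate equations of a mechanism — can be sketched for a toy sequential mechanism A → B → C with classical fourth-order Runge-Kutta. The rate constants are invented, and real combustion mechanisms are stiff enough to require implicit solvers; this is only the shape of the computation:

    ```python
    # Integrate d[A]/dt = -k1[A], d[B]/dt = k1[A] - k2[B], d[C]/dt = k2[B]
    def rhs(c, k1, k2):
        a, b, _ = c
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    def integrate(c0, k1, k2, t_end, n_steps):
        """Classical RK4 march of the concentration vector to t_end."""
        h = t_end / n_steps
        c = list(c0)
        for _ in range(n_steps):
            s1 = rhs(c, k1, k2)
            s2 = rhs([ci + 0.5 * h * si for ci, si in zip(c, s1)], k1, k2)
            s3 = rhs([ci + 0.5 * h * si for ci, si in zip(c, s2)], k1, k2)
            s4 = rhs([ci + h * si for ci, si in zip(c, s3)], k1, k2)
            c = [ci + (h / 6) * (a + 2 * b + 2 * g + d)
                 for ci, a, b, g, d in zip(c, s1, s2, s3, s4)]
        return c

    # [A], [B], [C] after one time unit, k1 = 2, k2 = 1, starting from pure A;
    # [A] should track the analytic solution exp(-k1 * t).
    print(integrate([1.0, 0.0, 0.0], 2.0, 1.0, 1.0, 1000))
    ```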

  5. Constructing Stylish Characters on Computer Graphics Systems.

    ERIC Educational Resources Information Center

    Goldman, Gary S.

    1980-01-01

    Computer graphics systems typically produce a single, machine-like character font. At most, these systems enable the user to (1) alter the aspect ratio (height-to-width ratio) of the characters, (2) specify a transformation matrix to slant the characters, and (3) define a virtual pen table to change the lineweight of the plotted characters.…
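    Setting lineweight aside, the first two capabilities listed — aspect-ratio change and slant — amount to applying 2x2 matrices to character stroke endpoints. The matrix values and the stroke below are hypothetical, chosen only to illustrate the composition:

    ```python
    # Aspect-ratio scaling and slanting of character strokes as 2x2 matrices.
    def apply(m, p):
        x, y = p
        return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    scale = [[0.8, 0.0], [0.0, 1.2]]   # narrower, taller characters
    slant = [[1.0, 0.25], [0.0, 1.0]]  # shear: x' = x + 0.25*y (italic slant)

    m = matmul(slant, scale)           # scale first, then slant
    stroke = [(0.0, 0.0), (0.0, 1.0), (0.5, 1.0)]  # toy stroke endpoints
    print([apply(m, p) for p in stroke])
    ```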

  6. Honeywell Modular Automation System Computer Software Documentation

    SciTech Connect

    CUNNINGHAM, L.T.

    1999-09-27

    This document provides computer software documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and the vertical denitration calciner in HC-230C-2.

  7. A New Computer-Based Examination System.

    ERIC Educational Resources Information Center

    Los Arcos, J. M.; Vano, E.

    1978-01-01

    Describes a computer-managed instructional system used to formulate, print, and evaluate true-false questions for testing purposes. The design of the system and its application in medical and nuclear engineering courses in two Spanish institutions of higher learning are detailed. (RAO)

  8. Terrace Layout Using a Computer Assisted System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  9. [Controlling systems for operating room managers].

    PubMed

    Schüpfer, G; Bauer, M; Scherzinger, B; Schleppers, A

    2005-08-01

    Management means developing, shaping and controlling of complex, productive and social systems. Therefore, operating room managers also need to develop basic skills in financial and managerial accounting as a basis for operative and strategic controlling which is an essential part of their work. A good measurement system should include financial and strategic concepts for market position, innovation performance, productivity, attractiveness, liquidity/cash flow and profitability. Since hospitals need to implement a strategy to reach their business objectives, the performance measurement system has to be individually adapted to the strategy of the hospital. In this respect the navigation system developed by Gälweiler is compared to the "balanced score card" system of Kaplan and Norton.

  10. Operational development of small plant growth systems

    NASA Technical Reports Server (NTRS)

    Scheld, H. W.; Magnuson, J. W.; Sauer, R. L.

    1986-01-01

    The results of a study undertaken on the first phase of an empirical effort in the development of small plant growth chambers for production of salad-type vegetables on the space shuttle or space station are discussed. The overall effort is visualized as providing the underpinning of practical experience in handling of plant systems in space which will provide major support for future efforts in planning, design, and construction of plant-based (phytomechanical) systems for support of human habitation in space. The assumptions underlying the effort hold that large scale phytomechanical habitability support systems for future space stations must evolve from the simple to the complex. The highly complex final systems will be developed from the accumulated experience and data gathered from repetitive tests and trials of fragments or subsystems of the whole in an operational mode. These developing system components will, meanwhile, serve a useful operational function in providing psychological support and diversion for the crews.

  11. IBM PC/IX operating system evaluation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.

  12. Scientific computation systems quality branch manual

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A manual is presented which is designed to familiarize the GE 635 user with the configuration and operation of the overall system. Work submission, programming standards, restrictions, testing and debugging, and related general information are provided for the GE 635 programmer.

  13. Determination of eigenvalues of dynamical systems by symbolic computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1982-01-01

    A symbolic computation technique for determining the eigenvalues of dynamical systems is described wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC-8 aircraft to elevator inputs. This simplified system has two dominant modes, one of which is lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
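    The parameter-sweep idea can be illustrated numerically (the abstract's actual computation was symbolic, in MACSYMA): for a 2x2 system matrix the eigenvalues follow in closed form from the trace and determinant, so the loci can be traced as a gain varies. The system matrix below is a toy second-order model, not the DC-8 dynamics:

    ```python
    import cmath

    def eig2(a11, a12, a21, a22):
        # Closed-form eigenvalues of a 2x2 matrix from trace and determinant:
        # roots of lambda^2 - tr*lambda + det = 0.
        tr = a11 + a22
        det = a11 * a22 - a12 * a21
        disc = cmath.sqrt(tr * tr - 4 * det)
        return ((tr + disc) / 2, (tr - disc) / 2)

    # Sweep a gain k in x' = A(k) x with A(k) = [[0, 1], [-k, -0.5]];
    # increasing k moves the lightly damped pair along a locus.
    for k in (0.5, 1.0, 2.0):
        print(k, eig2(0.0, 1.0, -k, -0.5))
    ```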

  14. Computer surety: computer system inspection guidance. [Contains glossary

    SciTech Connect

    Not Available

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  15. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancy in hardware and flexible software, enabled optimization to proceed with minimal delay and decreased overall time to completion.
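    The parallelism exploited here is easy to see: each component of a finite-difference gradient needs one independent perturbed evaluation, so the evaluations can be farmed out concurrently. A minimal sketch, with a made-up objective standing in for the structural analysis:

    ```python
    # Forward-difference gradient with each perturbation evaluated in parallel,
    # mirroring the workstation-network arrangement described above.
    from concurrent.futures import ThreadPoolExecutor

    def fd_gradient(f, x, h=1e-6):
        f0 = f(x)
        def partial(i):
            xp = list(x)
            xp[i] += h
            return (f(xp) - f0) / h
        with ThreadPoolExecutor() as pool:
            return list(pool.map(partial, range(len(x))))

    def weight(x):
        # Toy objective (not the SRB joint model): analytic gradient [2x0, 6x1].
        return x[0] ** 2 + 3 * x[1] ** 2

    print(fd_gradient(weight, [1.0, 2.0]))  # ≈ [2.0, 12.0]
    ```

    With an expensive analysis code in place of `weight`, each `partial(i)` would run on its own workstation.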

  16. System balance analysis for vector computers

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.

    1975-01-01

    The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.

  17. Intelligent command and control systems for satellite ground operations

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1994-01-01

    The Georgia Tech portion of the Intelligent Control Center project includes several complementary activities. Two major activities entail thesis level research; the other activities are either support activities or preliminary explorations (e.g., task analyses) to support the research. The first research activity is the development of principles for the design of active interfaces to support monitoring during real-time supports. It is well known that as the operator's task becomes less active, i.e., more monitoring and less active control, there is concern that the operator will be less involved and less able to rapidly identify anomalous or failure situations. The research project to design active monitoring interfaces is an attempt to remediate this undesirable side-effect of increasingly automated control systems that still depend ultimately on operator supervision. The second research activity is the exploration of the use of case-based reasoning as a way to accumulate operator experience and make it available in computational form.

  18. Computer support for cooperative tasks in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Moore, Mike

    1994-01-01

    Traditionally, spacecraft management has been performed by fixed teams of operators in Mission Operations Centers. The team cooperatively: (1) ensures that payload(s) on spacecraft perform their work; and (2) maintains the health and safety of the spacecraft through commanding and monitoring the spacecraft's subsystems. In the future, the task demands will increase and overload the operators. This paper describes the traditional spacecraft management environment and describes a new concept in which groupware will be used to create a Virtual Mission Operations Center. Groupware tools will be used to better utilize available resources through increased automation and dynamic sharing of personnel among missions.

  19. Computer support for cooperative tasks in Mission Operations Centers

    SciTech Connect

    Fox, J.; Moore, M.

    1994-10-01

    Traditionally, spacecraft management has been performed by fixed teams of operators in Mission Operations Centers. The team cooperatively (1) ensures that payload(s) on spacecraft perform their work and (2) maintains the health and safety of the spacecraft through commanding and monitoring the spacecraft's subsystems. In the future, the task demands will increase and overload the operators. This paper describes the traditional spacecraft management environment and describes a new concept in which groupware will be used to create a Virtual Mission Operations Center. Groupware tools will be used to better utilize available resources through increased automation and dynamic sharing of personnel among missions.

  20. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to true or false. The single step operator of a logic program is defined as a function (Tp: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
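    The single step operator itself is simple to state directly: Tp maps an interpretation I to the set of clause heads whose bodies are satisfied by I, and iterating it from the empty interpretation reaches the fixed point the networks above are trained to approximate. A propositional sketch with a made-up three-clause program:

    ```python
    # Single-step operator Tp of a propositional logic program, iterated to
    # its least fixed point. Clauses are (head, body) pairs with body a set
    # of atoms; a fact has an empty body.
    def tp(clauses, interp):
        return {head for head, body in clauses if body <= interp}

    def least_fixed_point(clauses):
        interp = set()
        while True:
            nxt = tp(clauses, interp)
            if nxt == interp:
                return interp
            interp = nxt

    # p.   q :- p.   r :- p, q.
    program = [("p", set()), ("q", {"p"}), ("r", {"p", "q"})]
    print(sorted(least_fixed_point(program)))  # ['p', 'q', 'r']
    ```

    The paper replaces this exact iteration with a recurrent RBF network that converges to the same fixed point.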