Science.gov

Sample records for computer operating systems

  1. Computational Challenges for Power System Operation

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu; Liu, Yan; Rice, Mark J.; Jin, Shuangshuang

    2012-02-06

    As power grid technology evolution and the information technology revolution converge, power grids are undergoing a revolutionary transition, represented by emerging grid technologies and the large-scale deployment of new sensors and meters in networks. This transition brings opportunities, as well as computational challenges, in the field of power grid analysis and operation. This paper presents research outcomes in the areas of parallel state estimation using the preconditioned conjugate gradient method, parallel contingency analysis with a dynamic load-balancing scheme, and distributed system architecture. Based on this research, three types of computational challenges are identified: highly coupled applications, loosely coupled applications, and centralized and distributed applications. Recommendations for future work on power grid applications are also presented.
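    The parallel state estimation work above builds on the preconditioned conjugate gradient (PCG) method. As an illustration of the underlying numerics only (not the paper's parallel implementation), a minimal serial PCG with a Jacobi preconditioner might look like:

```python
# Minimal Jacobi-preconditioned conjugate gradient sketch (serial).
# The paper's parallel version distributes these vector and
# matrix-vector operations across processors.
def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A x, with x = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner: diag(A)^-1
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Solve a small symmetric positive-definite system: [[4,1],[1,3]] x = [1,2]
x = pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

    In state estimation the matrix is the (sparse, SPD) gain matrix; the parallelism studied in the paper comes from distributing exactly these inner products and matrix-vector products.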

  2. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance of three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events occurring) differing from the original execution, is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.
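    The process-pair mechanism that these measurements evaluate can be sketched abstractly: a primary process periodically checkpoints its state to a backup, and the backup resumes from the last checkpoint when the primary fails. The sketch below is purely illustrative (the class, names, and single-shot fault model are hypothetical); in Tandem systems the backup runs on a separate processor, which is precisely why transient software faults often do not recur:

```python
# Illustrative process-pair sketch: backup resumes from the last
# checkpoint when the primary fails mid-run.
class ProcessPair:
    def __init__(self, work):
        self.work = work        # function: state -> new state (may raise)
        self.checkpoint = None  # last state "sent" to the backup

    def run(self, state, steps):
        for _ in range(steps):
            self.checkpoint = state           # checkpoint before each step
            try:
                state = self.work(state)      # primary executes the step
            except Exception:
                # Primary "fails": the backup resumes from the checkpoint.
                # A transient fault tied to the primary's execution
                # environment need not recur in the backup's.
                state = self.work(self.checkpoint)
        return state

fail_once = {"armed": True}
def step(n):
    if fail_once["armed"] and n == 3:
        fail_once["armed"] = False            # transient fault: fires only once
        raise RuntimeError("transient software fault")
    return n + 1

pair = ProcessPair(step)
result = pair.run(0, 5)   # completes all 5 increments despite the fault
```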

  3. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both autonomy and cooperation between nodes, are developed. The requirements for time-critical performance and for reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system: its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs (concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc.) are considered to arrive at a feasible, maximum-performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time-critical messages. The architecture also supports immediate recovery for the time-critical message system after a communication failure.

  4. Demonstrating Operating System Principles via Computer Forensics Exercises

    ERIC Educational Resources Information Center

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram

    2010-01-01

    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  5. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in past years, with increasing scale and complexity, have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with a mandate to validate the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; to validate the distributed production system through functionality, reliability and scale tests; to help sites commission, configure and optimize their networking and storage through scale-testing data transfers and data processing; and to improve the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers that stress the experiment and Grid data management and workload management systems; site commissioning procedures and tools to monitor and improve site availability and reliability; and activities targeted at the commissioning of the distributed production, user analysis and monitoring systems.

  6. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  7. Development and operation of PDX neutral beam computer system

    SciTech Connect

    Kozub, T.; Rossmassler, J.E.; Eubank, H.P.; Kugel, H.W.; Schilling, G.; von Halle, A.; Williams, M.D.

    1981-01-01

    The Poloidal Divertor Experiment (PDX) is a tokamak experiment designed to study impurity control through the use of magnetic divertors, utilizing four neutral beams for heating. Each beamline is equipped with a 30 cm diameter ORNL source providing either 1.5 MW of H⁰ or 2.0 MW of D⁰. The four neutral beam injectors have reliably delivered 7 megawatts of neutral beam power into PDX. The PDX neutral beam computer system supports the operation of the beamlines, including ion sources and related diagnostics. A dedicated DEC PDP 11/34 computer is interfaced to the neutral beam components through a five-crate CAMAC parallel/serial highway system.

  8. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  9. An Operational Computational Terminal Area PBL Prediction System

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1998-01-01

    There are two fundamental goals of this research project, listed here in order of priority. The first and primary goal is to develop a prognostic system which could satisfy the operational weather prediction requirements of the meteorological subsystem within the Aircraft Vortex Spacing System (AVOSS), i.e., an operational computational Terminal Area PBL Prediction System (TAPPS). The second goal is to perform in-depth diagnostic analyses of the meteorological conditions during the special wake vortex deployments at Memphis and Dallas during August 1995 and September 1997, respectively. These two goals are interdependent because a thorough understanding of the atmospheric dynamical processes which produced the unique meteorology during the Memphis and Dallas deployments will help us design a prognostic system for the planetary boundary layer (PBL) which could be utilized to support the meteorological subsystem within AVOSS. Concerning the primary goal, TAPPS Stage 2 was tested on the Memphis data and is about to be tested on the Dallas case studies. Furthermore, benchmark tests have been undertaken to select the appropriate platform to run TAPPS in real time in support of the DFW AVOSS system. In addition, a technique to improve the initial data over the region surrounding Dallas was also tested and modified for potential operational use in TAPPS. The secondary goal involved several sensitivity simulations and comparisons to Memphis observational data sets in an effort to diagnose what specific atmospheric phenomena were occurring which may have impacted the dynamics of atmospheric wake vortices.

  10. Business/Office Occupations Data Processing--Data Processing Concepts, Data Entry Operator, Computer Operator, Computer Programmer, Systems Analyst.

    ERIC Educational Resources Information Center

    Tennessee State Dept. of Education, Nashville. Div. of Vocational-Technical Education.

    This data processing curriculum contains 23 units of instruction for an articulated program in the occupations of data processing. It consists of an introductory mini-series on data-processing concepts, as well as data entry operator, computer operator, programmer, and systems analyst units. Introductory materials include program goals and…

  11. The operation of large computer-controlled manufacturing systems

    SciTech Connect

    Upton, D.M.

    1988-01-01

    This work examines methods for operating large computer-controlled manufacturing systems comprising 50 or more disparate CNC machines. The central theme is the development of a distributed control system which requires minimal central supervision and allows manufacturing system re-configuration without extensive control software re-writes. Provision is made for machines to learn from their experience and provide estimates of the time necessary to effect various tasks. Routing is opportunistic, with varying degrees of myopia depending on the prevailing situation. Necessary curtailments of opportunism are built into the system, in order to provide a society of machines that operate in unison rather than in chaos. Negotiation and contention resolution are carried out using a UHF radio communications network, along with processing capability on both pallets and tools. Graceful and robust error recovery is facilitated by ensuring adequate pessimistic consideration of failure modes at each stage in the scheme. Theoretical models are developed and an examination is made of the fundamental characteristics of auction-based scheduling methods.
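    The auction-based scheduling this work analyzes can be illustrated with a toy contract-net-style routing loop: each machine bids its estimated completion time for an announced job, and the lowest bid wins. All names and numbers below are hypothetical, not the dissertation's model:

```python
# Toy auction-based job routing: machines bid estimated completion time;
# the lowest bidder is awarded the job. A machine's "learning" is reduced
# here to tracking its own backlog and processing speed.
class Machine:
    def __init__(self, name, speed):
        self.name, self.speed, self.queue_time = name, speed, 0.0

    def bid(self, job_size):
        # estimated completion = current backlog + estimated processing time
        return self.queue_time + job_size / self.speed

    def award(self, job_size):
        self.queue_time += job_size / self.speed

def auction(machines, job_size):
    winner = min(machines, key=lambda m: m.bid(job_size))
    winner.award(job_size)
    return winner.name

machines = [Machine("M1", speed=2.0), Machine("M2", speed=1.0)]
routing = [auction(machines, size) for size in (4.0, 4.0, 4.0)]
# The fast machine wins until its backlog makes the slow machine cheaper.
```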

  12. Modeling emergency department operations using advanced computer simulation systems.

    PubMed

    Saunders, C E; Makens, P K; Leblanc, L J

    1989-02-01

    We developed a computer simulation model of emergency department operations using simulation software. This model uses multiple levels of preemptive patient priority; assigns each patient to an individual nurse and physician; incorporates all standard tests, procedures, and consultations; and allows patient service processes to proceed simultaneously, sequentially, repetitively, or a combination of these. Selected input data, including the number of physicians, nurses, and treatment beds, and the blood test turnaround time, then were varied systematically to determine their simulated effect on patient throughput time, selected queue sizes, and rates of resource utilization. Patient throughput time varied directly with laboratory service times and inversely with the number of physician or nurse servers. Resource utilization rates varied inversely with resource availability, and patient waiting time and patient throughput time varied indirectly with the level of patient acuity. The simulation can be animated on a computer monitor, showing simulated patients, specimens, and staff members moving throughout the ED. Computer simulation is a potentially useful tool that can help predict the results of changes in the ED system without actually altering it and may have implications for planning, optimizing resources, and improving the efficiency and quality of care.
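    The reported relationship between server count and throughput time can be reproduced in miniature with a generic discrete-event queueing sketch. This is not the authors' model; it is an illustration of the simulation technique with assumed (exponential) service and arrival distributions and hypothetical parameter values:

```python
import heapq
import random

# Toy discrete-event sketch of an ED queue: patients arrive, wait for one
# of `servers` physicians, are treated, and leave. Throughput time should
# fall as servers are added, as the abstract reports.
def simulate(n_patients, servers, arrival_gap, service_time, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * servers                 # when each server is next free
    heapq.heapify(free_at)
    t, total = 0.0, 0.0
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / arrival_gap)     # next arrival time
        start = max(t, heapq.heappop(free_at))      # wait for earliest server
        finish = start + rng.expovariate(1.0 / service_time)
        heapq.heappush(free_at, finish)
        total += finish - t                         # this patient's throughput time
    return total / n_patients

few = simulate(2000, servers=2, arrival_gap=5.0, service_time=8.0)
many = simulate(2000, servers=4, arrival_gap=5.0, service_time=8.0)
# Mean throughput time drops when physician servers are added.
```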

  13. A Computer-Mediated Instruction System, Applied to Its Own Operating System and Peripheral Equipment.

    ERIC Educational Resources Information Center

    Winiecki, Roger D.

    Each semester students in the School of Health Sciences of Hunter College learn how to use a computer, how a computer system operates, and how peripheral equipment can be used. To overcome inadequate computer center services and equipment, programed subject matter and accompanying reference material were developed. The instructional system has a…

  14. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance, where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  15. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance, where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  16. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources themselves are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers), and algorithm- and/or data-control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant to the client rather than to the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  17. Operating Policies and Procedures of Computer Data-Base Systems.

    ERIC Educational Resources Information Center

    Anderson, David O.

    Speaking on the operating policies and procedures of computer data bases containing information on students, the author divides his remarks into three parts: content decisions, data base security, and user access. He offers nine recommended practices that should increase the data base's usefulness to the user community: (1) the cost of developing…

  18. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  19. Operating Systems.

    ERIC Educational Resources Information Center

    Denning, Peter J.; Brown, Robert L.

    1984-01-01

    A computer operating system spans multiple layers of complexity, from commands entered at a keyboard to the details of electronic switching. In addition, the system is organized as a hierarchy of abstractions. Various parts of such a system and system dynamics (using the Unix operating system as an example) are described. (JN)

  20. An Operational Computational Terminal Area PBL Prediction System

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.; Weglarz, Ronald P.; Hamilton, David W.

    1997-01-01

    There are two fundamental goals of this research project. The first and primary goal is to develop a prognostic system which could satisfy the operational weather prediction requirements of the meteorological subsystem within the Aircraft Vortex Spacing System (AVOSS). The secondary goal is to perform in-depth diagnostic analyses of the meteorological conditions affecting the Memphis field experiment held during August 1995. These two goals are interdependent because a thorough understanding of the atmospheric dynamical processes which produced the unique meteorology during the Memphis deployment will help us design a prognostic system for the planetary boundary layer (PBL) which could be utilized to support the meteorological subsystem within AVOSS. The secondary goal occupied much of the first year of the research project, involving extensive data acquisition and in-depth analyses of a spectrum of atmospheric observational data sets. Concerning the primary goal, the first part of the four-stage prognostic system in support of AVOSS, entitled the Terminal Area PBL Prediction System (TAPPS), was formulated and tested in a research environment during 1996. We describe this system and the three stages which are planned to follow. This first part of a software system designed to meet the primary goal of this research project is relatively inexpensive to implement and run operationally.

  1. Medical computing in the 1980s. Operating system and programming language issues.

    PubMed

    Greenes, R A

    1983-06-01

    Operating systems and programming languages differ widely in their suitability for particular applications. The diversity of medical computing needs demands a diversity of solutions. Compounding this diversity is the decentralization caused by the evolution of local computing systems for local needs. Relevant current trends in computing include increased emphasis on decentralization, growing capabilities for interconnection of diverse systems, and development of common data base and file server capabilities. In addition, standardization and hardware independence of operating systems, as well as programming languages and the development of programmerless systems, continue to widen application opportunities.

  2. General-purpose operating system kernel for a 32-bit computer system

    SciTech Connect

    Georg, D.D.; Osecky, B.D.; Scheid, S.D.

    1984-03-01

    The operating system kernel for the HP 9000 Series 500 computers efficiently supports the real-time requirements of the extended BASIC language environment as well as the multiuser requirements of HP-UX. The kernel provides efficient support for multiple processors; a process model that supports a large user process virtual address space; a virtual memory system that supports both paged and segmented virtual memory; memory and buffer management; and a device-independent file system capable of supporting multiple directory formats. The main objective of this operating system kernel, called SUN, is to provide a clean interface between the underlying hardware and application-level systems such as BASIC or HP-UX.

  3. CP/M: A Family of 8- and 16-Bit Computer Operating Systems.

    ERIC Educational Resources Information Center

    Kildall, Gary

    1982-01-01

    Traces the development of the CP/M (Control Program for Microcomputers) and MP/M (Multiprogramming Monitor for Microcomputers) operating systems by Gary Kildall of Digital Research Company. Discusses the adaptation of these operating systems to the newly emerging 16- and 32-bit microprocessors. (Author/LC)

  4. Computer algebra and operators

    NASA Technical Reports Server (NTRS)

    Fateman, Richard; Grossman, Robert

    1989-01-01

    The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.
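    One concrete instance of the normal forms and simplification algorithms described above is normal-ordering in the Weyl algebra, where the operators X (multiplication by x) and D (differentiation d/dx) satisfy the commutation rule DX = XD + 1. The small rewriter below is an illustrative sketch, not any specific computer algebra system's implementation:

```python
from collections import defaultdict

# Normal-order words in the operators 'X' and 'D' by repeatedly applying
# the rewrite rule D X -> X D + 1, until every X precedes every D.
def normal_order(expr):
    """expr: dict mapping words (tuples over 'X', 'D') to coefficients."""
    out = defaultdict(int)
    work = list(expr.items())
    while work:
        word, coeff = work.pop()
        for i in range(len(word) - 1):
            if word[i] == 'D' and word[i + 1] == 'X':
                head, tail = word[:i], word[i + 2:]
                work.append((head + ('X', 'D') + tail, coeff))  # X D term
                work.append((head + tail, coeff))               # +1 term
                break
        else:
            out[word] += coeff  # already normal-ordered
    return {w: c for w, c in out.items() if c}

# The defining relation itself: D X = X D + 1
result = normal_order({('D', 'X'): 1})
```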

  5. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operations Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. An analytical analysis of the ETHERNET LAN and the video terminal (VT) distribution system is presented. An interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated, is also presented.

  6. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mauldin, J.

    1984-01-01

    The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results of the simulation model for various system configurations were obtained. A tutorial of the model is presented and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switchover from contention to priority mode under high channel loading.

  7. System Analysis for the Huntsville Operation Support Center, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Massey, D.

    1985-01-01

    HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's non-mission activities. As mission and non-mission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision counts, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, as well as predicting possible overload conditions.

  8. The Relationship between Chief Information Officer Transformational Leadership and Computing Platform Operating Systems

    ERIC Educational Resources Information Center

    Anderson, George W.

    2010-01-01

    The purpose of this study was to relate the strength of Chief Information Officer (CIO) transformational leadership behaviors to 1 of 5 computing platform operating systems (OSs) that may be selected for a firm's Enterprise Resource Planning (ERP) business system. Research shows executive leader behaviors may promote innovation through the use of…

  9. Building a computer-aided design capability using a standard time share operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1975-01-01

    The paper describes how an integrated system of engineering computer programs can be built using a standard commercially available operating system. The discussion opens with an outline of the auxiliary functions that an operating system can perform for a team of engineers involved in a large and complex task. An example of a specific integrated system is provided to explain how the standard operating system features can be used to organize the programs into a simple and inexpensive but effective system. Applications to an aircraft structural design study are discussed to illustrate the use of an integrated system as a flexible and efficient engineering tool. The discussion concludes with an engineer's assessment of an operating system's capabilities and desirable improvements.

  10. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water-, energy-, space-, and cost-efficient system for growing plants in constrained spaces or land-exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., a peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plant growth in order to assure optimal nutritional uptake and tomato production.
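    Automatic calibration of an ion-selective electrode, as this platform performs before measuring, is commonly modeled by a Nernstian response E = E0 + S·log10(a). The two-point calibration sketch below is a generic illustration with assumed values (the 59.2 mV/decade ideal slope at 25 °C), not the authors' procedure:

```python
import math

# Hypothetical two-point ISE calibration assuming a Nernstian response
# E = E0 + S * log10(c): fit slope S and intercept E0 from two standards,
# then invert the relation to read concentration from a measured potential.
def calibrate(c1, e1, c2, e2):
    S = (e2 - e1) / (math.log10(c2) - math.log10(c1))  # slope, mV/decade
    E0 = e1 - S * math.log10(c1)                        # standard potential, mV
    return E0, S

def concentration(E, E0, S):
    return 10 ** ((E - E0) / S)

# K+ electrode with an ideal 59.2 mV/decade slope (assumed standards)
E0, S = calibrate(1e-3, 50.0, 1e-2, 109.2)
c = concentration(79.6, E0, S)   # a reading halfway between the standards
```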

  11. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
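    The frequency/period noise abstraction described above can be illustrated with a toy model: noise stretches each node's compute time, and a collective such as MPI_Bcast() completes only when the slowest node finishes, which is why unsynchronized noise hurts at scale. The numbers and the uniform-jitter model for random noise below are assumptions for illustration, not xSim's implementation:

```python
import random

# OS noise abstraction: noise with frequency freq_hz (occurrences per
# second) and period period_s (duration of each occurrence).
def stretched(work_s, freq_hz, period_s):
    # Each second of computation absorbs freq_hz occurrences of period_s noise.
    return work_s * (1.0 + freq_hz * period_s)

def collective_finish(node_work, freq_hz, period_s, synchronized, seed=0):
    rng = random.Random(seed)
    finishes = []
    for w in node_work:
        # Synchronized noise hits every node at the same time; random noise
        # adds per-node jitter (modeled here as a uniform offset).
        jitter = 0.0 if synchronized else rng.uniform(0.0, period_s)
        finishes.append(stretched(w, freq_hz, period_s) + jitter)
    # A collective completes only when the slowest node does.
    return max(finishes)

work = [1.0] * 1024    # 1 s of computation on each of 1024 nodes
sync = collective_finish(work, 10.0, 0.001, synchronized=True)
rand = collective_finish(work, 10.0, 0.001, synchronized=False)
```

    With many nodes, some node almost surely draws near-maximal jitter, so random noise delays the collective beyond the uniform stretch that synchronized noise causes.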

  12. Experiences with computer systems in blast furnace operation control at Rautaruukki

    SciTech Connect

    Inkala, P.; Karppinen, A. (Raahe Steel Works); Seppanen, M.

    1994-09-01

    Low energy consumption, together with high productivity and stable blast furnace operation, has been achieved at Rautaruukki's Raahe Steel Works as a result of the efficient use of computer technology in process control and improvements in raw materials quality. The blast furnace supervision system is designed to support decision-making in medium- and long-term process control. Information on blast furnace operating phenomena is grouped so that little time is needed to obtain the current state of the process. Because of the complexity of the blast furnace process, an expert system has been developed to guide and diagnose short- and medium-term blast furnace operation.

  13. Operation of a computer aided drafting system: improvements, results and hopes

    SciTech Connect

    Millaud, J.; Goulding, F.; Salz, P.; Shimada, K.

    1983-10-01

    A two workstation Computer Aided Drafting system has been in operation since September 1982 at the Lawrence Berkeley Laboratory, Department of Instrument Science and Engineering. Improvements made to the original hardware and software configuration are described. Benefits from this installation are reported and future developments are outlined.

  14. Fault-tolerant software - Experiment with the SIFT operating system. [Software Implemented Fault Tolerance computer]

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that, to effectively implement fault-tolerant software design techniques, system requirements will be impacted and suggest that retrofitting fault-tolerant software on existing designs will be inefficient and may require system modification.

  15. SAMO (Sistema de Apoyo Mechanizado a la Operacion): An operational aids computer system

    SciTech Connect

    Stormer, T.D.; Laflor, E.V.

    1989-01-01

    SAMO (Sistema de Apoyo Mechanizado a la Operacion) is a sensor-driven, computer-based, graphic display system designed by Westinghouse to aid the A. N. Asco operations staff during all modes of plant operations, including emergencies. The SAMO system is being implemented in the A. N. Asco plant in two phases that coincide with consecutive refueling outages for each of two nuclear units at the Asco site. Phase 1 of the SAMO system implements the following functions: (1) emergency operational aids, (2) postaccident monitoring, (3) plant graphics display, (4) high-speed transient analysis recording, (5) historical data collection, storage, and retrieval, (6) sequence of events, and (7) posttrip review. During phase 2 of the SAMO project, the current plant computer will be removed and the functions now performed by the plant computer will be performed by the SAMO system. In addition, the following functions will be implemented: (1) normal and simple transients operational aid, (2) plant information graphics; and (3) real-time radiological off-site dose calculation.

  16. A personal computer based interactive software for power system operation education

    SciTech Connect

    Hsu, Y.Y.; Yang, C.C.; Su, C.C.

    1992-11-01

    The use of personal-computer-based interactive software to aid instruction in power system operation is described in this paper. The software is designed as a teaching aid for the course Power System Operation at National Taiwan University. The main programs in the package are short-term load forecasting and unit commitment. Supporting routines include power flow analysis, static security assessment, small-signal stability analysis, and transient stability analysis. To promote the students' interest in the course, a user-friendly interface and interactive windows have been developed. The integrated software package has proved useful for educational and research purposes.

  17. Design of an air traffic computer simulation system to support investigation of civil tiltrotor aircraft operations

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1992-01-01

    This research project addresses the need to provide an efficient and safe mechanism to investigate the effects and requirements of the tiltrotor aircraft's commercial operations on air transportation infrastructures, particularly air traffic control. The mechanism of choice is computer simulation. Unfortunately, the fundamental paradigms of the current air traffic control simulation models do not directly support the broad range of operational options and environments necessary to study tiltrotor operations. Modification of current air traffic simulation models to meet these requirements does not appear viable given the range and complexity of issues needing resolution. As a result, the investigation of systemic, infrastructure issues surrounding the effects of tiltrotor commercial operations requires new approaches to simulation modeling. These models should be based on perspectives and ideas closer to those associated with tiltrotor air traffic operations.

  18. A COMPUTER-ASSIST MATERIAL TRACKING SYSTEM AS A CRITICALITY SAFETY AID TO OPERATORS

    SciTech Connect

    Claybourn, R V; Huang, S T

    2007-03-30

    In today's compliance-driven environment, fissionable material handlers are inundated with work control rules and procedures in carrying out nuclear operations. Historically, human error has been a key contributor to criticality accidents. Since moving and handling fissionable materials are key components of their job functions, any means provided to assist operators in facilitating fissionable material moves will help improve operational efficiency and enhance criticality safety implementation. From the criticality safety perspective, operational issues have been encountered in Lawrence Livermore National Laboratory (LLNL) plutonium operations. Those issues included a lack of adequate historical record keeping for the fissionable material stored in containers, a need for a better way of accommodating operations in a research and development setting, and better means of helping material handlers carry out various criticality safety controls. Through the years, effective measures were implemented, including a better work control process, standardized criticality control conditions (SCCC), and relocation of criticality safety engineers to the plutonium facility. Another important measure was the development of a computer data acquisition system for criticality safety assessment, which is the subject of this paper. The purpose of the Criticality Special Support System (CSSS) is to integrate many of the proven operational support protocols into a software system to assist operators in assessing compliance with procedures during the handling and movement of fissionable materials. Many nuclear facilities use mass cards or a computer program to track fissionable material mass data in operations. Additional item-specific data, such as the presence of moderators or close-fitting reflectors, could help fissionable material handlers assess compliance with SCCCs. Computer-assisted checking of a workstation material inventory against the

  19. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Operations concept report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  20. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353

  1. Study of the operation and maintenance of computer systems to meet the requirements of 10 CFR 73.55

    SciTech Connect

    Lewis, J.R.; Byers, K.R.; Fluckiger, J.D.; McBride, K.C.

    1986-01-01

    The Pacific Northwest Laboratory has studied the operation and maintenance of computer-managed systems that can help nuclear power plant licensees to meet the physical security requirements of 10 CFR 73.55 (for access control, alarm monitoring, and alarm recording). This report of that study describes a computer system quality assurance program that is based on a system of related internal controls. A discussion of computer system evaluation includes verification and validation mechanisms for assuring that requirements are stated and that the product fulfills these requirements. Finally, the report describes operator and security awareness training and a computer system preventive maintenance program. 24 refs.

  2. Combination of artificial intelligence and procedural language programs in a computer application system supporting nuclear reactor operations

    SciTech Connect

    Town, G.G.; Stratton, R.C.

    1985-01-01

    A computer application system is described which provides nuclear reactor power plant operators with an improved decision support system. This system combines traditional computer applications such as graphics display with artificial intelligence methodologies such as reasoning and diagnosis so as to improve plant operability. This paper discusses the issues, and a solution, involved with the system integration of applications developed using traditional and artificial intelligence languages.

  3. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    DOEpatents

    Tomkins, James L.; Camp, William J.

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  4. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, E. M.

    1983-01-01

    A simulation model was developed and programmed in three languages: BASIC, PASCAL, and SLAM. Two of the programs are included in this report, the BASIC and PASCAL versions; SLAM is not supported by NASA/MSFC facilities and hence was not included. The statistical results of simulations of the same HOSC system configurations are in good agreement with each other and with the operational statistics obtained from HOSC. Three variations of the most recent HOSC configuration were run, and some conclusions were drawn as to system performance under these variations.

  5. Common data buffer system. [communication with computational equipment utilized in spacecraft operations

    NASA Technical Reports Server (NTRS)

    Byrne, F. (Inventor)

    1981-01-01

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.

  6. Computer-Analyzed Custodial Operations.

    ERIC Educational Resources Information Center

    Ross, Dave

    1984-01-01

    A computer program utilized to generate detailed reports of labor requirements saved almost $700,000 in custodial labor costs for the Facilities Maintenance and Operations Division of Oakland County, Michigan. The program also is used for a quality-control audit system during inspections to evaluate cleaned rooms. (MLF)

  7. Using Computer Technology in the Automation of Clinical and Operating Systems in Emergency Medicine

    PubMed Central

    Guarisco, Joseph S.

    2001-01-01

    The practical application of Emergency Medicine throughout the country has historically been viewed by healthcare workers and patients as one of inefficiency and chaos. Believing that the practice of Emergency Medicine was, to the contrary, predictable, we at Ochsner felt that tremendous improvements in efficiency could be won if the vast amount of data generated in our experience of nearly 40,000 Emergency Department visits per year could be harvested. Such improvements would require the employment of computer technology and powerful database management systems. By applying these tools to profile the practice of Emergency Medicine in our institution, we were able to harvest important clinical and operational information that was ultimately used to improve department efficiency and productivity. The ability to analyze data and manage processes within the Emergency Department allowed us to target resources much more efficiently, significantly reducing nonproductive work. The collected data were sorted and filtered by a host of variables creating the ability to profile subsets of our practice—most importantly, physician practice habits and performance. Furthermore, the development of “patient tracking” software allowed us to update, view, and trend data in real-time and tweak clinical and operational processes simultaneously. The data-driven, analytical approach to the management of the Emergency Department has yielded significant improvements in service to our patients and lower operational costs. PMID:21765721

  8. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    ERIC Educational Resources Information Center

    He, Aiguo

    2011-01-01

    Computer system is useful for improving real time and interactive distance education activities. Especially in the case that a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions over the virtual classrooms. The problem is that within…

  9. YASS: A System Simulator for Operating System and Computer Architecture Teaching and Learning

    ERIC Educational Resources Information Center

    Mustafa, Besim

    2013-01-01

    A highly interactive, integrated and multi-level simulator has been developed specifically to support both the teachers and the learners of modern computer technologies at undergraduate level. The simulator provides a highly visual and user configurable environment with many pedagogical features aimed at facilitating deep understanding of concepts…

  10. Improved operating scenarios of the DIII-D tokamak as a result of the addition of UNIX computer systems

    SciTech Connect

    Henline, P.A.

    1995-10-01

    The increased use of UNIX-based computer systems for machine control, data handling, and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper describes some of these UNIX systems and their specific uses, including the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements, and the general data acquisition system (which collects over 130 Mbytes of data). The speed and total capability of these systems have dramatically affected the ability to operate DIII-D. The improved operating scenarios include better plasma shape control, due to the more thorough MHD calculations done between shots, and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analyses that enable improved operation are also described.

  11. Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.

    2000-01-01

    A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected; it required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity on the resulting statistical model. Thirteen statistically significant effects were observed to influence rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process directly downstream of the rocket nozzle can influence Isp efficiency. Numerical schlieren images and particle traces were used to further understand the physical phenomena behind several of the statistically significant results.
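The design-space arithmetic in this abstract can be checked directly: six variables at three levels give 3^6 = 729 candidate runs, while a full quadratic response surface (intercept, linear, bilinear, and curvilinear terms) needs only 28 coefficients, which is why a 36-run D-optimal subset suffices. A sketch of that bookkeeping (illustrative only; the D-optimal selection itself requires an exchange algorithm not shown here):

```python
from itertools import combinations, product

factors, levels = 6, (-1, 0, 1)           # six variables, three coded levels
design_space = list(product(levels, repeat=factors))
print(len(design_space))                   # 729 candidate combinations

# Coefficients in the quadratic response-surface model:
n_linear = factors                         # main effects
n_bilinear = len(list(combinations(range(factors), 2)))  # two-factor interactions
n_curvilinear = factors                    # squared terms
n_terms = 1 + n_linear + n_bilinear + n_curvilinear
print(n_terms)                             # 28, so 36 runs leave spare degrees of freedom
```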

  12. Whenever You Use a Computer You Are Using a Program Called an Operating System.

    ERIC Educational Resources Information Center

    Cook, Rick

    1984-01-01

    Examines design, features, and shortcomings of eight disk-based operating systems designed for general use that are popular or most likely to affect the future of microcomputing. Included are the CP/M family, MS-DOS, Apple DOS/ProDOS, Unix, Pick, the p-System, TRSDOS, and Macintosh/Lisa. (MBR)

  13. Digital computer operation of a nuclear reactor

    DOEpatents

    Colley, Robert W.

    1984-01-01

    A method is described for the safe operation of a complex system, such as a nuclear reactor, using a digital computer. The computer is supplied with a data base containing a list of the safe states of the reactor and a list of operating instructions for achieving a safe state. When the actual state of the reactor does not correspond to a listed safe state, the computer selects operating instructions to return the reactor to a safe state.
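The control scheme this patent describes (compare the actual state against a stored table of safe states; when it is not listed, look up the instructions that restore one) reduces to a table lookup. A minimal sketch, with entirely hypothetical state parameters and remedies:

```python
# Hypothetical safe-state table and remedies, for illustration only.
SAFE = {"rods": "in", "coolant": "on"}
REMEDY = {
    ("rods", "out"): "insert control rods",
    ("coolant", "off"): "start primary coolant pump",
}

def select_instructions(state):
    # Return the operating instructions needed to reach the listed safe
    # state; an empty list means the actual state is already safe.
    return [REMEDY[(param, value)]
            for param, value in state.items()
            if value != SAFE[param]]
```

For example, `select_instructions({"rods": "out", "coolant": "on"})` yields `["insert control rods"]`, while a state matching the safe-state table yields no instructions.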

  14. Digital computer operation of a nuclear reactor

    DOEpatents

    Colley, R.W.

    1982-06-29

    A method is described for the safe operation of a complex system, such as a nuclear reactor, using a digital computer. The computer is supplied with a data base containing a list of the safe states of the reactor and a list of operating instructions for achieving a safe state. When the actual state of the reactor does not correspond to a listed safe state, the computer selects operating instructions to return the reactor to a safe state.

  15. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  16. Towards an integral computer environment supporting system operations analysis and conceptual design

    NASA Technical Reports Server (NTRS)

    Barro, E.; Delbufalo, A.; Rossi, F.

    1994-01-01

    VITROCISET has developed in house a prototype tool named the System Dynamic Analysis Environment (SDAE) to support system engineering activities in the initial definition phase of a complex space system. The SDAE goal is to provide powerful means for the definition, analysis, and trade-off of operations and design concepts for the space and ground elements involved in a mission. For this purpose SDAE implements a dedicated modeling methodology based on the integration of different modern (static and dynamic) analysis and simulation techniques. The resulting 'system model' is capable of representing all the operational, functional, and behavioral aspects of the system elements that are part of a mission. The execution of customized model simulations enables: the validation of selected concepts with respect to mission requirements; the in-depth investigation of mission-specific operational and/or architectural aspects; and the early assessment of the performance required of the system elements to cope with mission constraints and objectives. Because of these characteristics, SDAE is particularly well suited to nonconventional or highly complex systems, which require a great analysis effort in their early definition stages. SDAE runs under PC Windows and is currently used by the VITROCISET system engineering group. This paper describes the main features of SDAE and shows some examples of tool output.

  17. Using the transportable, computer-operated, liquid-scintillator fast-neutron spectrometer system

    SciTech Connect

    Thorngate, J.H.

    1988-11-01

    When a detailed energy spectrum is needed for radiation-protection measurements from approximately 1 MeV up to several tens of MeV, organic-liquid scintillators make good neutron spectrometers. However, such a spectrometer requires a sophisticated electronics system and a computer to reduce the spectrum from the recorded data. Recently, we added a Nuclear Instrument Module (NIM) multichannel analyzer and a lap-top computer to the NIM electronics we have used for several years. The result is a transportable fast-neutron spectrometer system. The computer was programmed to guide the user through setting up the system, calibrating the spectrometer, measuring the spectrum, and reducing the data. Measurements can be made over three energy ranges, 0.6--2 MeV, 1.1--8 MeV, or 1.6--16 MeV, with the spectrum presented in 0.1-MeV increments. Results can be stored on a disk, presented in a table, and shown in graphical form. 5 refs., 51 figs.
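The 0.1-MeV presentation increments mentioned above amount to histogramming events over the selected energy range. The sketch below shows only that binning step, with made-up values; it is not the pulse-shape discrimination or spectrum unfolding that liquid-scintillator data actually require first:

```python
def bin_spectrum(energies, e_min=0.6, e_max=2.0, width=0.1):
    # Tally neutron events (in MeV) into 0.1-MeV increments over one of
    # the instrument's ranges (here the 0.6-2 MeV range); events outside
    # the range are discarded.
    n = round((e_max - e_min) / width)
    counts = [0] * n
    for e in energies:
        if e_min <= e < e_max:
            counts[int((e - e_min) / width)] += 1
    return counts
```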

  18. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) that services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study that supports this application of queueing models are presented.
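A rough sanity check on this kind of model, using a crude effective-service-rate approximation rather than the paper's Laplace-transform derivation, is to degrade the M/M/1 service rate by the fraction of CPU the periodic time-critical process consumes:

```python
def mean_response_time(lam, mu, period, burst):
    # Crude approximation (not the paper's result): the time-critical
    # process steals `burst` seconds out of every `period`, so background
    # work sees an effective service rate mu_eff = mu * (1 - burst/period).
    # With rho = lam/mu_eff < 1, M/M/1 gives W = 1 / (mu_eff - lam).
    mu_eff = mu * (1.0 - burst / period)
    if lam >= mu_eff:
        raise ValueError("background queue is unstable")
    return 1.0 / (mu_eff - lam)
```

With no interrupts (`burst = 0`) this reduces to the classical M/M/1 mean response time 1/(mu - lam); as the interrupt duty cycle grows, background response time degrades and eventually the queue saturates.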

  19. Operator Station Design System - A computer aided design approach to work station layout

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.

    1979-01-01

    The Operator Station Design System is resident in NASA's Johnson Space Center Spacecraft Design Division Performance Laboratory. It includes stand-alone minicomputer hardware and Panel Layout Automated Interactive Design and Crew Station Assessment of Reach software. The data base consists of the Shuttle Transportation System Orbiter Crew Compartment (in part), the Orbiter payload bay and remote manipulator (in part), and various anthropometric populations. The system is utilized to provide panel layouts, assess reach and vision, determine interference and fit problems early in the design phase, study design applications as a function of anthropometric and mission requirements, and to accomplish conceptual design to support advanced study efforts.

  20. Design of an air traffic computer simulation system to support investigation of civil tiltrotor aircraft operations

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1993-01-01

    The TATSS Project's goal was to develop a design for computer software that would support the attainment of the following objectives for the air traffic simulation model: (1) Full freedom of movement for each aircraft object in the simulation model. Each aircraft object may follow any designated flight plan or flight path necessary as required by the experiment under consideration. (2) Object position precision up to +/- 3 meters vertically and +/- 15 meters horizontally. (3) Aircraft maneuvering in three space with the object position precision identified above. (4) Air traffic control operations and procedures. (5) Radar, communication, navaid, and landing aid performance. (6) Weather. (7) Ground obstructions and terrain. (8) Detection and recording of separation violations. (9) Measures of performance including deviations from flight plans, air space violations, air traffic control messages per aircraft, and traditional temporal based measures.

  1. Potential applications of artificial intelligence in computer-based management systems for mixed waste incinerator facility operation

    SciTech Connect

    Rivera, A.L.; Singh, S.P.N.; Ferrada, J.J.

    1991-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site, designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). Operation of the TSCA Incinerator is highly constrained as a result of regulatory, institutional, technical, and resource availability requirements. This presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. This paper describes mixed waste incinerator facility performance-oriented tasks that could be assisted by artificial intelligence (AI) and the requirements for AI tools that would implement these algorithms in a computer-based system. 4 figs., 1 tab.

  2. A Prototype System for a Computer-Based Statewide Film Library Network: A Model for Operation. Statewide Film Library Network: System-1 Specifications - Files.

    ERIC Educational Resources Information Center

    Sullivan, Todd

    Using an IBM System/360 Model 50 computer, the New York Statewide Film Library Network schedules film use, reports on materials handling and statistics, and provides for interlibrary loan of films. Communications between the film libraries and the computer are maintained by Teletype model 33 ASR Teletypewriter terminals operating on TWX…

  3. Computer controlled antenna system

    NASA Technical Reports Server (NTRS)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  4. Operations management system

    NASA Technical Reports Server (NTRS)

    Brandli, A. E.; Eckelkamp, R. E.; Kelly, C. M.; Mccandless, W.; Rue, D. L.

    1990-01-01

    The objective of an operations management system is to provide an orderly and efficient method to operate and maintain aerospace vehicles. Concepts for an operations management system are described, and the key technologies that will be required to bring this capability to fruition are highlighted. Without this automation and decision-aiding capability, the growing complexity of avionics will result in an unmanageable workload for the operator, ultimately threatening mission success or the survivability of the aircraft or space system. The key technologies include expert system application to operational tasks such as replanning, equipment diagnostics and checkout, global system management, and advanced man-machine interfaces. The economical development of operations management systems, which are largely software, will require advancements in other technological areas such as software engineering and computer hardware.

  5. A VIRTUAL OPERATING SYSTEM

    SciTech Connect

    Hall, Dennis E.; Scherrer, Deborah K.; Sventek, Joseph S.

    1980-05-01

    Significant progress toward disentangling computing environments from their underlying operating system has been made. An approach is presented that achieves inter-system uniformity at all three levels of user interface - virtual machine, utilities, and command language. Under specifiable conditions, complete uniformity is achievable without disturbing the underlying operating system. The approach permits accurate computation of the cost of moving both people and software to a new system: the cost of moving people is zero, and the cost of moving software is equal to the cost of implementing a virtual machine. Efficiency is achieved through optimization of the primitive functions.

  6. Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance

    SciTech Connect

    Rosen, P.B.

    1994-06-01

    The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via an RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
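
    The report-by-exception scan and 10-minute averaging described above reduce to a few lines of logic; a minimal Python sketch, assuming hypothetical tag names rather than the actual Kerman point list:

```python
import statistics

def poll_changes(previous, current):
    """Report-by-exception: keep only points whose value changed since the last scan."""
    return {tag: value for tag, value in current.items() if previous.get(tag) != value}

def ten_minute_average(scans):
    """Average each analog point over one 10-minute window of recorded scans."""
    tags = {tag for scan in scans for tag in scan}
    return {tag: statistics.mean(scan[tag] for scan in scans if tag in scan)
            for tag in tags}

# Two successive scans with hypothetical tags (not actual Kerman point names)
prev = {"irradiance": 812.0, "inverter_on": 1}
curr = {"irradiance": 815.5, "inverter_on": 1}
changed = poll_changes(prev, curr)   # only the changed point is stored
```

    Storing only changed points keeps the serial link and the historical database small, at the cost of having to carry the last known value forward when reconstructing a full snapshot.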

  7. Laser performance operations model (LPOM): The computational system that automates the setup and performance analysis of the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Shaw, Michael; House, Ronald

    2015-02-01

    The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8 MJ, 500-TW, 351-nm laser system together with a 10-m diameter target chamber with room for many target diagnostics. NIF is the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed that automates the laser setup process and accurately predicts laser energetics. LPOM uses diagnostic feedback from previous NIF shots to maintain accurate energetics models (gains and losses), as well as links to operational databases to provide `as currently installed' optical layouts for each of the 192 NIF beamlines. LPOM deploys a fully integrated laser physics model, the Virtual Beamline (VBL), in its predictive calculations in order to meet the accuracy requirements of NIF experiments, and to provide the ability to determine the damage risk to optical elements throughout the laser chain. LPOM determines the settings of the injection laser system required to achieve the desired laser output, provides equipment protection, and determines the diagnostic setup. Additionally, LPOM provides real-time post-shot data analysis and reporting for each NIF shot. The LPOM computational system is designed as a multi-host computational cluster (with 200 compute nodes, providing the capability to run full NIF simulations fully parallel) to meet the demands of both the controls systems within a shot cycle, and the NIF user community outside of a shot cycle.

  8. A web-based remote radiation treatment planning system using the remote desktop function of a computer operating system: a preliminary report.

    PubMed

    Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki

    2009-01-01

    We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built in to the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable. PMID:19948709

  10. Measurement-based analysis of error latency. [in computer operating system

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
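
    The latency statistics quoted above (the fraction of errors detected within one, two, or three days of occurrence) reduce to a simple computation over measured latencies; a sketch using invented sample data, not the VAX 11/780 measurements:

```python
def detection_fractions(latencies_hours, horizons_days=(1, 2, 3)):
    """Fraction of errors detected within each horizon (in days) of occurrence."""
    n = len(latencies_hours)
    return {d: sum(1 for t in latencies_hours if t <= d * 24) / n
            for d in horizons_days}

# Hypothetical error latencies in hours (illustrative only)
latencies = [2, 30, 5, 60, 100, 12, 40, 26, 70, 8]
fracs = detection_fractions(latencies)
```

    Applied to the paper's measured latencies, this kind of tabulation yields the reported 70/82/91 percent detection figures.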

  11. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  12. Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170174 computers ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  13. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  14. Description and theory of operation of the computer by-pass system for the NASA F-8 digital fly-by-wire control system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A triplex digital flight control system was installed in a NASA F-8C airplane to provide fail-operate, full-authority control. The triplex digital computers and interface circuitry process the pilot commands and aircraft motion feedback parameters according to the selected control laws, and they output the surface commands as an analog signal to the servoelectronics for position control of the aircraft's power actuators. The system and theory of operation of the computer by-pass and servoelectronics are described, and an automated ground test for each axis is included.

  15. Development and operation of a prototype cone-beam computed tomography system for X-ray medical imaging

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Kim, Ryun Kyung; Kim, Cho-Rong; Yang, Keedong; Huh, Young; Jeon, Sungchae; Park, Justin C.; Song, Bongyong; Song, William Y.

    2014-01-01

    This paper describes the development of a prototype cone-beam computed tomography (CBCT) system for clinical use. The overall system design, in terms of physical characteristics, geometric calibration methods, and three-dimensional image reconstruction algorithms, is described. Our system consists of an X-ray source and a large-area flat-panel detector with an axial dimension large enough for most clinical applications when acquired in a full gantry rotation mode. Various elaborate methods are applied to measure, analyze and calibrate the system for imaging. The synchronized control of the electromechanical and radiographic subsystems covers: gantry rotation and speed, the tube rotor, the high-frequency generator (kVp, mA, exposure time and repetition rate), and the reconstruction server (image acquisition and reconstruction). The operator can select between analytic and iterative reconstruction methods. Our prototype system contains the latest hardware and reconstruction algorithms and, thus, represents a step forward in CBCT technology.

  16. ALMA correlator computer systems

    NASA Astrophysics Data System (ADS)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.

  17. Co-Operative Development Programme on Computer-Based Learning Systems for Universities.

    ERIC Educational Resources Information Center

    Unite de Coordination de la Documentation et d'Incitation a la Recherche, Louvain (Belgium).

    The Documentation Coordination and Research Incentive Unit (UCODI) is part of an international effort to increase cooperation and coordination in investigations of the possibilities of computer-assisted instruction by providing documentation, information, coordination of research incentives, advice, and assistance to researchers, teachers, and…

  18. Visions image operating system

    SciTech Connect

    Kohler, R.R.; Hanson, A.R.

    1982-01-01

    The image operating system is a complete software environment specifically designed for dynamic experimentation in scene analysis. The IOS consists of a high-level interpretive control language (LISP) with efficient image operators in a noninterpretive language. The image operators are viewed as local operators to be applied in parallel at all pixels to a set of input images. In order to carry out complex image analysis experiments, an environment conducive to such experimentation was needed. This environment is provided by the Visions image operating system, based on a computational structure known as a processing cone proposed by Hanson and Riseman (1974, 1980) and implemented on a VAX-11/780 running VMS. 6 references.

  19. An Operational System for Subject Switching between Controlled Vocabularies: A Computational Linguistics Approach.

    ERIC Educational Resources Information Center

    Silvester, June P.; And Others

    This report describes a new automated process that pioneers full-scale operational use of subject switching by the NASA (National Aeronautics and Space Administration) Scientific and Technical Information (STI) Facility. The subject switching process routinely translates machine-readable subject terms from one controlled vocabulary into the…

  20. Pyrolaser Operating System

    NASA Technical Reports Server (NTRS)

    Roberts, Floyd E., III

    1994-01-01

    Software provides for control and acquisition of data from optical pyrometer. There are six individual programs in PYROLASER package. Provides quick and easy way to set up, control, and program standard Pyrolaser. Temperature and emissivity measurements either collected as if Pyrolaser in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. Shell supplied to allow macros, which are test-specific, added to system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun Sparc-series computers running Open-Windows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.

  1. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user supplied codes that contain problem dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  2. An operational system for subject switching between controlled vocabularies: A computational linguistics approach

    NASA Technical Reports Server (NTRS)

    Silvester, J. P.; Newton, R.; Klingbiel, P. H.

    1984-01-01

    The NASA Lexical Dictionary (NLD), a system that automatically translates input subject terms to those of NASA, was developed in four phases. Phase One provided Phrase Matching, a context sensitive word-matching process that matches input phrase words with any NASA Thesaurus posting (i.e., index) term or Use reference. Other Use references have been added to enable the matching of synonyms, variant spellings, and some words with the same root. Phase Two provided the capability of translating any individual DTIC term to one or more NASA terms having the same meaning. Phase Three provided NASA terms having equivalent concepts for two or more DTIC terms, i.e., coordinations of DTIC terms. Phase Four was concerned with indexer feedback and maintenance. Although the original NLD construction involved much manual data entry, ways were found to automate nearly all but the intellectual decision-making processes. In addition to finding improved ways to construct a lexical dictionary, applications for the NLD have been found and are being developed.
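
    The Phrase Matching and term-translation phases amount to a lookup from input terms to one or more NASA Thesaurus terms; a minimal sketch with a hypothetical dictionary fragment (not actual NLD entries):

```python
# Hypothetical fragment of a lexical dictionary: input (DTIC-style) subject
# terms mapped to one or more NASA Thesaurus terms.
LEXICAL_DICTIONARY = {
    "aeroplanes": ["AIRCRAFT"],          # variant spelling
    "airplanes": ["AIRCRAFT"],           # synonym
    "guided missiles": ["MISSILES", "GUIDANCE (MOTION)"],  # one term -> coordination
}

def switch_subject(term):
    """Translate an input subject term; unmatched terms return None and would
    be routed to indexer feedback (Phase Four) for manual resolution."""
    return LEXICAL_DICTIONARY.get(term.lower())
```

    The case-folding step stands in for the context-sensitive word matching the abstract describes; the real system also handles root forms, not shown here.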

  3. GT-MSOCC - A domain for research on human-computer interaction and decision aiding in supervisory control systems. [Georgia Tech - Multisatellite Operations Control Center

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1987-01-01

    The Georgia Tech-Multisatellite Operations Control Center (GT-MSOCC), a real-time interactive simulation of the operator interface to a NASA ground control system for unmanned earth-orbiting satellites, is described. The GT-MSOCC program for investigating a range of modeling, decision aiding, and workstation design issues related to the human-computer interaction is discussed. A GT-MSOCC operator function model is described in which operator actions, both cognitive and manual, are represented as the lowest level discrete control network nodes, and operator action nodes are linked to information needs or system reconfiguration commands.

  4. Microrover Operates With Minimal Computation

    NASA Technical Reports Server (NTRS)

    Miller, David P.; Loch, John L.; Gat, Erann; Desai, Rajiv S.; Angle, Colin; Bickler, Donald B.

    1992-01-01

    Small, light, highly mobile robotic vehicles called "microrovers" use sensors and artificial intelligence to perform complicated tasks autonomously. Vehicle navigates, avoids obstacles, and picks up objects using reactive control scheme selected from among few preprogrammed behaviors to respond to environment while executing assigned task. Under development for exploration and mining of other planets. Also useful in firefighting, cleaning up chemical spills, and delivering materials in factories. Reactive control scheme and principle of behavior-description language useful in reducing computational loads in prosthetic limbs and automotive collision-avoidance systems.

  5. Operator control of interneural computing machines.

    PubMed

    Shih, Mau-Hsiang; Tsai, Feng-Sheng

    2013-12-01

    A dynamic representation of neural population responses asserts that motor cortex is a flexible pattern generator sending rhythmic, oscillatory signals to generate multiphasic patterns of movement. This raises a question concerning the design and control of new computing machines that mimic the oscillatory patterns and multiphasic patterns seen in neural systems. To address this issue, we design an interneural computing machine (INCM) made of plastic random interneural connections. We develop a mechanical way to measure collective ensemble firing of neurons in INCM. Two sorts of plasticity operators are derived from the measure of synchronous neural activity and the measure of self-sustaining neural activity, respectively. Such plasticity operators conduct activity-dependent operation to modify the network structure of INCM. The activity-dependent operation meets the neurobiological perspective of Hebbian synaptic plasticity and displays the tendency toward circulation breaking aiming to control neural population dynamics. We call such operation operator control of INCM and develop a population analysis of operator control for measuring how well single neurons of INCM can produce rhythmic, oscillatory activity, but at the level of neural ensembles, generate multiphasic patterns of population responses.

  6. Broadcasting collective operation contributions throughout a parallel computer

    DOEpatents

    Faraj, Ahmad

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
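
    The two-phase scheme in this claim (intra-node exchange first, then a serial per-processor inter-node transmission) can be simulated in a few lines of plain Python; a sketch of the data movement, not the patented implementation:

```python
def broadcast_contributions(nodes):
    """Simulate the two-phase broadcast.
    nodes: one inner list of per-processor contributions per compute node."""
    # Phase 1: intra-node exchange; each node collects its own processors' values.
    gathered = [list(procs) for procs in nodes]
    # Phase 2: inter-node; each processor, in a serial transmission sequence,
    # sends its contribution to every other compute node over its network link.
    for i, procs in enumerate(nodes):
        for value in procs:                  # serial processor transmission sequence
            for j in range(len(nodes)):
                if j != i:
                    gathered[j].append(value)
    return gathered                          # every node now holds all contributions
```

    The point of the serial sequence is that each processor gets exclusive use of the node's designated network link in turn, so intra-node and inter-node traffic do not contend.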

  7. Computer control for remote wind turbine operation

    SciTech Connect

    Manwell, J.F.; Rogers, A.L.; Abdulwahid, U.; Driscoll, J.

    1997-12-31

    Lightweight wind turbines located in harsh, remote sites require particularly capable controllers. Based on extensive operation of the original ESI-807 moved to such a location, a much more sophisticated controller than the original one has been developed. This paper describes the design, development and testing of that new controller. The complete control and monitoring system consists of sensor and control inputs, the control computer, control outputs, and additional equipment. The control code was written in Microsoft Visual Basic on a PC type computer. The control code monitors potential faults and allows the turbine to operate in one of eight states: off, start, run, freewheel, low wind shutdown, normal wind shutdown, emergency shutdown, and blade parking. The controller also incorporates two "virtual wind turbines," including a dynamic model of the machine, for code testing. The controller can handle numerous situations for which the original controller was unequipped.
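
    The eight-state supervisory logic lends itself to a simple state machine; a minimal sketch covering a few of the transitions, with hypothetical cut-in and cut-out thresholds (the original Visual Basic code is not reproduced here):

```python
STATES = {"off", "start", "run", "freewheel", "low_wind_shutdown",
          "normal_wind_shutdown", "emergency_shutdown", "blade_parking"}

def next_state(state, wind_speed, fault, cut_in=4.0, cut_out=25.0):
    """One supervisory step: a fault forces an emergency shutdown; otherwise
    the turbine runs between the cut-in and cut-out wind speeds."""
    if fault:
        return "emergency_shutdown"
    if state == "off" and wind_speed >= cut_in:
        return "start"                       # begin the startup sequence
    if state == "start" and wind_speed >= cut_in:
        return "run"
    if state == "run" and wind_speed < cut_in:
        return "low_wind_shutdown"
    if state == "run" and wind_speed > cut_out:
        return "normal_wind_shutdown"
    return state                             # no transition this step
```

    Replaying recorded or simulated wind and fault traces through such a function is essentially what the controller's two "virtual wind turbines" enable for code testing.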

  8. IMES-Ural: the system of the computer programs for operational analysis of power flow distribution using telemetric data

    SciTech Connect

    Bogdanov, V.A.; Bol'shchikov, A.A.; Zifferman, E.O.

    1981-02-01

    A system of computer programs was described which enabled the user to perform real-time calculation and analysis of the current flow in the 500 kV network of the Ural Regional Electric Power Plant for all possible variations of the network, based on teleinformation and correctable equivalent parameters of the 220 to 110 kV network.

  9. KS-FSOPS: A computer-aided simulation system for the in-core fuel shuffling operation for Taipower`s Kuosheng nuclear power plant

    SciTech Connect

    Kuo, W.S.; Song, T.C.

    1996-08-01

    A computer-aided simulation system for the in-core refueling shuffle operation was developed for the Kuosheng nuclear power plant of Taiwan Power Company. With this specially designed system (KS-FSOPS), the complete and complex fuel shuffling sequences can be clearly and vividly displayed with color graphics on a personal computer. Nuclear engineers can use KS-FSOPS to simulate the process of fuel shuffling operation, identify the potential safety problems which cannot be easily found manually, and simultaneously monitor the shuffling sequences with on-site operation in the refueling building. In effect, the traditional but inefficient take-board display can be replaced with this system. Developed on the Windows 3.1 environment and implemented on an 80486 personal computer, KS-FSOPS is a handy and stable tool to assist nuclear engineers in the refueling operation. Potential safety issues, such as the constraint of cold shutdown margin, the falling of control rods, the restriction of control rod withdrawal, and the correctness of shuffling positions, are continuously checked during the refueling operation. KS-FSOPS has been used in the most recent refueling outage for the Kuosheng nuclear power plant. In the near future, the system will be extended to other Taipower's nuclear power plants.
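
    The kind of constraint checking KS-FSOPS performs can be illustrated by validating a single shuffle move against the current core map and the planned sequence; a sketch with hypothetical position and assembly names, covering only the shuffle-position check (not the shutdown-margin or control-rod checks):

```python
def check_shuffle_move(move, core_map, planned):
    """Return the list of violations for one shuffle step.
    move: (assembly_id, from_position, to_position); names are hypothetical."""
    assembly, src, dst = move
    errors = []
    if core_map.get(src) != assembly:
        errors.append("assembly not at expected source position")
    if core_map.get(dst) is not None:
        errors.append("destination position already occupied")
    if planned.get(assembly) != dst:
        errors.append("destination differs from planned shuffle position")
    return errors

# A two-position core fragment: assembly F01 at A1, position B2 empty
core = {"A1": "F01", "B2": None, "C3": None}
plan = {"F01": "B2"}
```

    Running every proposed step through such checks, continuously during the outage, is what lets the system flag mistakes before an assembly is actually moved.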

  10. Computer Center: CIBE Systems.

    ERIC Educational Resources Information Center

    Crovello, Theodore J.

    1982-01-01

    Differentiates between computer systems and Computers in Biological Education (CIBE) systems (computer system intended for use in biological education). Describes several CIBE stand alone systems: single-user microcomputer; single-user microcomputer/video-disc; multiuser microcomputers; multiuser maxicomputer; and local and long distance computer…

  11. Continuity of computer-aided drafting operations

    SciTech Connect

    Jacobson, L.D.

    1987-09-01

    The operating performance, operating procedures, and equipment added are discussed for the Computer Aided Drafting (CAD) operation at UNC Nuclear Industries before consolidation of operating contracts at the US Department of Energy (DOE) facilities located at the Hanford Site, near Richland, Washington.

  12. Advanced Operating System Technologies

    NASA Astrophysics Data System (ADS)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and current research work in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, video on demand, and distributed multimedia applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering areas. The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control system suitable for LHC.

  13. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1984-01-01

    This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.
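
    The atomic-action idea, in which a group of updates either completes or leaves no visible effect, can be illustrated with a checkpoint-and-rollback sketch (a deliberate simplification of the report's object-based model):

```python
import copy
from contextlib import contextmanager

@contextmanager
def atomic_action(state):
    """Run a block of updates on `state` as an atomic action: on any
    exception, roll back to the checkpoint so no partial results remain."""
    checkpoint = copy.deepcopy(state)
    try:
        yield state
    except Exception:
        state.clear()
        state.update(checkpoint)             # restore the consistent snapshot
        raise

# Usage: a fault mid-action leaves the state exactly as it was
system_state = {"valve": "closed"}
try:
    with atomic_action(system_state) as s:
        s["valve"] = "open"
        raise RuntimeError("simulated fault before commit")
except RuntimeError:
    pass
```

    Real embedded systems cannot afford deep copies of arbitrary state, so practical designs use logs or shadow versions instead; the commit-or-rollback contract is the same.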

  14. Computer-assisted stereotactic neurological surgery: pre-planning and on-site real-time operating control and simulation system

    NASA Astrophysics Data System (ADS)

    Zamorano, Lucia J.; Jiang, Charlie Z. W.

    1993-09-01

    In this decade the concept and development of computer assisted stereotactic neurological surgery has improved dramatically. First, the computer network replaced the tape as the data transportation media. Second, newer systems include multi-modality image correlation and frameless stereotactics as an integral part of their functionality, and offer extensive assistance to the neurosurgeon from the preplanning stages to and throughout the operation itself. These are very important changes, and have spurred the development of many interesting techniques. Successful systems include the ISG and NSPS-3.0.

  15. Computer Assisted Operations: Registration Records, Schedules

    ERIC Educational Resources Information Center

    College and University, 1977

    1977-01-01

    Proceedings of AACRAO's 63rd annual meeting cover: computer networking in small colleges; continuous registration; computer logic; computerized academic record overview; on-line registration systems; and analysis of registration and records systems. (LBH)

  16. Planning Systems for Distributed Operations

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.

    2002-01-01

    This viewgraph representation presents an overview of the mission planning process involving distributed operations (such as the International Space Station (ISS)) and the computer hardware and software systems needed to support such an effort. Topics considered include: evolution of distributed planning systems, ISS distributed planning, the Payload Planning System (PPS), future developments in distributed planning systems, Request Oriented Scheduling Engine (ROSE) and Next Generation distributed planning systems.

  17. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based system is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  18. Payload operation television system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Payload Operation Television System is a high performance closed-circuit TV system designed to determine the feasibility of using TV to augment purely visual monitoring of operations, and to establish optimum system design of an operating unit which can ultimately be used to assist the operator of a remotely manipulated space-borne cargo loading device. The TV system assembled on this program is intended for laboratory experimentation which would develop operational techniques and lead to the design of space-borne TV equipment whose purpose would be to assist the astronaut-operator aboard a space station to load payload components. The equipment consists principally of a good quality TV camera capable of high resolving power; a TV monitor; a sync generator for driving camera and monitor; and two pan/tilt units which are remotely controlled by the operator.

  19. Multitasking operating systems for microprocessors

    SciTech Connect

    Cramer, T.

    1981-01-01

    Microprocessors, because of their low cost, low power consumption, and small size, have caused an explosion in the number of innovative computer applications. Although there is a great deal of variation in microprocessor applications software, there is relatively little variation in the operating-system-level software from one application to the next. Nonetheless, operating system software, especially when multitasking is involved, can be very time consuming and expensive to develop. The major microprocessor manufacturers have acknowledged the need for operating systems in microprocessor applications and are now supplying real-time multitasking operating system software that is adaptable to a wide variety of user systems. Use of this existing operating system software will decrease the number of redundant operating system development efforts, thus freeing programmers to work on more creative and productive problems. This paper discusses the basic terminology and concepts involved with multitasking operating systems. It is intended to provide a general understanding of the subject, so that the reader will be prepared to evaluate specific operating system software according to his or her needs. 2 references.

  20. Computers in design construction and operation of automobiles

    SciTech Connect

    Murthy, T.K.; Brebbia, C.A.

    1987-01-01

    The advantages of computer-aided systems in the automotive industry are now widely recognised. Computers are being used from initial conceptual design and feasibility study to the actual construction and ultimate operation of cars on the road. In the field of car body design, for example, the extensive development of numerical computational methods means that computers are replacing traditional, more expensive testing methods such as wind-tunnel testing. The papers contained in this book are related to the use of computers in such diverse applications as solid modelling, computational fluid dynamics, computer simulation, engine dynamics, and other topics in design.

  1. Versados (operating system)

    SciTech Connect

    Glaser, J.G.

    1981-01-01

    Versados is a multitasking operating system designed to meet the requirements of the real-time, online control system environment as well as to support the multiuser software-hardware engineering effort required to develop microprocessor-based systems. Versados serves as a major software building block for real-time applications which use the Motorola MC68000 microprocessor and Versamodule board products. It is a modular, multilayered operating system.

  2. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVAX II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation and, to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before that system was removed from the laboratory. These systems were studied to determine their capability to support Space Station software development requirements, specifically for multitasking and real-time applications. The methodology consisted of the development, execution, and analysis of benchmark programs and test software, together with experimentation on and analysis of specific features of the systems and compilers under study.

  3. Operator Performance Support System (OPSS)

    NASA Technical Reports Server (NTRS)

    Conklin, Marlen Z.

    1993-01-01

    In the complex and fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the technologies that confound many operators today can also be used to assist them -- thus the Operator Performance Support System. In this paper we propose an operator support station that incorporates the elements of Video and Image Databases, Productivity Software, Interactive Computer-Based Training, Hypertext/Hypermedia Databases, Expert Programs, and Human Factors Engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that will guide an expert or a novice to correct systems operation. Although the OPSS is being developed for the Navy, the performance of the workforce in today's competitive industry is of major concern. The concepts presented in this paper, which address ASW systems software design issues, are also directly applicable to industry. The OPSS proposes practical applications for more closely aligning technical knowledge with equipment operator performance.

  4. Computer security in DOE distributed computing systems

    SciTech Connect

    Hunteman, W.J.

    1990-01-01

    The modernization of DOE facilities amid limited funding is creating pressure on DOE facilities to find innovative approaches to their daily activities. Distributed computing systems are becoming cost-effective solutions to improved productivity. This paper defines and describes typical distributed computing systems in the DOE. The special computer security problems present in distributed computing systems are identified and compared with traditional computer systems. The existing DOE computer security policy supports only basic networks and traditional computer systems and does not address distributed computing systems. A review of the existing policy requirements is followed by an analysis of the policy as it applies to distributed computing systems. Suggested changes in the DOE computer security policy are identified and discussed. The long lead time in updating DOE policy will require guidelines for applying the existing policy to distributed systems. Some possible interim approaches are identified and discussed. 2 refs.

  5. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) the Modified Transfer Function Program (TF); (2) the Time Varying Response Program (TVSR); (3) the Optimal Simulation Program (TVOPT); and (4) the Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
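
    The state-space-to-transfer-function conversion that the TF program performs can be sketched for a single-input, single-output system using the relation H(s) = C(sI-A)^-1 B + D. This is a generic illustration (not the TF program's actual algorithm), and the two-state example model is hypothetical.

```python
import numpy as np

def ss_to_tf(A, B, C, D):
    """Convert a SISO state-space model (A, B, C, D) to transfer-function
    numerator/denominator coefficients in descending powers of s.

    Uses the identity det(sI - A + B@C) = det(sI - A) * (1 + C (sI-A)^-1 B),
    hence num(s) = poly(A - B@C) + (D - 1) * poly(A).
    """
    A, B, C = np.atleast_2d(A), np.atleast_2d(B), np.atleast_2d(C)
    den = np.poly(A)                       # characteristic polynomial of A
    num = np.poly(A - B @ C) + (D - 1.0) * den
    return num, den

# Hypothetical 2-state model: xdot = [[0,1],[-2,-3]] x + [0,1]^T u,  y = x1
num, den = ss_to_tf([[0.0, 1.0], [-2.0, -3.0]],
                    [[0.0], [1.0]],
                    [[1.0, 0.0]], 0.0)
# H(s) = 1 / (s^2 + 3s + 2)
```

    For higher-order models a library routine such as scipy.signal.ss2tf would normally be used instead of this hand-rolled identity.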

  6. Computations involving differential operators and their actions on functions

    NASA Technical Reports Server (NTRS)

    Crouch, Peter E.; Grossman, Robert; Larson, Richard

    1991-01-01

    The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups, and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.
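
    The idea of computing the action of differential operators on functions symbolically can be illustrated with a toy representation: polynomials as coefficient lists, with the derivative operator D and the multiplication-by-x operator X composed as functions. This is only a sketch of the concept, not the Grossman-Larson data structures.

```python
# Polynomials are coefficient lists [a0, a1, a2, ...] meaning a0 + a1*x + a2*x^2 + ...

def D(p):
    """Derivative operator d/dx acting on a polynomial."""
    return [k * p[k] for k in range(1, len(p))] or [0]

def X(p):
    """Multiplication-by-x operator acting on a polynomial."""
    return [0] + list(p)

def commutator(f, g):
    """[f, g] = f∘g - g∘f as an operator on polynomials."""
    def op(p):
        a, b = f(g(p)), g(f(p))
        n = max(len(a), len(b))
        a += [0] * (n - len(a))
        b += [0] * (n - len(b))
        return [x - y for x, y in zip(a, b)]
    return op

# Canonical commutation relation: [D, X] acts as the identity operator.
p = [1, 2, 3]                 # 1 + 2x + 3x^2
print(commutator(D, X)(p))    # -> [1, 2, 3]
```

    Rewriting rules like [D, X] = 1 are exactly the kind of operator identities that symbolic rewriting systems exploit to normalize expressions.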

  7. Expert-System Consultant To Operating Personnel

    NASA Technical Reports Server (NTRS)

    Heard, Astrid E.; Pinkowski, Patrick P.; Adler, Richard M.; Hosken, R. Bruce

    1992-01-01

    Artificial intelligence aids engineers and technicians in controlling and monitoring complicated systems. The Operations Analyst for Distributed Systems (OPERA) software is a developmental suite of expert-system computer programs that helps engineers and technicians operating from a number of computer workstations to control and monitor spacecraft during the prelaunch and launch phases of operation. OPERA is designed to serve as a consultant to operating engineers and technicians. It preprocesses incoming data, using expertise collected from a conglomerate of specialists in the design and operation of various parts of the system, and is driven by menus and mouse-activated commands. Modified versions of OPERA can be used in chemical-processing plants, factories, banks, and other enterprises with distributed computer systems that include computers monitoring or controlling other computers.

  8. Artificial intelligence program in a computer application supporting reactor operations

    SciTech Connect

    Stratton, R.C.; Town, G.G.

    1985-01-01

    Improving nuclear reactor power plant operability is an ever-present concern for the nuclear industry. The definition of plant operability involves a complex interaction of the ideas of reliability, safety, and efficiency. This paper presents observations concerning the issues involved and the benefits derived from the implementation of a computer application which combines traditional computer applications with artificial intelligence (AI) methodologies. A system, the Component Configuration Control System (CCCS), is being installed to support nuclear reactor operations at the Experimental Breeder Reactor II.

  9. V-TECS Guide for Computer Operator.

    ERIC Educational Resources Information Center

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

    This V-TECS (Vocational-Technical Consortium of States) Guide is an extension or continuation of the V-TECS catalog for the occupation of computer operator. The guide is designed to help South Carolina teachers to promote the art of learning while teaching subject matter. The guide addresses the three domains of learning: psychomotor, cognitive,…

  10. Payload operation television system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The TV system assembled is intended for laboratory experimentation which would develop operational techniques and lead to the design of space-borne TV equipment whose purpose would be to assist the astronaut-operator aboard a space station to load payload components. The TV system assembled for this program is a black and white, monocular, high performance system. The equipment consists principally of a good quality TV camera capable of high resolving power; a TV monitor; a sync generator for driving camera and monitor; and two pan/tilt units which are remotely controlled by the operator. One pan/tilt unit provides control of the pointing of the camera, the other similarly controls the position of a simulated payload.

  11. SNAP operating system reference manual

    SciTech Connect

    Sabuda, J.D.; Polito, J.; Walker, J.L.; Grant, F.H. III

    1982-03-01

    The SNAP Operating System (SOS) is a FORTRAN 77 program which provides assistance to the safeguards analyst who uses the Safeguards Automated Facility Evaluation (SAFE) and the Safeguards Network Analysis Procedure (SNAP) techniques. Features offered by SOS are a data base system for storing a library of SNAP applications, computer graphics representation of SNAP models, a computer graphics editor to develop and modify SNAP models, a SAFE-to-SNAP interface, automatic generation of SNAP input data, and a computer graphic post-processor for SNAP. The SOS Reference Manual provides detailed application information concerning SOS as well as a detailed discussion of all SOS components and their associated command input formats. SOS was developed for the US Nuclear Regulatory Commission's Office of Nuclear Regulatory Research and the US Naval Surface Weapons Center by Pritsker and Associates, Inc., under contract to Sandia National Laboratories.

  12. [Assessment of the exposure dose value displayed on operator console in a computed tomography system deciding exposure dose from positioning image].

    PubMed

    Sanai, Hiroyasu; Tomomitsu, Tatsushi; Ikenaga, Hiroyuki; Suemori, Shinji; Yanagimoto, Shinichi

    2012-01-01

    The aim of this study was to assess the exposure dose value (DLP) displayed on the operator console of a computed tomography system with automatic exposure control (CT-AEC), which decides the exposure dose from a positioning image. We measured the exposure dose with two CT systems and evaluated the error of the DLP value displayed on the operator console against the measured DLP value. The assessment was performed at three sites: head and neck, upper chest, and lower abdomen. In system A, the errors of the displayed value with CT-AEC differed significantly from the error without CT-AEC (4.09%) at two assessment sites (head and neck: -4.02%, upper chest: 6.60%), with no significant difference at the third site (lower abdomen: 0.06%). In system B, by contrast, the corresponding values were similar to the error without CT-AEC (8.38%), with no significant differences (head and neck: -1.12%, upper chest: -1.85%, lower abdomen: -0.64%). In conclusion, the displayed values with and without CT-AEC showed about the same error, although the possibility that the error depends on the CT model and the examination site was shown.
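
    The error figures quoted are conventional percent errors of the console-displayed DLP relative to the measured DLP; a minimal sketch of that calculation is below. The formula is the standard definition and the numbers are illustrative, not the study's data.

```python
def dlp_percent_error(displayed_dlp, measured_dlp):
    """Percent error of the console-displayed DLP relative to the measured DLP."""
    return (displayed_dlp - measured_dlp) / measured_dlp * 100.0

# Illustrative values (mGy*cm), not from the study:
err = dlp_percent_error(displayed_dlp=520.0, measured_dlp=500.0)
print(round(err, 2))  # -> 4.0
```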

  13. Computer model for refinery operations with emphasis on jet fuel production. Volume 3: Detailed systems and programming documentation

    NASA Technical Reports Server (NTRS)

    Dunbar, D. N.; Tunnah, B. G.

    1978-01-01

    The FORTRAN computing program predicts flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuels of varying end point and hydrogen content specifications. The program has a provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.

  14. A Prototype System for a Computer-Based Statewide Film Library Network: A Model for Operation. Number 3, Statewide Film Library Network: System Write-Up.

    ERIC Educational Resources Information Center

    Auricchio, Dominick

    An overview of materials scheduling, this write-up outlines system components, standardization, costs, limitations, and expansion capabilities of the New York Statewide Film Library Network. Interacting components include research staff; materials libraries; hardware; input/output (operation modes, input format conventions, transaction codes);…

  15. Computational Systems Biology

    SciTech Connect

    McDermott, Jason E.; Samudrala, Ram; Bumgarner, Roger E.; Montogomery, Kristina; Ireton, Renee

    2009-05-01

    Computational systems biology is the term that we use to describe computational methods to identify, infer, model, and store relationships between the molecules, pathways, and cells (“systems”) involved in a living organism. Based on this definition, the field of computational systems biology has been in existence for some time. However, the recent confluence of high-throughput methodology for biological data gathering, genome-scale sequencing, and computational processing power has driven a reinvention and expansion of this field. The expansions include not only modeling of small metabolic (Ishii 2004; Ekins 2006; Lafaye 2005) and signaling systems (Stevenson-Paulik 2006; Lafaye 2005) but also modeling of the relationships between biological components in very large systems, including whole cells and organisms (Ideker 2001; Pe'er 2001; Pilpel 2001; Ideker 2002; Kelley 2003; Shannon 2003; Ideker 2004; Schadt 2003; Schadt 2006; McDermott 2002; McDermott 2005). Generally these models provide a general overview of one or more aspects of these systems and leave the determination of details to experimentalists focused on smaller subsystems. The promise of such approaches is that they will elucidate patterns, relationships, and general features that are not evident from examining specific components or subsystems. These predictions are either interesting in and of themselves (for example, the identification of an evolutionary pattern), or are interesting and valuable to researchers working on a particular problem (for example, highlighting a previously unknown functional pathway). Two events have brought the field of computational systems biology to the forefront. One is the advent of high-throughput methods that have generated large amounts of information about particular systems in the form of genetic studies, gene expression analyses (both protein and

  16. Fast graph operations in quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Pérez-Delgado, Carlos A.; Fitzsimons, Joseph F.

    2016-03-01

    The connection between certain entangled states and graphs has been heavily studied in the context of measurement-based quantum computation as a tool for understanding entanglement. Here we show that this correspondence can be harnessed in the reverse direction to yield a graph data structure, which allows for more efficient manipulation and comparison of graphs than any possible classical structure. We introduce efficient algorithms for many transformation and comparison operations on graphs represented as graph states, and prove that no classical data structure can have similar performance for the full set of operations studied.

  17. Operational computer graphics in the flight dynamics environment

    NASA Technical Reports Server (NTRS)

    Jeletic, James F.

    1989-01-01

    Over the past five years, the Flight Dynamics Division of the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center has incorporated computer graphics technology into its operational environment. In an attempt to increase the effectiveness and productivity of the Division, computer graphics software systems have been developed that display spacecraft tracking and telemetry data in 2-D and 3-D graphic formats that are more comprehensible than the alphanumeric tables of the past. These systems vary in functionality from real-time mission monitoring systems, to mission planning utilities, to system development tools. Here, the capabilities and architecture of these systems are discussed.

  18. A Single Computer-Based System for Both Current Awareness and Retrospective Search: Operating Experience with ASSASSIN

    ERIC Educational Resources Information Center

    Clough, C. R.; Bramwell, K. M.

    1971-01-01

    The various applications of the Agricultural System for Storage and Subsequent Selection of Information (ASSASSIN) are outlined and the ways a single package may be used complete, or in part, or with modification are shown. (2 references) (Author/NH)

  19. Enabling opportunistic resources for CMS Computing Operations

    SciTech Connect

    Hufnagel, Dick

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
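
    For context, an HTCondor batch job is described by a submit file; the minimal sketch below is illustrative only. The executable name and file paths are placeholders, and the glideinWMS routing attributes that CMS layers on top are not shown.

```
# Minimal HTCondor submit description file (illustrative; paths are placeholders)
universe   = vanilla
executable = run_cmssw.sh
arguments  = $(Process)
output     = job_$(Process).out
error      = job_$(Process).err
log        = jobs.log
queue 10
```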

  20. Enabling opportunistic resources for CMS Computing Operations

    NASA Astrophysics Data System (ADS)

    Hufnagel, D.; CMS Collaboration

    2015-12-01

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  1. Brain computer interface for operating a robot

    NASA Astrophysics Data System (ADS)

    Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed

    2013-10-01

    A Brain-Computer Interface (BCI) is a hardware/software-based system that translates the electroencephalogram (EEG) signals produced by brain activity into commands to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals of trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions that are instructed to it in real time. We have used cognitive states like Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
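
    The sensitivity and specificity figures cited are standard confusion-matrix quantities; the small sketch below shows how they are computed for a command detector. The counts are made up for illustration and are not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts for a hypothetical 'Push' command detector:
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=95, fp=5)
print(sens, spec)  # -> 0.92 0.95
```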

  2. Computer-aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support the Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. The system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  3. Distributed computing systems programme

    SciTech Connect

    Duce, D.

    1984-01-01

    Publication of this volume coincides with the completion of the U.K. Science and Engineering Research Council's coordinated programme of research in Distributed Computing Systems (DCS) which ran from 1977 to 1984. The volume is based on presentations made at the programme's final conference. The first chapter explains the origins and history of DCS and gives an overview of the programme and its achievements. The remaining sixteen chapters review particular research themes (including imperative and declarative languages, and performance modelling), and describe particular research projects in technical areas including local area networks, design, development and analysis of concurrent systems, parallel algorithm design, functional programming and non-von Neumann computer architectures.

  4. MODELS-3 INSTALLATION PROCEDURES FOR A PERSONAL COMPUTER WITH A NT OPERATING SYSTEM (MODELS-3 VERSION 4.1)

    EPA Science Inventory

    Models-3 is a flexible system designed to simplify the development and use of air quality models and other environmental decision support tools. It is designed for applications ranging from regulatory and policy analysis to understanding the complex interactions of atmospheric...

  5. Unix becoming healthcare's standard operating system.

    PubMed

    Gardner, E

    1991-02-11

    An unfamiliar buzzword is making its way into healthcare executives' vocabulary, as well as their computer systems. Unix is being touted by many industry observers as the most likely candidate to be a standard operating system for minicomputers, mainframes and computer networks.

  6. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video-capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  7. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  8. Computer Systems Technician.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This document contains 17 units to consider for use in a tech prep competency profile for the occupation of computer systems technician. All the units listed will not necessarily apply to every situation or tech prep consortium, nor will all the competencies within each unit be appropriate. Several units appear within each specific occupation and…

  9. Integrated support systems for electric utility operations

    SciTech Connect

    Hong, H.W.; Imparato, C.F.; Becker, D.L.; Malinowski, J.H. )

    1992-01-01

    Power system dispatch, the real-time monitoring and coordination of transmission and generation facilities, is the focal point of power system operations. However, dispatch is just one of the many duties of the typical power system operations department. Many computer-based tools and systems are used in support of these duties. Energy management systems (EMS), the centralized, mainframe-, or mini-computer-based systems that support dispatch, have been widely publicized, but few of the other support systems have been given much notice. This article provides an overview of these support tools and systems, frames the major issues faced in systems integration, and describes the path taken to integrate EMS, workstations, desktop computers, networks and applications. Network architecture enables the distribution of real-time operations data throughout the company, from EMS to power plants to district offices, on an unprecedented scale.

  10. Apu/hydraulic/actuator Subsystem Computer Simulation. Space Shuttle Engineering and Operation Support, Engineering Systems Analysis. [for the space shuttle

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Major developments are examined which have taken place to date in the analysis of the power and energy demands on the APU/Hydraulic/Actuator Subsystem for the space shuttle during the entry-to-touchdown (not including rollout) flight regime. These developments are given in the form of two subroutines which were written for use with the Space Shuttle Functional Simulator. The first subroutine calculates the power and energy demand on each of the three hydraulic systems due to control surface (inboard/outboard elevons, rudder, speedbrake, and body flap) activity. The second subroutine incorporates the R. I. priority rate limiting logic which limits control surface deflection rates as a function of the number of failed hydraulic systems. Typical results of this analysis are included, and listings of the subroutines are presented in appendices.
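
    The priority rate-limiting idea, capping a commanded control-surface deflection rate according to how many hydraulic systems have failed, can be sketched as below. The limit values are invented for illustration and are not the Rockwell flight values.

```python
def limited_deflection_rate(commanded_rate, failed_systems,
                            max_rates=(20.0, 10.0, 5.0)):
    """Clamp a commanded control-surface rate (deg/s) to a limit that
    shrinks with the number of failed hydraulic systems (0, 1, or 2).
    The max_rates values are hypothetical, not the flight values."""
    limit = max_rates[min(failed_systems, len(max_rates) - 1)]
    return max(-limit, min(limit, commanded_rate))

print(limited_deflection_rate(15.0, failed_systems=0))   # -> 15.0
print(limited_deflection_rate(15.0, failed_systems=1))   # -> 10.0
print(limited_deflection_rate(-15.0, failed_systems=2))  # -> -5.0
```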

  11. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes: the external characteristics such as color, shape, size, and surface texture. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.
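
    Extracting simple external quality attributes such as mean color and projected size from an image amounts to basic array operations; the sketch below uses a tiny synthetic image rather than real food data, and the thresholding rule is a simplifying assumption.

```python
import numpy as np

def color_and_size(image, threshold=0):
    """Mean RGB color over object pixels and object area (pixel count).
    'Object' pixels are those whose summed channel intensity exceeds threshold."""
    mask = image.sum(axis=2) > threshold
    mean_color = image[mask].mean(axis=0)
    return mean_color, int(mask.sum())

# Synthetic 4x4 image: a 2x2 red 'object' on a black background.
img = np.zeros((4, 4, 3), dtype=float)
img[1:3, 1:3, 0] = 200.0  # red channel only
mean_color, area = color_and_size(img)
print(area)  # -> 4
```

    A production system would segment the object with calibrated lighting and proper thresholding (e.g. Otsu's method) rather than a fixed intensity cutoff.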

  12. The dangers resulting from inaccurate computer-based operative records.

    PubMed

    Knight, L; Yardley, M; Jones, A

    1991-01-01

    The accuracy of a computer-based recording system of operative procedures was audited at a major district general hospital. The system is supposed to provide accurate records of theatre activity, to allow for improved nursing resource allocation and provide surgeons with a basic record of their operations. Mistakes were present in the details of 27% of the cases entered. Such inaccuracies highlight a major danger to surgeons with regard to their accountability for operations attributed to them. Mistakes can only cause further problems with regard to audit and future resource allocation.

  13. Computer assisted tendon tensioning operations on the Auger TLP

    SciTech Connect

    Webb, C.M. III

    1995-05-01

    One of the most critical phases of the tendon installation operation is the tension adjustment of the tendons. During these phases of the operation, length adjustments must be performed that result in correctly distributed tension loads, at the design platform draft, and without net platform inclination. Instrumentation integrated with an on-line computer advisory system accelerates the operation, thereby reducing spread time and risk associated with prolonged exposure. The paper includes a brief discussion of the instrumentation and data gathering and processing system on Auger, the advisory functions that use these data, and the step-by-step procedure to achieve an installed configuration consistent with the design premise.

  14. Computer Algebra System

    SciTech Connect

    1992-05-04

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, DEC VAX11, and SUN (OPUS) versions under UNIX, and for the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN (KCL), Convex, and IBM PC versions under UNIX, and for the Data General version under AOS/VS.
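
    MACSYMA itself is a LISP system; as a loose illustration of what "symbolic manipulation" means, the following Python sketch implements two of the listed operations (differentiation and integration) for the special case of polynomials represented as coefficient lists. It is a toy, not MACSYMA's algorithm.

```python
# Polynomials as coefficient lists [a0, a1, a2, ...] meaning a0 + a1*x + a2*x^2 + ...
# MACSYMA handled far richer expression classes; this only shows the flavor
# of rule-based differentiation/integration on one closed form.

def differentiate(p):
    """d/dx of a0 + a1*x + a2*x^2 + ... -> a1 + 2*a2*x + ..."""
    return [k * a for k, a in enumerate(p)][1:] or [0]

def integrate(p):
    """Antiderivative with zero constant of integration."""
    return [0] + [a / (k + 1) for k, a in enumerate(p)]

def evaluate(p, x):
    """Horner evaluation of the polynomial at x."""
    acc = 0.0
    for a in reversed(p):
        acc = acc * x + a
    return acc

# p(x) = 1 + 3x + 2x^2
p = [1, 3, 2]
dp = differentiate(p)   # coefficients of 3 + 4x
P = integrate(p)        # coefficients of x + 1.5x^2 + (2/3)x^3
```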

  16. Pacing a data transfer operation between compute nodes on a parallel computer

    DOEpatents

    Blocksome, Michael A.

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
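
    The claimed sequence (transfer a chunk, issue a pacing request, await the response, transfer the next chunk) can be sketched in plain Python. The class and function names here are illustrative; a real implementation uses remote-get DMA descriptors on the interconnect rather than method calls.

```python
# Illustrative pacing loop: the origin sends one chunk at a time and only
# proceeds after the target's DMA engine answers the pacing request.
from collections import deque

class TargetDMAEngine:
    """Stand-in for the target compute node's DMA engine (our name)."""
    def __init__(self):
        self.received = []

    def deliver(self, chunk):
        self.received.append(chunk)

    def pacing_response(self):
        # A real engine answers once earlier descriptors have drained;
        # here the answer is immediate.
        return True

def paced_send(message, chunk_size, target):
    chunks = deque(message[i:i + chunk_size]
                   for i in range(0, len(message), chunk_size))
    while chunks:
        target.deliver(chunks.popleft())              # transfer a chunk
        if chunks and not target.pacing_response():   # pacing request/response
            raise RuntimeError("no pacing response; transfer stalled")
    return b"".join(target.received)
```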

  17. Computational systems chemical biology.

    PubMed

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole-body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  18. SEASAT economic assessment. Volume 10: The SATIL 2 program (a program for the evaluation of the costs of an operational SEASAT system as a function of operational requirements and reliability). [computer programs for economic analysis and systems analysis of SEASAT satellite systems]

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
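
    As a hedged sketch of the probabilistic core of such a program: if each launch succeeds independently with reliability r, the number of attempts needed to place one spacecraft follows a geometric distribution. SATIL 2's actual model (sensor mixes, recurring cost, present value) is far richer; this only illustrates the "number of launch attempts" distribution.

```python
# Geometric model for launch attempts, assuming independent launches with
# per-launch success probability r (an assumption for illustration).

def attempts_distribution(r, max_attempts=10):
    """P(first success on attempt k) = (1-r)**(k-1) * r, for k = 1..max_attempts."""
    return {k: (1 - r) ** (k - 1) * r for k in range(1, max_attempts + 1)}

def expected_attempts(r):
    """Mean of the geometric distribution."""
    return 1.0 / r

dist = attempts_distribution(0.9)   # reliable launcher: mass concentrated at k = 1
```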

  19. Cloud Computing for Mission Design and Operations

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios

    2012-01-01

    The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired by the cloud computing model: associative, elastic, semantical, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.

  20. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
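
    The frequency-content features mentioned above can be sketched as band energies over a DFT of the acoustic time signal. The band count and edges here are our assumptions, not the authors' exact feature set, and the feed-forward network classifier itself is omitted.

```python
# Band-energy features from a discrete Fourier transform of a time signal,
# the kind of frequency-domain features that could feed a classifier.
import cmath, math

def dft_magnitudes(signal):
    """Magnitudes of the first n/2 DFT bins (naive O(n^2) DFT for clarity)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def band_energies(signal, n_bands=4):
    """Sum squared magnitudes in n_bands equal-width frequency bands."""
    mags = dft_magnitudes(signal)
    size = max(1, len(mags) // n_bands)
    return [sum(m * m for m in mags[i:i + size])
            for i in range(0, size * n_bands, size)]

# A pure tone concentrates its energy in one band.
tone = [math.sin(2 * math.pi * 2 * t / 16) for t in range(16)]
features = band_energies(tone, 4)
```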

  1. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, by use of a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
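
    The strong-versus-weak link behavior can be illustrated with Python's weakref, assuming a reference-counted runtime: a weak link does not keep its target alive, so breaking the last strong link lets the object be collected. The patent's protocol (relationship adjectives, undo/redo) is richer than this sketch.

```python
# Strong link: an ordinary attribute reference keeps the target alive.
# Weak link: a weakref.ref does not, so the target can be collected
# as soon as the last strong link is broken.
import gc, weakref

class Node:
    pass

owner = Node()
child = Node()
owner.child = child                # strong link: keeps child alive
observer = weakref.ref(child)      # weak link: does not keep child alive

del child
assert observer() is not None      # still reachable through owner

del owner.child                    # break the last strong link
gc.collect()
assert observer() is None          # child was garbage-collected
```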

  2. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
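
    The upstream reduction described above can be sketched in software: each node combines its own value with its children's partial results, so the root ends up holding the global result. The hardware performs this in the tree's router devices; this recursive version is only illustrative.

```python
# Reduction up a tree: combine each node's value with its children's
# partial results using an associative operation (sum by default).

def reduce_tree(node, children, values, op=lambda a, b: a + b):
    """children: dict node -> list of child nodes; values: dict node -> value."""
    acc = values[node]
    for c in children.get(node, []):
        acc = op(acc, reduce_tree(c, children, values, op))
    return acc

# A 7-node binary tree rooted at node 0.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
values = {n: n + 1 for n in range(7)}        # values 1..7
total = reduce_tree(0, children, values)     # global sum at the root
```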

  3. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  4. Potato operation: computer vision for agricultural robotics

    NASA Astrophysics Data System (ADS)

    Pun, Thierry; Lefebvre, Marc; Gil, Sylvia; Brunet, Denis; Dessimoz, Jean-Daniel; Guegerli, Paul

    1992-03-01

    Each year at harvest time, millions of seed potatoes are checked for the presence of viruses by means of an Elisa test. The Potato Operation aims to automate the potato manipulation and pulp-sampling procedure, starting from bunches of harvested potatoes and ending with the deposit of potato pulp into Elisa containers. Automating these manipulations addresses several issues linking robotics and computer vision. The paper reports on the current status of this project. It first summarizes the robotic aspects, which consist of locating a potato in a bunch, grasping it, positioning it in the camera's field of view, pumping the pulp sample, and depositing it into a container. The computer vision aspects are then detailed. They concern locating particular potatoes in a bunch and finding the position of the best germ where the drill has to sample the pulp. The emphasis is put on the germ-location problem. A general overview of the approach is given, which combines the processing of both frontal and silhouette views of the potato together with movements of the robot arm (active vision). Frontal and silhouette analysis algorithms are then presented. Results are shown that confirm the feasibility of the approach.

  5. Value of Faster Computation for Power Grid Operation

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu; Elizondo, Marcelo A.

    2012-09-30

    As a result of the grid evolution meeting the information revolution, the power grid is becoming far more complex than it used to be. How to feed data in, perform analysis, and extract information in a real-time manner is a fundamental challenge in today’s power grid operation, not to mention the significantly increased complexity in the smart grid environment. Therefore, high performance computing (HPC) becomes one of the advanced technologies used to meet the requirement of real-time operation. This paper presents benefit case studies to show the value of fast computation for operation. Two fundamental operation functions, state estimation (SE) and contingency analysis (CA), are used as examples. In contrast with today’s tools, fast SE can estimate system status in a few seconds—comparable to measurement cycles. Fast CA can solve more contingencies in a shorter period, reducing the possibility of missing critical contingencies. The benefit case study results clearly show the value of faster computation for increasing the reliability and efficiency of power system operation.

  6. Fossil-fuel power plants: Computer systems for power plant control, maintenance, and operation. October 1976-December 1989 (A Bibliography from the COMPENDEX data base). Report for October 1976-December 1989

    SciTech Connect

    Not Available

    1990-02-01

    This bibliography contains citations concerning fossil-fuel power plant computer systems. Minicomputer and microcomputer systems used for monitoring, process control, performance calculations, alarming, and administrative applications are discussed. Topics emphasize power plant control, maintenance and operation. (Contains 240 citations fully indexed and including a title list.)

  7. MOP /Matrix Operation Programs system/

    NASA Technical Reports Server (NTRS)

    Muller, P. M.

    1968-01-01

    MOP /Matrix Operation Programs/ system consists of a set of FORTRAN 4 subroutines which are related through a small common allocation. The system accomplishes all matrix algebra operations plus related input-output and housekeeping details.

  8. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew; Falkowski, Paul

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  9. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  10. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
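
    The dataflow firing rule underlying such models can be sketched directly: a node is enabled when tokens are present on all of its input arcs, and firing consumes those tokens and produces a token on the output arc. The graph below is illustrative, and each node fires at most once for simplicity.

```python
# Minimal dataflow interpreter: repeatedly fire any node whose input
# arcs all carry tokens, until no node is enabled.

def run_dataflow(nodes, tokens):
    """nodes: list of (name, input_arcs, output_arc, fn).
    tokens: dict arc -> value. Returns final tokens and firing order."""
    fired = []
    progress = True
    while progress:
        progress = False
        for name, inputs, output, fn in nodes:
            if name not in fired and all(i in tokens for i in inputs):
                args = [tokens.pop(i) for i in inputs]   # consume input tokens
                tokens[output] = fn(*args)               # produce output token
                fired.append(name)
                progress = True
    return tokens, fired

nodes = [
    ("add", ["a", "b"], "s", lambda a, b: a + b),
    ("mul", ["s", "c"], "p", lambda s, c: s * c),
]
# "mul" cannot fire before "add" produces a token on arc "s":
tokens, order = run_dataflow(nodes, {"a": 2, "b": 3, "c": 4})
```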

  11. Computational Aeroacoustic Analysis System Development

    NASA Technical Reports Server (NTRS)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed

  12. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  13. Quantitative computer simulations of extraterrestrial processing operations

    NASA Technical Reports Server (NTRS)

    Vincent, T. L.; Nikravesh, P. E.

    1989-01-01

    The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.

  14. SIMON Host Computer System requirements and recommendations

    SciTech Connect

    Harpring, L.J.

    1990-11-29

    Development Service Order #90025 requested recommendations for computer hardware, operating systems, and software development utilities based on current and future SIMON software requirements. Since SIMON's main objective is to be dispatched on missions by an operator with little computer experience, "user friendly" hardware and software interfaces are required. Other design criteria include: a fluid software development environment, and hardware and operating systems with minimal maintenance requirements. Also, the hardware should be expandable; extra processor boards should be easily integrated into the existing system. And finally, the use of well-established standards for hardware and software should be implemented where practical.

  16. Automating ATLAS Computing Operations using the Site Status Board

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Borrego Iglesias, C.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G. A.; Wright, M.

    2012-12-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses the SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The paper will describe how the SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in the SSB. It will demonstrate the positive impact of the use of the SSB on the overall performance of ATLAS computing activities and will overview future plans.
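
    The automatic-exclusion idea can be sketched as a usability predicate over a site's metric history. The metric (transfer success ratio), window, and threshold below are assumptions for illustration, not ATLAS's actual policy.

```python
# Decide whether a site stays in computing activities based on its recent
# success ratio; window size and threshold are illustrative assumptions.

def site_usable(history, window=6, threshold=0.8):
    """history: list of (succeeded, attempted) samples, oldest first.
    Returns False when the recent success ratio falls below the threshold."""
    recent = history[-window:]
    attempted = sum(a for _, a in recent)
    if attempted == 0:
        return True                 # no data: do not exclude the site
    succeeded = sum(s for s, _ in recent)
    return succeeded / attempted >= threshold
```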

  17. A multiprocessor operating system simulator

    SciTech Connect

    Johnston, G.M.; Campbell, R.H. (Dept. of Computer Science)

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the Choices family of operating systems for loosely and tightly coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.
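
    The co-routine style of the task package can be approximated with Python generators: a task yields to relinquish the simulated processor, and a round-robin scheduler resumes it. The names are ours; the original is a C++ class library.

```python
# Round-robin scheduler over generator-based co-routine tasks.
from collections import deque

class Scheduler:
    def __init__(self):
        self.ready = deque()
        self.trace = []          # (task name, step) in execution order

    def spawn(self, name, steps):
        def task():
            for i in range(steps):
                self.trace.append((name, i))
                yield            # relinquish the (simulated) processor
        self.ready.append(task())

    def run(self):
        while self.ready:
            t = self.ready.popleft()
            try:
                next(t)
                self.ready.append(t)   # round-robin: back of the queue
            except StopIteration:
                pass                   # task finished

sched = Scheduler()
sched.spawn("A", 2)
sched.spawn("B", 2)
sched.run()
# tasks interleave: A and B alternate steps
```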

  18. A Multiprocessor Operating System Simulator

    NASA Technical Reports Server (NTRS)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  19. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  20. SPECTR System Operational Test Report

    SciTech Connect

    W.H. Landman Jr.

    2011-08-01

    This report summarizes the installation of the Small Pressure Cycling Test Rig (SPECTR) and documents the system operational testing performed to demonstrate that it meets the requirements for operations. The system operational testing involved operation of the furnace system to the design conditions and demonstration of the test article gas supply system using a simulated test article. The furnace and test article systems were demonstrated to meet the design requirements for the Next Generation Nuclear Plant. Therefore, the system is deemed acceptable and is ready for actual test article testing.

  1. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
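
    The exception-generator idea can be sketched at the application level: invoke an interface with deliberately invalid arguments and record how each call terminates. The thesis probed real OS system calls; here Python's built-in open stands in as the target interface.

```python
# Tiny robustness probe: call a function with invalid argument sets and
# record the outcome (normal return, or the name of the raised exception).

def probe(fn, arg_sets):
    """Return {args: outcome}, outcome being 'ok' or the exception class name."""
    results = {}
    for args in arg_sets:
        try:
            fn(*args)
            results[args] = "ok"
        except Exception as e:          # record the failure, keep the harness alive
            results[args] = type(e).__name__
    return results

outcomes = probe(open, [
    ("/nonexistent/path/x", "r"),   # bad path
    ("some.txt", "bad-mode"),       # invalid mode string, rejected before I/O
])
```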

  2. Computer Jet-Engine-Monitoring System

    NASA Technical Reports Server (NTRS)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  3. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    ERIC Educational Resources Information Center

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  4. Transportation System Concept of Operations

    SciTech Connect

    N. Slater-Thompson

    2006-08-16

    The Nuclear Waste Policy Act of 1982 (NWPA), as amended, authorized the DOE to develop and manage a Federal system for the disposal of SNF and HLW. OCRWM was created to manage acceptance and disposal of SNF and HLW in a manner that protects public health, safety, and the environment; enhances national and energy security; and merits public confidence. This responsibility includes managing the transportation of SNF and HLW from origin sites to the Repository for disposal. The Transportation System Concept of Operations is the core high-level OCRWM document written to describe the Transportation System integrated design and present the vision, mission, and goals for Transportation System operations. By defining the functions, processes, and critical interfaces of this system early in the system development phase, programmatic risks are minimized, system costs are contained, and system operations are better managed, safer, and more secure. This document also facilitates discussions and understanding among parties responsible for the design, development, and operation of the Transportation System. Such understanding is important for the timely development of system requirements and identification of system interfaces. Information provided in the Transportation System Concept of Operations includes: the functions and key components of the Transportation System; system component interactions; flows of information within the system; the general operating sequences; and the internal and external factors affecting transportation operations. The Transportation System Concept of Operations reflects OCRWM's overall waste management system policies and mission objectives, and as such provides a description of the preferred state of system operation. 
The description of general Transportation System operating functions in the Transportation System Concept of Operations is the first step in the OCRWM systems engineering process, establishing the starting point for the lower level

  5. Chaining direct memory access data transfer operations for compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2010-09-28

    Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node, in an origin injection FIFO buffer for the origin DMA engine, an RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
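
    A rough way to picture the chained descriptors (class and field names are invented for illustration; the real mechanism lives in DMA hardware and messaging firmware) is a linked list that a DMA engine walks, performing one transfer per RGET without per-step CPU involvement:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferDesc:
    """A plain DMA data-transfer operation (payload only, for the sketch)."""
    data: bytes

@dataclass
class RGETDesc:
    """A 'remote get' descriptor carrying one transfer plus the next link."""
    transfer: TransferDesc
    next_rget: Optional["RGETDesc"] = None

def run_chain(first: RGETDesc) -> bytes:
    """Walk the RGET chain as a DMA engine would: each RGET triggers one
    data transfer and hands off the next RGET in the chain."""
    received = b""
    rget = first
    while rget is not None:
        received += rget.transfer.data
        rget = rget.next_rget
    return received

chain = RGETDesc(TransferDesc(b"part1-"),
                 RGETDesc(TransferDesc(b"part2")))
```

    Running `run_chain(chain)` delivers both parts in order, mirroring how the second RGET descriptor extends the sequence of DMA operations.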

  6. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC (Goddard Mission Services Evolution Center) architecture, and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.
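
    The Criteria Action Table idea can be sketched as a rule table mapping telemetry criteria to operator actions; the field names, thresholds, and actions below are invented for illustration, since the abstract does not describe CAT's actual rule format.

```python
# Each row: (rule name, criterion over a telemetry dict, action to take).
criteria_action_table = [
    ("battery_low", lambda t: t["battery_v"] < 24.0, "switch to safe mode"),
    ("temp_high",   lambda t: t["temp_c"] > 60.0,    "page on-call engineer"),
]

def evaluate(telemetry):
    """Return the actions whose criteria the current telemetry satisfies."""
    return [action for name, criterion, action in criteria_action_table
            if criterion(telemetry)]

actions = evaluate({"battery_v": 23.1, "temp_c": 41.0})
```

    In an autonomic loop, such a table would be re-evaluated on every telemetry update and the resulting actions dispatched over the message bus.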

  7. Trends; Integrating computer systems

    SciTech Connect

    de Buyl, M.

    1991-11-04

    This paper reports that computers are invaluable tools in assisting E&P managers with their information management and analysis tasks. Oil companies and software houses are striving to adapt their products and work practices to capitalize on the rapid evolution in computer hardware performance and affordability. Ironically, an investment in computers aimed at reducing risk and cost also contains an element of added risk and cost. Hundreds of millions of dollars have been spent by the oil industry in purchasing hardware and software and in developing software. Unfortunately, these investments may not have completely fulfilled the industry's expectations. The lower return on computing investments is due to: unmet expectations in productivity gains; premature computer hardware and software obsolescence; inefficient data transfer between software applications; and the hidden costs of computer support personnel and vendors.

  8. Operating System Abstraction Layer (OSAL)

    NASA Technical Reports Server (NTRS)

    Yanchik, Nicholas J.

    2007-01-01

    This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small layer of software that allows programs to run on many different operating systems and hardware platforms; it runs independently of the underlying OS and hardware and is self-contained. The OSAL removes dependencies on any one operating system and promotes portable, reusable flight software, allowing core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality, surveys the various OSAL releases, and describes the specifications.
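
    A toy sketch of the abstraction-layer idea (not NASA's actual OSAL API, which is a C interface): flight code targets one small task interface, and the layer maps it onto whatever the host OS provides, a thread here, a native RTOS task on a real target.

```python
import platform
import threading

class OsalTask:
    """Toy abstraction layer: application code calls one task API, and
    the layer maps it to the host's facility (a Python thread here; on
    an RTOS target this wrapper would call the native task service)."""
    def __init__(self, fn):
        self._thread = threading.Thread(target=fn)
    def start(self):
        self._thread.start()
    def join(self):
        self._thread.join()

# The calling code never names the underlying OS or threading primitive.
results = []
task = OsalTask(lambda: results.append(platform.system() or "unknown"))
task.start()
task.join()
```

    Porting then means rewriting only the thin wrapper, not the flight software that calls it.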

  9. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and for monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia, and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  10. Multiple operating system rotation environment moving target defense

    DOEpatents

    Evans, Nathaniel; Thompson, Michael

    2016-03-22

    Systems and methods for providing a multiple operating system rotation environment ("MORE") moving target defense ("MTD") computing system are described. The MORE-MTD system provides enhanced computer system security through a rotation of multiple operating systems. The MORE-MTD system increases attacker uncertainty, increases the cost of attacking the system, reduces the likelihood of an attacker locating a vulnerability, and reduces the exposure time of any located vulnerability. The MORE-MTD environment is effectuated by rotation of the operating systems at a given interval. The rotating operating systems create a consistently changing attack surface for remote attackers.
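
    The interval-based rotation can be sketched as a schedule generator; the OS image names and timing values are illustrative assumptions, not details from the patent.

```python
from itertools import cycle

def rotation_schedule(os_images, interval_s, total_s):
    """Return (start_time, os_image) pairs: which OS is live in each
    rotation interval, cycling through the pool so the attack surface
    changes at every interval boundary."""
    schedule, t, images = [], 0, cycle(os_images)
    while t < total_s:
        schedule.append((t, next(images)))
        t += interval_s
    return schedule

sched = rotation_schedule(["debian", "freebsd", "fedora"], 60, 240)
```

    A vulnerability found in one image is exposed only during that image's slots, which is the exposure-time reduction the abstract describes.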

  11. Automated Computer Access Request System

    NASA Technical Reports Server (NTRS)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).
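
    The rules-based routing can be sketched as a small function that derives an approval chain from user attributes; the attribute names, values, and chain ordering here are assumptions for illustration, not AutoCAR's actual rules.

```python
def route_request(user):
    """Toy rules-based routing: build an approval chain from user
    attributes, in the spirit of AutoCAR's nationality/affiliation
    rules (illustrative rules only)."""
    chain = ["primary_approver"]
    if user.get("nationality") != "US":
        chain.insert(0, "export_control")      # export review comes first
    if user.get("affiliation") == "contractor":
        chain.append("sponsor")                # contractor needs a sponsor
    return chain

chain = route_request({"nationality": "FR", "affiliation": "contractor"})
```

    A workflow engine would walk such a chain in order, falling back to a backup approver at any step if the primary does not respond.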

  12. The SILEX experiment system operations

    NASA Astrophysics Data System (ADS)

    Demelenne, B.

    1994-11-01

    The European Space Agency is going to conduct an inter-orbit link experiment which will connect a low Earth orbiting satellite and a geostationary satellite via optical terminals. This experiment has been called SILEX (Semiconductor Inter-satellite Link EXperiment). Two payloads will be built. One, called PASTEL (PASsager de TELecommunication), will be embarked on the French Earth observation satellite SPOT4. The future European experimental data relay satellite ARTEMIS (Advanced Relay and TEchnology MISsion) will carry the OPALE terminal (Optical PAyload Experiment). The principal characteristic of the mission is a 50 megabit-per-second flow of data transmitted via the optical satellite link. The relay satellite will route the data via its feeder link, thus permitting real-time reception in the European region of images taken by the observation satellite. The PASTEL terminal has been designed to cover up to 9 communication sessions per day, with an average of 5. The number of daily contact opportunities with the low Earth orbiting satellite will be increased, and the duration will be much longer than the traditional passes over a ground station. The terminals have an autonomy of 24 hours with respect to ground control. Each terminal will contain its own orbit model and that of its counter-terminal for orbit prediction and for precise computation of the pointing direction. Due to the very narrow field of view of the communication laser beam, the orbit propagation calculation needs to be done with very high accuracy. The European Space Agency is responsible for the operation of both terminals. A PASTEL Mission Control System (PMCS) is being developed to control the PASTEL terminal on board SPOT4. The PMCS will interface with the SPOT4 Control Centre for the execution of PASTEL operations. The PMCS will also interface with the ARTEMIS Mission Control System for the planning and coordination of the operations.
It is the first time that laser technology will be used to support

  13. The SILEX experiment system operations

    NASA Technical Reports Server (NTRS)

    Demelenne, B.

    1994-01-01

    The European Space Agency is going to conduct an inter-orbit link experiment which will connect a low Earth orbiting satellite and a geostationary satellite via optical terminals. This experiment has been called SILEX (Semiconductor Inter-satellite Link EXperiment). Two payloads will be built. One, called PASTEL (PASsager de TELecommunication), will be embarked on the French Earth observation satellite SPOT4. The future European experimental data relay satellite ARTEMIS (Advanced Relay and TEchnology MISsion) will carry the OPALE terminal (Optical PAyload Experiment). The principal characteristic of the mission is a 50 megabit-per-second flow of data transmitted via the optical satellite link. The relay satellite will route the data via its feeder link, thus permitting real-time reception in the European region of images taken by the observation satellite. The PASTEL terminal has been designed to cover up to 9 communication sessions per day, with an average of 5. The number of daily contact opportunities with the low Earth orbiting satellite will be increased, and the duration will be much longer than the traditional passes over a ground station. The terminals have an autonomy of 24 hours with respect to ground control. Each terminal will contain its own orbit model and that of its counter-terminal for orbit prediction and for precise computation of the pointing direction. Due to the very narrow field of view of the communication laser beam, the orbit propagation calculation needs to be done with very high accuracy. The European Space Agency is responsible for the operation of both terminals. A PASTEL Mission Control System (PMCS) is being developed to control the PASTEL terminal on board SPOT4. The PMCS will interface with the SPOT4 Control Centre for the execution of PASTEL operations. The PMCS will also interface with the ARTEMIS Mission Control System for the planning and coordination of the operations.
It is the first time that laser technology will be used to support

  14. Optical system for proximity operations in aerospace

    NASA Astrophysics Data System (ADS)

    Zhang, Yong-Liang; Liu, Xiao-Chun; Lu, Huan-Zhang

    2008-12-01

    Satellite servicing offers the potential for extending the life of satellites and reducing launch and operating costs. Proximity operations are necessary for satellite servicing. As the primary measurement system, an optical system can provide relative-navigation information in the near field. The paper has two main contributions. First, we summarize the use of optical systems for guidance and navigation in aerospace proximity-operations missions. Their characteristics vary from manned missions, which are performed by astronauts on orbit, through semi-autonomous missions, wherein human operators on the ground segment issue high-level directives and sensor-guided systems on the space segment guide the execution, to fully autonomous missions, which are executed using unmanned space robotic systems. It is apparent that future space operations will be more autonomous. Second, we present a concept and framework for a vision system for satellite proximity operations that is semi-autonomous and can handle uncooperative satellites. The vision system uses visible and infrared sensors synchronously to acquire images, which solves the data-integrity problem that ambient illumination and direct sunlight introduce for the visible sensor. The vision system uses natural features on the satellite surfaces instead of artificial markers for its operation, computes the relative motion and structure of the target, and tracks features in image sequences. Selected algorithms of the system have been characterized in a ground environment; they are undergoing systematic sets of adaptation for space.

  15. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  16. New computing systems and their impact on computational mechanics

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.

  17. Evaluating operating system vulnerability to memory errors.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
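
    Soft-error vulnerability studies of this kind commonly rely on bit-flip injection. A minimal sketch (not the authors' actual methodology) flips one bit in a buffer standing in for kernel state and checks whether a simple checksum detects the corruption:

```python
import random
import zlib

def inject_bit_flip(buf: bytearray, rng) -> int:
    """Flip one randomly chosen bit to emulate a soft memory error;
    return the flipped bit's index."""
    bit = rng.randrange(len(buf) * 8)
    buf[bit // 8] ^= 1 << (bit % 8)
    return bit

# A stand-in for some kernel data structure's memory image.
state = bytearray(b"kernel-structure-contents")
clean_crc = zlib.crc32(state)
inject_bit_flip(state, random.Random(42))
detected = zlib.crc32(state) != clean_crc
```

    Since CRC32 detects every single-bit error, the flip is always caught here; the hard part the paper addresses is not detection but reconstructing the damaged state afterward.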

  18. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.

  19. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work was done on a project to design an automatic system of computer program documentation aids and to determine which existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.

  20. A COMPUTERIZED OPERATOR SUPPORT SYSTEM PROTOTYPE

    SciTech Connect

    Thomas A. Ulrich; Roger Lew; Ronald L. Boring; Ken Thomas

    2015-03-01

    A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. A prototype COSS was developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based on four underlying elements consisting of a digital alarm system, computer-based procedures, piping and instrumentation diagram system representations, and a recommender module for mitigation actions. The initial version of the prototype is now operational at the Idaho National Laboratory using the Human System Simulation Laboratory.

  1. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
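
    The two-phase scheme can be modeled arithmetically (a logical sketch of the patent's data flow, not an MPI or BlueGene implementation): one logical ring per core index reduces across nodes, then each node combines the per-ring results locally so every core ends up with the grand total.

```python
def hierarchical_allreduce(nodes):
    """nodes[i][j] is the contribution of core j on compute node i.
    Phase 1: one logical ring per core index j performs a global
    allreduce (a sum here) across nodes.
    Phase 2: each node performs a local allreduce over the per-ring
    results, so every core holds the grand total."""
    n_cores = len(nodes[0])
    ring_totals = [sum(node[j] for node in nodes) for j in range(n_cores)]
    grand_total = sum(ring_totals)          # the local combine on each node
    return [[grand_total] * n_cores for _ in nodes]

result = hierarchical_allreduce([[1, 2], [3, 4], [5, 6]])
```

    The payoff of the ring structure is that both cores on a node inject traffic concurrently, rather than serializing through one core.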

  2. The Saguaro distributed operating system

    NASA Astrophysics Data System (ADS)

    Andrews, Gregory R.; Schlichting, Richard D.

    1989-05-01

    The progress achieved over the final year of the Saguaro distributed operating system project is presented. The primary achievements were in related research, including the SR distributed programming language, the MLP system for constructing distributed mixed-language programs, the Psync interprocess communication mechanism, a configurable operating system kernel called the x-kernel, and the development of language mechanisms for performing failure handling in distributed programming languages.

  3. Network operating system focus technology

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An activity structured to provide specific design requirements and specifications for the Space Station Data Management System (DMS) Network Operating System (NOS) is outlined. Examples are given of the types of supporting studies and implementation tasks presently underway to realize a DMS test bed capability to develop hands-on understanding of NOS requirements as driven by actual subsystem test beds participating in the overall Johnson Space Center test bed program. Classical operating system elements and principal NOS functions are listed.

  4. Computer Security Systems Enable Access.

    ERIC Educational Resources Information Center

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  5. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1973-01-01

    The TENEX computer system, the ARPA network, and computer language design technology were applied to support the complex system programs. By combining the pragmatic and theoretical aspects of robot development, an approach is created which is grounded in realism, but which also has at its disposal the power that comes from looking at complex problems from an abstract analytical point of view.

  6. Managing secure computer systems and networks.

    PubMed

    Von Solms, B

    1996-10-01

    No computer system or computer network can today be operated without the necessary security measures to secure and protect the electronic assets stored, processed, and transmitted using such systems and networks. Very often the effort involved in managing such security and protection measures is totally underestimated. This paper provides an overview of the security management needed to secure and protect a typical IT system and network. Special reference is made to this management effort in healthcare systems, and the role of the information security officer is also highlighted.

  7. Computer interface system

    NASA Technical Reports Server (NTRS)

    Anderson, T. O. (Inventor)

    1976-01-01

    An interface logic circuit permitting the transfer of information between two computers having asynchronous clocks is disclosed. The information transfer involves utilization of control signals (including request, return-response, ready) to generate properly timed data strobe signals. Noise problems are avoided because each control signal, upon receipt, is verified by at least two clock pulses at the receiving computer. If control signals are verified, a data strobe pulse is generated to accomplish a data transfer. Once initiated, the data strobe signal is properly completed independently of signal disturbances in the control signal initiating the data strobe signal. Completion of the data strobe signal is announced by automatic turn-off of a return-response control signal.

  8. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

    The OPMILL program is an operating system for a Kearney and Trecker milling machine that provides a fast, easy way to program the manufacture of machine parts with an IBM-compatible personal computer. It gives the machinist an "equation plotter" feature, which plots equations that define movements and converts the equations into a milling-machine-controlling program that moves the cutter along the defined path. The system includes tool-manager software that handles up to 25 tools and automatically adjusts to account for each tool. It was developed on an IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
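
    The "equation plotter" idea, sampling an equation into a sequence of linear cutter moves, might look like the following sketch; the G-code dialect and sampling scheme are assumptions for illustration, not OPMILL's actual output format.

```python
import math

def equation_to_gcode(fn, x0, x1, steps):
    """Sample y = fn(x) over [x0, x1] and emit linear moves that
    approximate the curve as a cutter path."""
    lines = []
    for i in range(steps + 1):
        x = x0 + (x1 - x0) * i / steps
        lines.append(f"G01 X{x:.3f} Y{fn(x):.3f}")
    return lines

# Half a sine wave as a toy cutter path.
path = equation_to_gcode(lambda x: 10 * math.sin(x), 0.0, math.pi, 4)
```

    Increasing `steps` trades program length for a closer approximation of the defined curve.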

  9. Executing a gather operation on a parallel computer

    DOEpatents

    Archer, Charles J.; Ratterman, Joseph D.

    2012-03-20

    Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer on the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and storing contribution data gathered from that ranked node. Embodiments also include, repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data; if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, the results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
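
    The per-position OR-contribution scheme can be modeled arithmetically (a logical sketch of the claim, not the combining-network hardware): for each buffer position, every node contributes either its data or zero, and ORing the contributions leaves exactly the matching rank's data.

```python
def gather_by_or(contributions):
    """contributions[r] is the data word from the node of rank r.
    For each result-buffer position, each node contributes its data if
    the position matches its rank, else zero; the root ORs them all."""
    n = len(contributions)
    result = []
    for pos in range(n):
        acc = 0
        for rank in range(n):
            acc |= contributions[rank] if rank == pos else 0
        result.append(acc)
    return result

gathered = gather_by_or([0x0A, 0x0B, 0x0C])
```

    Because x | 0 == x, the OR over the network reduces to a selection, which is what lets a combining network implement gather.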

  10. On the operating point of cortical computation

    NASA Astrophysics Data System (ADS)

    Martin, Robert; Stimberg, Marcel; Wimmer, Klaus; Obermayer, Klaus

    2010-06-01

    In this paper, we consider a class of network models of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map, as found in primary visual cortex (V1). We systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input in order to characterize different operating regimes of the network. We then compare the map-location dependence of the tuning in the networks with different parametrizations with the neuronal tuning measured in cat V1 in vivo. By considering the tuning of neuronal dynamic and state variables, conductances and membrane potential respectively, our quantitative analysis is able to constrain the operating regime of V1: The data provide strong evidence for a network, in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition, operating in vivo. Interestingly, this recurrent regime is close to a regime of "instability", characterized by strong, self-sustained activity. The firing rate of neurons in the best-fitting model network is therefore particularly sensitive to small modulations of model parameters, possibly one of the functional benefits of this particular operating regime.

  11. Redefining Tactical Operations for MER Using Cloud Computing

    NASA Technical Reports Server (NTRS)

    Joswig, Joseph C.; Shams, Khawaja S.

    2011-01-01

    The Mars Exploration Rover Mission (MER) includes the twin rovers, Spirit and Opportunity, which have been performing geological research and surface exploration since early 2004. The rovers' durability well beyond their original prime mission (90 sols, or Martian days) has allowed them to be a valuable platform for scientific research for well over 2000 sols, but as a by-product it has produced new challenges in providing efficient and cost-effective tactical operational planning. An early process adaptation was the move to distributed operations as mission scientists returned to their places of work in the summer of 2004, but they would still come together via teleconference and connected software to plan rover activities a few times a week. This distributed model has worked well since, but it requires the purchase, operation, and maintenance of a dedicated infrastructure at the Jet Propulsion Laboratory. This server infrastructure is costly to operate, and the periodic nature of its usage (typically heavy usage for 8 hours every 2 days) has made moving to a cloud-based tactical infrastructure an extremely tempting proposition. In this paper we review both past and current implementations of the tactical planning application, focusing on remote plan saving, and discuss the unique challenges present with long-latency, distributed operations. We then detail the motivations behind our move to cloud-based computing services as well as our system design and implementation. We also discuss security and reliability concerns and how they were addressed.

  12. Determining collective barrier operation skew in a parallel computer

    SciTech Connect

    Faraj, Daniel A.

    2015-11-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
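    The measurement procedure described above reduces to a max-minus-min over the per-node barrier completion times. A minimal sketch of that final calculation (node names and timing values are invented for illustration; the patent describes the full delayed-node measurement protocol):

    ```python
    def barrier_skew(completion_times):
        """completion_times: mapping of node id -> barrier completion time
        measured while that node was the delayed node (here in microseconds).
        The skew is the difference of the maximum and minimum times."""
        times = completion_times.values()
        return max(times) - min(times)

    # Each node's completion time, gathered one delayed-node round at a time.
    measured = {"node0": 1002, "node1": 1015, "node2": 1007, "node3": 1004}
    print(barrier_skew(measured))  # 13
    ```

    Using integer microseconds keeps the subtraction exact; with floating-point seconds the same max-minus-min applies.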

  13. Refurbishment program of HANARO control computer system

    SciTech Connect

    Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S.

    2012-07-01

    HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller) manufactured by MOORE has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts could have caused great problems. The first consideration of a replacement for the control computer dates back to 2007. The supplier no longer produced MLC components, so the system could no longer be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace MLC. The new computer system is HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system with suitable interfaces, installation and commissioning without a special outage, and no change to the well-proven operation philosophy. HCCS is a distributed control system (DCS) using PLCs manufactured by RTP. To enhance reliability, we adopt a triple-processor system, a double I/O system, and a hot-swapping function. This paper describes the refurbishment program of the HANARO control system, including the design requirements of HCCS. (authors)

  14. Launch systems operations cost modeling

    NASA Astrophysics Data System (ADS)

    Jacobs, Mark K.

    1999-01-01

    This paper describes the launch systems operations modeling portion of a larger model development effort, NASA's Space Operations Cost Model (SOCM), led by NASA HQ. The SOCM study team, which includes cost and technical experts from each NASA Field Center and various contractors, has been tasked to model operations costs for all future NASA mission concepts including planetary and Earth orbiting science missions, space facilities, and launch systems. The launch systems operations modeling effort has near term significance for assessing affordability of our next generation launch vehicles and directing technology investments, although it provides only a part of the necessary inputs to assess life cycle costs for all elements that determine affordability for a launch system. Presented here is a methodology to estimate requirements associated with a launch facility infrastructure, or Spaceport, from start-up/initialization into steady-state operation. Included are descriptions of the reference data used, the unique estimating methodology that combines cost lookup tables, parametric relationships, and constructively-developed correlations of cost driver input values to collected reference data, and the output categories that can be used by economic and market models. Also, future plans to improve integration of launch vehicle development cost models, reliability and maintainability models, economic and market models, and this operations model to facilitate overall launch system life cycle performance simulations will be presented.

  15. Choosing the right computer system.

    PubMed

    Freydberg, B K; Seltzer, S M; Walker, B

    1999-08-01

    We are living in a world where virtually any information you desire can be acquired in a matter of moments with the click of a mouse. The computer is a ubiquitous fixture in elementary schools, universities, small companies, large companies, and homes. Many dental offices have incorporated computers as an integral part of their management systems. However, the role of the computer is expanding in the dental office as new hardware and software advancements emerge. The growing popularity of digital radiography and photography is making the possibility of a completely digital patient record more desirable. The trend for expanding the role of dental office computer systems is reflected in the increased number of companies that offer computer packages. The purchase of one of these new systems represents a significant commitment on the part of the dentist and staff. Not only do the systems have a substantial price tag, but they require a great deal of time and effort to become fully integrated into the daily office routine. To help the reader gain some clarity on the blur of new hardware and software available, I have enlisted the help of three recognized authorities on the subject of office organization and computer systems. This article is not intended to provide a ranking of features and shortcomings of specific products that are available, but rather to present a process by which the reader might be able to make better choices when selecting or upgrading a computer system.

  16. Students "Hacking" School Computer Systems

    ERIC Educational Resources Information Center

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  17. Lewis hybrid computing system, users manual

    NASA Technical Reports Server (NTRS)

    Bruton, W. M.; Cwynar, D. S.

    1979-01-01

    The Lewis Research Center's Hybrid Simulation Lab contains a collection of analog, digital, and hybrid (combined analog and digital) computing equipment suitable for the dynamic simulation and analysis of complex systems. This report is intended as a guide to users of these computing systems. The report describes the available equipment and outlines procedures for its use. Particular attention is given to the operation of the PACER 100 digital processor. System software to accomplish the usual digital tasks, such as compiling and editing, and Lewis-developed special-purpose software are described.

  18. Redesigning the District Operating System

    ERIC Educational Resources Information Center

    Hodas, Steven

    2015-01-01

    In this paper, we look at the inner workings of a school district through the lens of the "district operating system (DOS)," a set of interlocking mutually-reinforcing modules that includes functions like procurement, contracting, data and IT policy, the general counsel's office, human resources, and the systems for employee and family…

  19. Computer system design (supermicrocomputers)

    SciTech Connect

    Warren, C.

    1983-05-26

    The main architectural differences between conventional microcomputer systems and supermicrocomputers are the following features which the latter possess: specialised bus for interprocessor communication; two or more processors, ranging from 8-bit to 48-bit-slice designs; and fast bus designs which permit data transfers by the byte or by the word. The majority of supermicrocomputers are 16-bit or 32-bit multiuser, multitasking systems able to address large amounts of physical and virtual memory. Current developments in supermicrocomputers are discussed with reference to a variety of available machines.

  20. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics were formulated of the approach taken in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  1. User computer system pilot project

    SciTech Connect

    Eimutis, E.C.

    1989-09-06

    The User Computer System (UCS) is a general purpose unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

  2. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. 
This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  3. UFO (UnFold Operator) computer program abstract

    SciTech Connect

    Kissel, L.; Biggs, F.

    1982-11-01

    UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

  4. NIF Integrated Computer Controls System Description

    SciTech Connect

    VanArsdall, P.

    1998-01-26

    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  5. Computer vision applied to vehicle operation

    SciTech Connect

    Metzler, H.G.

    1988-01-01

    Among the many tasks of car development, including safety, economy, environmental benefits, and convenience, safety should have a high priority. One of the main goals is the reduction of the number of accidents. Environment and situation recognition by autonomous vehicle-electronic systems can contribute to the recognition of problems, together with information to the driver or direct intervention in the car's behaviour. This paper describes some techniques for environment recognition, the status of a present project, and the goals of some PROMETHEUS (Program for a European Traffic with Highest Efficiency and Unprecedented Safety) projects.

  6. Telemetry Computer System at Wallops Flight Center

    NASA Technical Reports Server (NTRS)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.
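    The abstract does not specify how the data compressor "removes redundant data"; one common telemetry technique, sketched here purely as an assumption rather than the documented Wallops design, is deadband suppression, which drops samples that have not changed meaningfully since the last transmitted value:

    ```python
    def compress(samples, deadband=0.0):
        """Deadband compressor for a telemetry stream of (time, value) pairs:
        emit a sample only when it differs from the last emitted value by
        more than `deadband`, dropping redundant repeats."""
        out = []
        last = None
        for t, v in samples:
            if last is None or abs(v - last) > deadband:
                out.append((t, v))
                last = v
        return out

    # A parameter that holds steady, then steps to a new value.
    stream = [(0, 5.0), (1, 5.0), (2, 5.1), (3, 7.0), (4, 7.0)]
    print(compress(stream, deadband=0.5))  # [(0, 5.0), (3, 7.0)]
    ```

    The deadband trades bandwidth for fidelity: a deadband of 0 drops only exact repeats, while a wider deadband also discards small fluctuations.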

  7. Design and Effectiveness of Intelligent Tutors for Operators of Complex Dynamic Systems: A Tutor Implementation for Satellite System Operators.

    ERIC Educational Resources Information Center

    Mitchell, Christine M.; Govindaraj, T.

    1990-01-01

    Discusses the use of intelligent tutoring systems as opposed to traditional on-the-job training for training operators of complex dynamic systems and describes the computer architecture for a system for operators of a NASA (National Aeronautics and Space Administration) satellite control system. An experimental evaluation with college students is…

  8. Advanced Transport Operating Systems Program

    NASA Technical Reports Server (NTRS)

    White, John J.

    1990-01-01

    NASA-Langley's Advanced Transport Operating Systems Program employs a heavily instrumented, B 737-100 as its Transport Systems Research Vehicle (TRSV). The TRSV has been used during the demonstration trials of the Time Reference Scanning Beam Microwave Landing System (TRSB MLS), the '4D flight-management' concept, ATC data links, and airborne windshear sensors. The credibility obtainable from successful flight test experiments is often a critical factor in the granting of substantial commitments for commercial implementation by the FAA and industry. In the case of the TRSB MLS, flight test demonstrations were decisive to its selection as the standard landing system by the ICAO.

  9. Operation of large cryogenic systems

    SciTech Connect

    Rode, C.H.; Ferry, B.; Fowler, W.B.; Makara, J.; Peterson, T.; Theilacker, J.; Walker, R.

    1985-06-01

    This report is based on the past 12 years of experiments on R and D and operation of the 27 kW Fermilab Tevatron Cryogenic System. In general the comments are applicable for all helium plants larger than 1000W (400 l/hr) and non mass-produced nitrogen plants larger than 50 tons per day. 14 refs., 3 figs., 1 tab.

  10. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithm information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  12. Transforming Power Grid Operations via High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Nieplocha, Jarek

    2008-07-31

    Past power grid blackout events revealed the inadequacy of grid operations in responding to adverse situations, partially due to low computational efficiency in grid operation functions. High performance computing (HPC) provides a promising solution to this problem. HPC applications in power grid computation also become necessary to take advantage of parallel computing platforms, as the computer industry is undergoing a significant change from the traditional single-processor environment to an era of multi-processor computing platforms. HPC applications to power grid operations are multi-fold. HPC can improve today’s grid operation functions like state estimation and contingency analysis and reduce the solution time from minutes to seconds, comparable to SCADA measurement cycles. HPC also enables the integration of dynamic analysis into real-time grid operations. Dynamic state estimation, look-ahead dynamic simulation and real-time dynamic contingency analysis can be implemented and would be three key dynamic functions in future control centers. HPC applications call for better decision support tools, which also need HPC support to handle large volumes of data and large numbers of cases. Given the complexity of the grid and the sheer number of possible configurations, HPC is considered to be an indispensable element in the next generation control centers.
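    Contingency analysis parallelizes well because each contingency case is an independent solve. The sketch below uses a process pool whose first-come-first-served task queue acts as a simple stand-in for dynamic load balancing; the "power-flow solve" is a toy placeholder, not a real grid computation, and all names are illustrative:

    ```python
    from multiprocessing import Pool

    def evaluate_contingency(case_id):
        # Stand-in for a full AC power-flow solve with element `case_id`
        # removed; returns (case_id, worst post-contingency loading in %).
        return case_id, 80.0 + (case_id * 7) % 40

    def screen_contingencies(case_ids, limit=100.0, workers=4):
        """Return, sorted, the contingency cases whose worst loading
        exceeds `limit`; cases are evaluated concurrently, and workers
        pull new cases as they finish (dynamic load balancing)."""
        with Pool(workers) as pool:
            results = pool.imap_unordered(evaluate_contingency, case_ids)
            return sorted(cid for cid, loading in results if loading > limit)

    if __name__ == "__main__":
        print(screen_contingencies(range(12)))  # [3, 4, 5, 9, 10, 11]
    ```

    With real solve times of seconds per case and thousands of N-1 cases, this embarrassingly parallel structure is what lets HPC bring screening down toward SCADA cycle times.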

  13. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.
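    The abstract does not name the detection algorithm; a classic automatic trigger of that era is the short-term-average/long-term-average (STA/LTA) ratio, sketched below purely as an illustration of the idea (window lengths, threshold, and the synthetic trace are invented, and this is not claimed to be the USGS system's method):

    ```python
    def sta_lta_trigger(samples, sta_n=5, lta_n=20, threshold=3.0):
        """Return the indices where the ratio of the short-term average to
        the long-term average of absolute amplitude exceeds `threshold`:
        a sudden high-amplitude arrival raises STA much faster than LTA."""
        triggers = []
        for i in range(lta_n, len(samples)):
            sta = sum(abs(x) for x in samples[i - sta_n:i]) / sta_n
            lta = sum(abs(x) for x in samples[i - lta_n:i]) / lta_n
            if lta > 0 and sta / lta >= threshold:
                triggers.append(i)
        return triggers

    quiet = [1, -1] * 15                     # low-amplitude background noise
    event = [20, -20, 20, -20, 20, -20]      # sudden high-amplitude arrival
    print(sta_lta_trigger(quiet + event))    # [33, 34, 35]
    ```

    A production detector would add detrending, per-station thresholds, and coincidence logic across stations to suppress false triggers.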

  14. The Advanced Technology Operations System: ATOS

    NASA Technical Reports Server (NTRS)

    Kaufeler, J.-F.; Laue, H. A.; Poulter, K.; Smith, H.

    1993-01-01

    Mission control systems supporting new space missions face ever-increasing requirements in terms of functionality, performance, reliability and efficiency. Modern data processing technology is providing the means to meet these requirements in new systems under development. During the past few years the European Space Operations Centre (ESOC) of the European Space Agency (ESA) has carried out a number of projects to demonstrate the feasibility of using advanced software technology, in particular, knowledge based systems, to support mission operations. A number of advances must be achieved before these techniques can be moved towards operational use in future missions, namely, integration of the applications into a single system framework and generalization of the applications so that they are mission independent. In order to achieve this goal, ESA initiated the Advanced Technology Operations System (ATOS) program, which will develop the infrastructure to support advanced software technology in mission operations, and provide applications modules to initially support: Mission Preparation, Mission Planning, Computer Assisted Operations, and Advanced Training. The first phase of the ATOS program is tasked with the goal of designing and prototyping the necessary system infrastructure to support the rest of the program. The major components of the ATOS architecture are presented. This architecture relies on the concept of a Mission Information Base (MIB) as the repository for all information and knowledge which will be used by the advanced application modules in future mission control systems. The MIB is being designed to exploit the latest in database and knowledge representation technology in an open and distributed system. In conclusion, the technological and implementation challenges expected to be encountered, as well as the future plans and time scale of the project, are presented.

  15. Basic Operational Robotics Instructional System

    NASA Technical Reports Server (NTRS)

    Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John

    2013-01-01

    The Basic Operational Robotics Instructional System (BORIS) is a six-degree-of-freedom rotational robotic manipulator system simulation used for training of fundamental robotics concepts, with in-line shoulder, offset elbow, and offset wrist. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.
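    The forward-kinematics idea mentioned above can be illustrated with a planar reduction: compose each joint's rotation with a translation along its link to obtain the end-effector position. This is only a 2-link sketch of the concept (BORIS itself simulates a 6-DOF arm with dynamics and contact models), and the function and values below are invented for illustration:

    ```python
    import math

    def forward_kinematics(link_lengths, joint_angles):
        """End-effector (x, y) of a planar serial arm: accumulate each
        joint's rotation, then step one link length in that direction."""
        x = y = 0.0
        theta = 0.0
        for length, q in zip(link_lengths, joint_angles):
            theta += q                 # joint rotations compose additively
            x += length * math.cos(theta)
            y += length * math.sin(theta)
        return x, y

    # Two unit links, both joints at 90 degrees: up one unit, then back left.
    x, y = forward_kinematics([1.0, 1.0], [math.pi / 2, math.pi / 2])
    print(round(x, 6), round(y, 6))  # -1.0 1.0
    ```

    Inverse kinematics, the other algorithm the abstract names, runs this mapping backwards (from a desired end-effector pose to joint angles) and generally requires iterative or closed-form solutions per arm geometry.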

  16. ADAMS executive and operating system

    NASA Technical Reports Server (NTRS)

    Pittman, W. D.

    1981-01-01

    The ADAMS Executive and Operating System is described: a multitasking environment under which a variety of data reduction, display, and utility programs are executed, and which provides a high level of isolation between programs, allowing them to be developed and modified independently. The Airborne Data Analysis/Monitor System (ADAMS) was developed to provide a real-time data monitoring and analysis capability onboard Boeing commercial airplanes during flight testing. It inputs sensor data from an airplane, derives performance data by applying transforms to the collected sensor data, and presents this data to test personnel via various display media. Current utilization and future development are addressed.

  17. SIRTF Science Operations System Design

    NASA Technical Reports Server (NTRS)

    Green, William

    1999-01-01

    The Space Infrared Telescope Facility (SIRTF) will be launched in December 2001, and perform an extended series of science observations at wavelengths ranging from 20 to 160 microns for five years or more. The California Institute of Technology has been selected as the home for the SIRTF Science Center (SSC). The SSC will be responsible for evaluating and selecting observation proposals, providing technical support to the science community, performing mission planning and science observation scheduling activities, instrument calibration during operations and instrument health monitoring, production of archival quality data products, and management of science research grants. The science payload consists of three instruments delivered by instrument Principal Investigators located at University of Arizona, Cornell, and Harvard Smithsonian Astrophysical Observatory. The SSC is responsible for design, development, and operation of the Science Operations System (SOS) which will support the functions assigned to the SSC by NASA. The SIRTF spacecraft, mission profile, and science instrument design have undergone almost ten years of refinement. SIRTF development and operations activities are highly cost constrained. The cost constraints have impacted the design of the SOS in several ways. The Science Operations System has been designed to incorporate a set of efficient, easy to use tools which will make it possible for scientists to propose observation sequences in a rapid and automated manner. The use of highly automated tools for requesting observations will simplify the long range observatory scheduling process, and the short term scheduling of science observations. Pipeline data processing will be highly automated and data

  18. EOS Operations Systems: EDOS Implemented Changes to Reduce Operations Costs

    NASA Technical Reports Server (NTRS)

    Cordier, Guy R.; Gomez-Rosa, Carlos; McLemore, Bruce D.

    2007-01-01

    The authors describe in this paper the progress achieved to date with the reengineering of the Earth Observing System (EOS) Data and Operations System (EDOS), the experience gained in the process, and the ensuing reduction of ground systems operations costs. The reengineering effort included a major methodology change: replacing an existing schedule-driven system with a data-driven system approach.

  19. Adaptive Fuzzy Systems in Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1996-01-01

    In recent years, the interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly, and a number of their applications have been developed in government and industry. In the future, an essential element in these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  20. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  1. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  2. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated...

  3. Method and apparatus of parallel computing with simultaneously operating stream prefetching and list prefetching engines

    DOEpatents

    Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan

    2012-12-11

    A prefetch system improves the performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine, and it operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data that the processor will need in subsequent clock cycles in response to the passed command.
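    The two engine types can be contrasted with a toy Python model (the class names, the fixed look-ahead depth, and the unit-stride stream detection are illustrative assumptions, not details of the patent):

```python
class StreamPrefetchEngine:
    """Detects a sequential access stream and prefetches the next addresses."""
    def __init__(self, depth=4):
        self.depth = depth          # look-ahead depth (illustrative choice)
        self.last_addr = None

    def on_access(self, addr):
        prefetches = []
        if self.last_addr is not None and addr == self.last_addr + 1:
            # Sequential stream detected: fetch the next `depth` addresses.
            prefetches = [addr + i for i in range(1, self.depth + 1)]
        self.last_addr = addr
        return prefetches


class ListPrefetchEngine:
    """Replays a previously recorded list of addresses (e.g. from a prior
    iteration of the same loop), regardless of their pattern."""
    def __init__(self, address_list):
        self.address_list = address_list
        self.pos = 0

    def on_access(self, addr):
        prefetches = []
        if self.pos < len(self.address_list) and self.address_list[self.pos] == addr:
            # Access matches the recorded list: prefetch the next recorded address.
            self.pos += 1
            if self.pos < len(self.address_list):
                prefetches = [self.address_list[self.pos]]
        return prefetches


# Both engines observe the same access stream simultaneously, as described.
stream = StreamPrefetchEngine()
lst = ListPrefetchEngine([10, 42, 7, 99])
for a in (10, 11, 12):
    stream_hits = stream.on_access(a)
    list_hits = lst.on_access(a)
```

The point of running both at once is complementary coverage: the stream engine wins on regular, sequential traffic, while the list engine covers irregular but repeating access patterns.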

  4. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
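    The Dice coefficients quoted above measure overlap between two binary segmentations; a stdlib-only Python sketch on toy voxel sets (made-up data, not real FSL output):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as sets of voxel
    coordinates: 2|A ∩ B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Two hypothetical subcortical labelings of the same subject produced on
# different operating systems (toy coordinates for illustration only).
labeling_os1 = {(1, 1), (1, 2), (2, 1), (2, 2)}
labeling_os2 = {(1, 1), (1, 2), (2, 1), (3, 3)}
print(dice(labeling_os1, labeling_os2))  # 0.75
```

A value of 1.0 means the two platforms produced identical masks; the 0.59 reported above corresponds to substantial disagreement between runs of the same pipeline.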

  5. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants only utilize about 30 percent of the primary energy they consume; the rest is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm designed to minimize operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; and (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.

  6. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    SciTech Connect

    Carlson, R.B.

    1992-05-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software.

  7. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    SciTech Connect

    Carlson, R.B.

    1992-01-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software.

  8. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early vision tasks can significantly enhance the performance of whole computer vision tasks, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specific front-end processors. These processors, based on FPGAs (field-programmable gate arrays), can be reprogrammed to support a range of different feature maps, such as edge detection and linking, and image filtering. The front-end processors and a powerful DSP constitute a computing platform that can perform many CV tasks. Additionally, we adopt focus-of-attention techniques to reduce I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (including the front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data and any other early vision feature maps through the dual-ported memory banks. In this way, computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which lets the user reconfigure the system at any time. We also present test results of a computer vision application based on the system.

  9. Spaceborne application multiprocessor operating system

    NASA Astrophysics Data System (ADS)

    Grisbeck, Gary S.; Webber, Wesley D.

    1992-03-01

    The Operational Kernel (OK) system for the Spaceborne Processor Array-1 (SPA-1) software development environment is described. The OK system demonstration featured fully autonomous onboard control of data movement, fault detection, fault isolation, hardware reconfiguration, application restart, and load balancing. Random nodal or processing hardware was caused to fail by selection of switches on a fault injection panel. The SPA-1 based on the OK written in Ada detected that a failure had occurred, isolated it, redistributed the processing load, and continued with the application processing.

  10. Aging and computational systems biology.

    PubMed

    Mooney, Kathleen M; Morgan, Amy E; Mc Auley, Mark T

    2016-01-01

    Aging research is undergoing a paradigm shift, which has led to new and innovative methods of exploring this complex phenomenon. The systems biology approach endeavors to understand biological systems in a holistic manner, by taking account of intrinsic interactions, while also attempting to account for the impact of external inputs, such as diet. A key technique employed in systems biology is computational modeling, which involves mathematically describing and simulating the dynamics of biological systems. Although a large number of computational models have been developed in recent years, these models have focused on various discrete components of the aging process, and to date no model has succeeded in completely representing the full scope of aging. Combining existing models or developing new models may help to address this need and in so doing could help achieve an improved understanding of the intrinsic mechanisms which underpin aging.

  11. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  12. Policy Information System Computer Program.

    ERIC Educational Resources Information Center

    Hamlin, Roger E.; And Others

    The concepts and methodologies outlined in "A Policy Information System for Vocational Education" are presented in a simple computer format in this booklet. It also contains a sample output representing 5-year projections of various planning needs for vocational education. Computerized figures in the eight areas corresponding to those in the…

  13. Computational Systems for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David

    2002-01-01

    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  14. Towards molecular computers that operate in a biological environment

    NASA Astrophysics Data System (ADS)

    Kahan, Maya; Gil, Binyamin; Adar, Rivka; Shapiro, Ehud

    2008-07-01

    important consequences when performed in a proper context. We envision that molecular computers that operate in a biological environment can be the basis of “smart drugs”, which are potent drugs that activate only if certain environmental conditions hold. These conditions could include abnormalities in the molecular composition of the biological environment that are indicative of a particular disease. Here we review the research direction that set this vision and attempts to realize it.

  15. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.

  16. Advanced Autonomous Systems for Space Operations

    NASA Astrophysics Data System (ADS)

    Gross, A. R.; Smith, B. D.; Muscettola, N.; Barrett, A.; Mjolssness, E.; Clancy, D. J.

    2002-01-01

    New missions of exploration and space operations will require unprecedented levels of autonomy to successfully accomplish their objectives. Inherently high levels of complexity, cost, and communication distances will preclude the degree of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of not only meeting the greatly increased space exploration requirements, but simultaneously dramatically reducing the design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health management capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of advanced space operations, since the science and operational requirements specified by such missions, as well as the budgetary constraints, will limit the current practice of monitoring and controlling missions by a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such on-board systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having to have such commands transmitted from Earth. This enables missions of such complexity and communication distances as are not

  17. VAXIMA. Computer Algebra System Under UNIX

    SciTech Connect

    Fateman, R.

    1992-03-16

    VAXIMA, derived from Project MAC's SYmbolic MAnipulation system MACSYMA, is a large computer programming system written in LISP, used for performing symbolic as well as numerical mathematical manipulations. With VAXIMA, the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations with direct or transform methods, compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for VAXIMA's development and use on the DEC VAX11 executing under the Berkeley UNIX Release 4.2 operating system. An executable version of Lisp (the Lisp interpreter) and Liszt (the Lisp compiler) as well as the complete documentation files are included.

  18. Top 10 Threats to Computer Systems Include Professors and Students

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  19. Transient upset models in computer systems

    NASA Technical Reports Server (NTRS)

    Mason, G. M.

    1983-01-01

    Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of Z-80- and 8085-based systems.
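    As one illustration of the monitoring idea, a check on the opcode (fetch) address domain can be sketched as follows; the address ranges and function name are hypothetical, not taken from the paper:

```python
# An external monitor, transparent to the running system, can flag a
# program-flow upset by checking each instruction fetch against the
# program's legal code address ranges (an illustrative memory map).
VALID_CODE_RANGES = [(0x0000, 0x1FFF), (0x8000, 0x80FF)]

def upset_detected(fetch_addr):
    """True if an instruction fetch falls outside every legal code range."""
    return not any(lo <= fetch_addr <= hi for lo, hi in VALID_CODE_RANGES)

print(upset_detected(0x0100))  # False: fetch inside the main code region
print(upset_detected(0x4000))  # True: fetch from a non-code address
```

A real monitor would watch the address bus in hardware; the point is only that program-flow upsets become detectable once the legal opcode address domain is tabulated in advance.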

  20. Computer-controlled radiation monitoring system

    SciTech Connect

    Homann, S.G.

    1994-09-27

    A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years and has been a valuable tool for keeping personnel exposure as low as reasonably achievable.

  1. National Ignition Facility integrated computer control system

    SciTech Connect

    Van Arsdall, P.J., LLNL

    1998-06-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  2. Operational Management System for Regulated Water Systems

    NASA Astrophysics Data System (ADS)

    van Loenen, A.; van Dijk, M.; van Verseveld, W.; Berger, H.

    2012-04-01

    Most of the large Dutch rivers, canals and lakes are controlled by the Dutch water authorities. The main reasons concern safety, navigation and fresh water supply. Historically, the separate water bodies have been controlled locally. For optimal management of these water systems, an integrated approach was required. Presented is a platform which integrates data from all control objects for monitoring and control purposes. The Operational Management System for Regulated Water Systems (IWP) is an implementation of Delft-FEWS which supports operational control of water systems and actively gives advice. One of the main characteristics of IWP is that it collects, transforms and presents different types of data in real time, all of which add to the operational water management. Next to that, hydrodynamic models and intelligent decision support tools are added to support the water managers during their daily control activities. An important advantage of IWP is that it uses the Delft-FEWS framework, so processes like central data collection, transformation, data processing and presentation are simply configured. At all control locations the same information is readily available. The operational water management itself gains from this information, but it can also contribute to cost efficiency (no unnecessary pumping), better use of available storage and advice during (water pollution) calamities.

  3. A Computerized Operator Support System Prototype

    SciTech Connect

    Ken Thomas; Ronald Boring; Roger Lew; Tom Ulrich; Richard Villim

    2013-11-01

    A report was published by the Idaho National Laboratory in September of 2012, entitled Design to Achieve Fault Tolerance and Resilience, which described the benefits of automating operator actions for transients. The report identified situations in which providing additional automation in lieu of operator actions would be advantageous. It recognized that managing certain plant upsets is sometimes limited by the operator’s ability to quickly diagnose the fault and to take the needed actions in the time available. Undoubtedly, technology is underutilized in the nuclear power industry for operator assistance during plant faults and operating transients. In contrast, other industry sectors have amply demonstrated that various forms of operator advisory systems can enhance operator performance while maintaining the role and responsibility of the operator as the independent and ultimate decision-maker. A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS does not supplant the role of the operator, but rather provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. This project proposes a general model for a control room COSS that addresses a sequence of general tasks required to manage any plant upset: detection, validation, diagnosis, recommendation, monitoring, and recovery. The model serves as a framework for assembling a set of technologies that can be interrelated to assist with each of these tasks. A prototype COSS has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based

  4. A Computerized Operator Support System Prototype

    SciTech Connect

    Ken Thomas; Ronald Boring; Roger Lew; Tom Ulrich; Richard Villim

    2013-08-01

    A report was published by the Idaho National Laboratory in September of 2012, entitled Design to Achieve Fault Tolerance and Resilience, which described the benefits of automating operator actions for transients. The report identified situations in which providing additional automation in lieu of operator actions would be advantageous. It recognized that managing certain plant upsets is sometimes limited by the operator’s ability to quickly diagnose the fault and to take the needed actions in the time available. Undoubtedly, technology is underutilized in the nuclear power industry for operator assistance during plant faults and operating transients. In contrast, other industry sectors have amply demonstrated that various forms of operator advisory systems can enhance operator performance while maintaining the role and responsibility of the operator as the independent and ultimate decision-maker. A computerized operator support system (COSS) is proposed for use in nuclear power plants to assist control room operators in addressing time-critical plant upsets. A COSS is a collection of technologies to assist operators in monitoring overall plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS does not supplant the role of the operator, but rather provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. This project proposes a general model for a control room COSS that addresses a sequence of general tasks required to manage any plant upset: detection, validation, diagnosis, recommendation, monitoring, and recovery. The model serves as a framework for assembling a set of technologies that can be interrelated to assist with each of these tasks. A prototype COSS has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based

  5. Operating the Worldwide LHC Computing Grid: current and future challenges

    NASA Astrophysics Data System (ADS)

    Flix Molina, J.; Forti, A.; Girone, M.; Sciaba, A.

    2014-06-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of testing and deployment of Grid middleware and services, communication with the experiments and the sites, follow-up and resolution of operational issues, and medium/long-term planning. In 2012 WLCG critically reviewed all operational procedures and restructured the organisation of the operations team into a more coherent effort in order to improve its efficiency. In this paper we describe how the new organisation works, its recent successes, and the changes to be implemented during the long LHC shutdown in preparation for LHC Run 2.

  6. Time Warp Operating System, Version 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; Younger, Herbert C.

    1993-01-01

    Time Warp Operating System, TWOS, is special purpose computer program designed to support parallel simulation of discrete events. Complete implementation of Time Warp software mechanism, which implements distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. Supports simulations and other computations in which both virtual time and dynamic load balancing used. Program utilizes underlying resources of operating system. Written in C programming language.
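    The rollback mechanism at the heart of Time Warp can be sketched in a few lines of Python. This is a toy model: it keeps per-event state checkpoints and rolls back when a "straggler" message arrives with a timestamp in the process's virtual past, but it omits re-execution of rolled-back events, anti-messages, and message annihilation, and all names are illustrative:

```python
class LogicalProcess:
    """One simulation object with its own local virtual time (LVT)."""
    def __init__(self):
        self.lvt = 0
        self.state = 0
        self.saved = [(0, 0)]     # (virtual time, state) checkpoints

    def handle(self, timestamp, value):
        if timestamp < self.lvt:
            # Straggler message: roll back to the last checkpoint taken
            # at or before the straggler's timestamp.
            while self.saved[-1][0] > timestamp:
                self.saved.pop()
            self.lvt, self.state = self.saved[-1]
        # Process the event optimistically and checkpoint the new state.
        self.lvt = timestamp
        self.state += value
        self.saved.append((self.lvt, self.state))

lp = LogicalProcess()
lp.handle(10, 1)
lp.handle(20, 2)   # state is now 3 at virtual time 20
lp.handle(15, 5)   # straggler: first rolls back to the saved time-10 state
```

Optimistic execution plus this rollback discipline is what lets TWOS run discrete events in parallel without a global lock-step clock.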

  7. Redundant computing for exascale systems.

    SciTech Connect

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian

    2010-12-01

    Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.
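    The scaling pressure described here can be estimated with Young's first-order formula for the optimal checkpoint interval, tau ≈ sqrt(2 * delta * M), where delta is the checkpoint cost and M the system MTBF. This formula is a standard result from the fault-tolerance literature, not taken from the paper, and the numbers below are purely illustrative:

```python
import math

def optimal_checkpoint_interval(ckpt_cost_s, node_mtbf_s, num_nodes):
    """Young's approximation: tau = sqrt(2 * delta * M), where the system
    MTBF M shrinks as node MTBF / node count."""
    system_mtbf = node_mtbf_s / num_nodes
    return math.sqrt(2 * ckpt_cost_s * system_mtbf)

def overhead_fraction(ckpt_cost_s, node_mtbf_s, num_nodes):
    """Rough fraction of wall time lost to writing checkpoints alone,
    delta / (tau + delta); restart and lost rework are ignored."""
    tau = optimal_checkpoint_interval(ckpt_cost_s, node_mtbf_s, num_nodes)
    return ckpt_cost_s / (tau + ckpt_cost_s)

# Illustrative inputs: 5-minute checkpoints, 5-year per-node MTBF.
NODE_MTBF = 5 * 365 * 24 * 3600
for nodes in (1_000, 50_000):
    print(f"{nodes} nodes: ~{overhead_fraction(300, NODE_MTBF, nodes):.0%} lost to checkpointing")
```

Because the system MTBF falls linearly with node count while the checkpoint cost does not, the lost fraction grows with scale; once restart and redone work are added, this is the regime where the paper's redundant-computing alternative becomes attractive.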

  8. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
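    The rectangle-completion exchange described in this patent can be sketched in Python; the matrix size, the two-character codes, and the function names are illustrative assumptions, and the bookkeeping that retires used subsets after one challenge is omitted:

```python
import random

def make_matrix(rows, cols, rng):
    """Matrix of distinct two-character codes (the 'character subsets')."""
    chars = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"
    codes = rng.sample([a + b for a in chars for b in chars], rows * cols)
    return [codes[r * cols:(r + 1) * cols] for r in range(rows)]

def challenge(matrix, rng):
    """Pick two cells in different rows AND different columns: they define
    opposite corners of a rectangle in the matrix."""
    rows, cols = len(matrix), len(matrix[0])
    r1, r2 = rng.sample(range(rows), 2)
    c1, c2 = rng.sample(range(cols), 2)
    return (r1, c1), (r2, c2)

def expected_response(matrix, corner1, corner2):
    """The correct reply: the codes at the rectangle's other two corners."""
    (r1, c1), (r2, c2) = corner1, corner2
    return {matrix[r1][c2], matrix[r2][c1]}

rng = random.Random(0)           # fixed seed for a reproducible demo
m = make_matrix(4, 4, rng)
a, b = challenge(m, rng)
print("challenge:", m[a[0]][a[1]], m[b[0]][b[1]])
print("response :", expected_response(m, a, b))
```

An eavesdropper who records one exchange learns only four retired subsets; because used subsets are never reissued, replaying the observed response gains nothing, which is the one-time property the abstract claims.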

  9. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  10. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type, comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary, is disclosed. Communication between the working nodes is via one communications network, while communication between the working nodes and the watchdog and load-balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises a plurality of first computing nodes and a first network of message-conducting paths interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes. The branch also comprises a first watchdog node and a second network of message-conducting paths connecting the first computing nodes to the first watchdog node independently of the first network; the second network provides an independent path for test-message and reconfiguration-affecting transfers between the first computing nodes and the first watchdog node. There is additionally a plurality of second computing nodes and a third network of message-conducting paths interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message-conducting paths connects the second computing nodes to the first watchdog node independently of the third network. The fourth network provides an independent path for test-message and reconfiguration-affecting transfers between the second computing nodes and the first watchdog node; and a first multiplexer disposed between the first watchdog node and the second and fourth networks allows the first watchdog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as a second watchdog node

  11. Computer-aided system design

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  12. A space transportation system operations model

    NASA Technical Reports Server (NTRS)

    Morris, W. Douglas; White, Nancy H.

    1987-01-01

    Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.
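    As a toy illustration of the kind of resource arithmetic such an operations model performs, the sketch below relates task times and available facilities to flight rate and utilization. The function names, the 8760-hour year, and the 85% utilization cap are hypothetical stand-ins, not the program's actual inputs:

```python
def flight_rate_capability(task_hours_per_flight, facilities,
                           hours_per_year=8760, utilization_cap=0.85):
    """Maximum annual flight rate a ground-processing resource can support."""
    usable_hours = facilities * hours_per_year * utilization_cap
    return usable_hours / task_hours_per_flight

def facility_utilization(flights_per_year, task_hours_per_flight,
                         facilities, hours_per_year=8760):
    """Fraction of available facility time consumed by a given flight rate."""
    return (flights_per_year * task_hours_per_flight
            / (facilities * hours_per_year))
```

    The real model evaluates many such resources against a scenario-dependent delivery schedule rather than a single bottleneck.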

  13. Program MASTERCALC: an interactive computer program for radioanalytical computations. Description and operating instructions

    SciTech Connect

    Goode, W.

    1980-10-01

    MASTERCALC is a computer program written to support radioanalytical computations in the Los Alamos Scientific Laboratory (LASL) Environmental Surveillance Group. Included in the program are routines for gross alpha and beta, ³H, gross gamma, ⁹⁰Sr, and alpha spectroscopic determinations. A description of MASTERCALC is presented and its source listing is included. Operating instructions and example computing sessions are given for each type of analysis.

  14. Mathemagical Computing: Order of Operations and New Software.

    ERIC Educational Resources Information Center

    Ecker, Michael W.

    1989-01-01

    Describes mathematical problems which occur when using the computer as a calculator. Considers errors in BASIC calculation and the order of mathematical operations. Identifies errors in spreadsheet and calculator programs. Comments on sorting programs and provides a source for Mathemagical Black Holes. (MVL)

  15. Computer Pure-Tone and Operator Stress: Report III.

    ERIC Educational Resources Information Center

    Dow, Caroline; Covert, Douglas C.

    Pure-tone sound at 15,750 Hertz generated by flyback transformers in many computer and video display terminal (VDT) monitors has stress-related productivity effects in some operators, especially women. College-age women in a controlled experiment simulating half a normal work day showed responses within the first half hour of exposure to a tone…

  16. On evaluating parallel computer systems

    NASA Technical Reports Server (NTRS)

    Adams, George B., III; Brown, Robert L.; Denning, Peter J.

    1985-01-01

    A workshop was held in an attempt to program real problems on the MIT Static Data Flow Machine. Most of the architecture of the machine was specified but some parts were incomplete. The main purpose of the workshop was to explore principles for the evaluation of computer systems employing new architectures. The principles explored were: (1) evaluation must be an integral, ongoing part of a project to develop a computer of radically new architecture; (2) the evaluation should seek to measure the usability of the system as well as its performance; (3) users from the application domains must be an integral part of the evaluation process; and (4) evaluation results should be fed back into the design process. The workshop concluded that these general organizational principles are achievable in practice.

  17. Thermal Hydraulic Computer Code System.

    1999-07-16

    Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

  18. Distributed Storage Systems for Data Intensive Computing

    SciTech Connect

    Vazhkudai, Sudharshan S; Butt, Ali R; Ma, Xiaosong

    2012-01-01

    In this chapter, the authors present an overview of the utility of distributed storage systems in supporting modern applications that are increasingly becoming data intensive. Their coverage of distributed storage systems is based on the requirements imposed by data intensive computing and not a mere summary of storage systems. To this end, they delve into several aspects of supporting data-intensive analysis, such as data staging, offloading, checkpointing, and end-user access to terabytes of data, and illustrate the use of novel techniques and methodologies for realizing distributed storage systems therein. The data deluge from scientific experiments, observations, and simulations is affecting all of the aforementioned day-to-day operations in data-intensive computing. Modern distributed storage systems employ techniques that can help improve application performance, alleviate I/O bandwidth bottleneck, mask failures, and improve data availability. They present key guiding principles involved in the construction of such storage systems, associated tradeoffs, design, and architecture, all with an eye toward addressing challenges of data-intensive scientific applications. They highlight the concepts involved using several case studies of state-of-the-art storage systems that are currently available in the data-intensive computing landscape.

  19. Operation plan for the data 100/LARS terminal system

    NASA Technical Reports Server (NTRS)

    Bowen, A. J., Jr.

    1980-01-01

    The Data 100/LARS terminal system provides an interface for processing on the IBM 3031 computer system at Purdue University's Laboratory for Applications of Remote Sensing. The environment in which the system is operated and supported is discussed. The general support responsibilities, procedural mechanisms, and training established for the benefit of the system users are defined.

  20. Man-Computer Interactive Data Access System (McIDAS). Continued development of McIDAS and operation in the GARP Atlantic tropical experiment

    NASA Technical Reports Server (NTRS)

    Suomi, V. E.

    1975-01-01

    The complete output of the Synchronous Meteorological Satellite was recorded on one inch magnetic tape. A quality control subsystem tests cloud track vectors against four sets of criteria: (1) rejection if the best match occurs on a correlation boundary; (2) rejection if the major correlation peak is not distinct and significantly greater than the secondary peak; (3) rejection if the correlation is not persistent; and (4) rejection if the acceleration is too great. A cloud height program determines cloud optical thickness from visible data and computes infrared emissivity. From the infrared data and a temperature profile, cloud height is determined. A functional description and electronic schematics of equipment are given.
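    The four rejection criteria form a simple sequential filter. A sketch follows, with field names and thresholds that are illustrative stand-ins for the actual McIDAS values:

```python
def accept_vector(match, max_accel=10.0, peak_ratio=1.2):
    """Apply the four quality-control criteria to one candidate
    cloud-track vector (thresholds here are hypothetical)."""
    if match["on_boundary"]:                    # (1) best match on correlation boundary
        return False
    if match["peak"] < peak_ratio * match["secondary_peak"]:
        return False                            # (2) peak not distinct enough
    if not match["persistent"]:                 # (3) correlation not persistent
        return False
    if match["acceleration"] > max_accel:       # (4) acceleration too great
        return False
    return True
```
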

  1. A universal computer control system for motors

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high speed data link, is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors with all processors being connected to a common bus, and including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits and the sensed analog signals are converted to digital signals for storage at the controller cards, where such signals can be read during an address read/write cycle of the command processor.

  2. PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM

    NASA Technical Reports Server (NTRS)

    Roberts, F. E.

    1994-01-01

    The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to set up, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the

  3. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
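    The G-matrix computation mentioned above belongs to the family of convolution (normalization-constant) algorithms for closed queueing networks. A minimal single-workload sketch of that technique, not the SR-52 program itself, is:

```python
def convolution_G(demands, N):
    """Buzen-style convolution: normalization constants G(0..N) for a
    closed, single-class network of load-independent servers.
    demands[m] is the total service demand D_m at device m."""
    g = [1.0] + [0.0] * N
    for D in demands:
        for n in range(1, N + 1):
            g[n] += D * g[n - 1]   # fold device m into the running constants
    return g

def throughput(demands, N):
    """System throughput with N customers: X(N) = G(N-1) / G(N)."""
    g = convolution_G(demands, N)
    return g[N - 1] / g[N]
```

    Device utilizations then follow as U_m = D_m * X(N), which is how such models yield per-device results from a single workload description.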

  4. Advanced Space Surface Systems Operations

    NASA Technical Reports Server (NTRS)

    Huffaker, Zachary Lynn; Mueller, Robert P.

    2014-01-01

    The importance of advanced surface systems is becoming increasingly relevant in the modern age of space technology. Specifically, projects pursued by the Granular Mechanics and Regolith Operations (GMRO) Lab are unparalleled in the field of planetary resourcefulness. This internship opportunity involved projects that support properly utilizing natural resources from other celestial bodies. Beginning with the tele-robotic workstation, mechanical upgrades to specific portions of the workstation consoles were considered and successfully designed in concept. This would provide more means for innovation and creativity concerning advanced robotic operations. Project RASSOR is a regolith excavator robot whose primary objective is to mine, store, and dump regolith efficiently on other planetary surfaces. Mechanical adjustments were made to improve this robot's functionality, although there were some minor system changes left to perform before the opportunity ended. On the topic of excavator robots, the notes taken by the GMRO staff during the 2013 and 2014 Robotic Mining Competitions were effectively organized and analyzed for logistical purposes. Lessons learned from these annual competitions at Kennedy Space Center are greatly influential to the GMRO engineers and roboticists. Another project that GMRO staff support is Project Morpheus. Support for this project included successfully producing mathematical models of the eroded landing pad surface for the vertical testbed vehicle to predict a timeline for pad repair. Finally, the last project this opportunity contributed to was Project Neo, a project outside the GMRO Lab, which focuses on rocket propulsion systems. Additions were successfully installed to the support structure of an original vertical testbed rocket engine, thus making progress towards future test firings in which data will be analyzed by students affiliated with Rocket University. Each project will be explained in

  5. Effectiveness evaluation of STOL transport operations (phase 2). [computer simulation program of commercial short haul aircraft operations

    NASA Technical Reports Server (NTRS)

    Welp, D. W.; Brown, R. A.; Ullman, D. G.; Kuhner, M. B.

    1974-01-01

    A computer simulation program which models a commercial short-haul aircraft operating in the civil air system was developed. The purpose of the program is to evaluate the effect of a given aircraft avionics capability on the ability of the aircraft to perform on-time carrier operations. The program outputs consist primarily of those quantities which can be used to determine direct operating costs. These include: (1) schedule reliability or delays, (2) repairs/replacements, (3) fuel consumption, and (4) cancellations. More comprehensive models of the terminal area environment were added and a simulation of an existing airline operation was conducted to obtain a form of model verification. The capability of the program to provide comparative results (sensitivity analysis) was then demonstrated by modifying the aircraft avionics capability for additional computer simulations.

  6. Modeling Power System Operation with Intermittent Resources

    SciTech Connect

    Marinovici, Maria C.; Kirkham, Harold; Glass, Kevin A.; Carlsen, Leif C.

    2013-02-27

    Electricity generating companies and power system operators face the need to minimize total fuel cost or maximize total profit over a given time period. These issues become optimization problems subject to a large number of constraints that must be satisfied simultaneously. Grid updates due to smart-grid technologies, plus the penetration of intermittent resources in the electrical grid, introduce additional complexity to the optimization problem. The Renewable Integration Model (RIM) is a computer model of an interconnected power system. It is intended to provide insight and advice on complex power system management, as well as answers to questions about the integration of renewable energy. This paper describes RIM's basic design concept, solution method, and the initial suite of modules that it supports.

  7. Computer aided detection system for clustered microcalcifications

    PubMed Central

    Ge, Jun; Hadjiiski, Lubomir M.; Sahiner, Berkman; Wei, Jun; Helvie, Mark A.; Zhou, Chuan; Chan, Heang-Ping

    2009-01-01

    We have developed a computer-aided detection (CAD) system to detect clustered microcalcifications automatically on full-field digital mammograms (FFDMs) and a CAD system for screen-film mammograms (SFMs). The two systems used the same computer vision algorithms but their false positive (FP) classifiers were trained separately with sample images of each modality. In this study, we compared the performance of the CAD systems for detection of clustered microcalcifications on pairs of FFDM and SFM obtained from the same patient. For case-based performance evaluation, the FFDM CAD system achieved detection sensitivities of 70%, 80%, and 90% at an average FP cluster rate of 0.07, 0.16, and 0.63 per image, compared with an average FP cluster rate of 0.15, 0.38, and 2.02 per image for the SFM CAD system. The difference was statistically significant with the alternative free-response receiver operating characteristic (AFROC) analysis. When evaluated on data sets negative for microcalcification clusters, the average FP cluster rates of the FFDM CAD system were 0.04, 0.11, and 0.33 per image at detection sensitivity levels of 70%, 80%, and 90%, compared with an average FP cluster rate of 0.08, 0.14, and 0.50 per image for the SFM CAD system. When evaluated for malignant cases only, the difference of the performance of the two CAD systems was not statistically significant with AFROC analysis. PMID:17264365

  8. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    NASA Technical Reports Server (NTRS)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  9. Embedded systems for supporting computer accessibility.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized assistive technology (AT) software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops, and so on) commonly used by a person with a disability. In this paper, we investigate a way of using these AT tools to access many different devices that lack assistive preferences. The solution takes advantage of open source hardware, and its core component is an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, and, after processing, generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol. PMID:26294501
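    For context on the USB HID output stage described above: a boot-protocol keyboard report is 8 bytes, a modifier bitmap, a reserved byte, and up to six key usage codes. A minimal sketch follows; the helper names are illustrative, while the byte layout and usage IDs follow the HID specification:

```python
# Usage IDs for letters run from 'a' = 0x04 through 'z' = 0x1D.
HID_USAGE = {c: 0x04 + i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

LEFT_SHIFT = 0x02   # bit 1 of the modifier byte

def key_report(char, shift=False):
    """Build the 8-byte key-press report for a single letter."""
    report = bytearray(8)             # [modifiers, reserved, key1..key6]
    report[0] = LEFT_SHIFT if shift else 0x00
    report[2] = HID_USAGE[char.lower()]
    return bytes(report)

RELEASE_REPORT = bytes(8)             # all zeros: release every key
```

    An embedded device emitting such reports over USB appears to the target machine as an ordinary keyboard, which is why no driver installation is needed.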

  10. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
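    The "essentially unpredictable answers" idea can be caricatured in a few lines: given the history, choose the candidate question whose empirical answer probability over a training set is closest to 0.5. This toy models each image as a question-to-answer dict; it is an assumption-laden sketch, not the authors' query engine:

```python
def next_question(candidates, history, training_set):
    """Pick the question least predictable given the answered history.

    training_set: list of annotated images, each a dict question -> bool.
    history: list of (question, answer) pairs already asked and answered.
    """
    consistent = [img for img in training_set
                  if all(img.get(q) == a for q, a in history)]

    def unpredictability(q):
        if not consistent:
            return 0.0
        p = sum(img.get(q, False) for img in consistent) / len(consistent)
        return -abs(p - 0.5)    # probability near 0.5 = least predictable
    return max(candidates, key=unpredictability)
```
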

  11. Checkpoint triggering in a computer system

    DOEpatents

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
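    A schematic of the claimed flow might look like the sketch below. All names are illustrative, and the fixed threshold argument simplifies the patent's metric-derived threshold:

```python
def maybe_checkpoint(task_state, read_monitor, threshold,
                     now, last_read, read_interval):
    """Periodically read a task metric; when it crosses the threshold,
    return a checkpoint of the task state for a later restart."""
    if now - last_read < read_interval:      # not yet time to read the monitor
        return None
    value = read_monitor()                   # read the metric's current value
    if value >= threshold:                   # threshold crossed: checkpoint
        return {"task_state": dict(task_state), "metric": value}
    return None
```
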

  12. Test, Control and Monitor System (TCMS) operations plan

    NASA Technical Reports Server (NTRS)

    Macfarlane, C. K.; Conroy, M. P.

    1993-01-01

    The purpose is to provide a clear understanding of the Test, Control and Monitor System (TCMS) operating environment and to describe the method of operations for TCMS. TCMS is a complex and sophisticated checkout system focused on support of the Space Station Freedom Program (SSFP) and related activities. An understanding of the TCMS operating environment is provided and operational responsibilities are defined. NASA and the Payload Ground Operations Contractor (PGOC) will use it as a guide to manage the operation of the TCMS computer systems and associated networks and workstations. All TCMS operational functions are examined. Other plans and detailed operating procedures relating to an individual operational function are referenced within this plan. This plan augments existing Technical Support Management Directives (TSMD's), Standard Practices, and other management documentation which will be followed where applicable.

  13. Non-developmental item computer systems and the malicious software threat

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L.

    1991-01-01

    The following subject areas are covered: a DOD development system, the Army Secure Operating System; non-developmental commercial computer systems; security, integrity, and assurance of service (SI and A); post-delivery SI and A and malicious software; computer system unique attributes; positive feedback to commercial computer system vendors; and NDI (Non-Developmental Item) computers and software safety.

  14. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  15. Operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The major operational areas of the COSMIC center are described. Quantitative data on the software submittals, program verification, and evaluation are presented. The dissemination activities are summarized. Customer services and marketing activities of the center for the calendar year are described. Those activities devoted to the maintenance and support of selected programs are described. A Customer Information system, the COSMIC Abstract Recording System Project, and the COSMIC Microfiche Project are summarized. Operational cost data are summarized.

  16. Computer auditing of surgical operative reports written in English.

    PubMed

    Lamiell, J M; Wojcik, Z M; Isaacks, J

    1993-01-01

    We developed a script-based scheme for automated auditing of natural language surgical operative reports. Suitable operations (appendectomy and breast biopsy) were selected, then audit criteria and operation scripts conforming with our audit criteria were developed. Our LISP parser was context and expectation sensitive. Parsed sentences were represented by semigraph structures and placed in a textual database to improve efficiency. Sentence ambiguities were resolved by matching the narrative textual database to the script textual database and employing the Uniform Medical Language System (UMLS) Knowledge Sources. All audit criteria questions were successfully answered for typical operative reports by matching parsed audit questions to the textual database. PMID:8130475

  17. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  18. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  19. Prognostic Analysis System and Methods of Operation

    NASA Technical Reports Server (NTRS)

    MacKey, Ryan M. E. (Inventor); Sneddon, Robert (Inventor)

    2014-01-01

    A prognostic analysis system and methods of operating the system are provided. In particular, a prognostic analysis system for the analysis of physical system health applicable to mechanical, electrical, chemical and optical systems and methods of operating the system are described herein.

  20. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
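    The per-node selection step described above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the algorithm names and wattages below are invented, and a real system would draw them from measured power consumption characteristics.

```python
# Hypothetical sketch: from the algorithms available for a requested
# collective operation type, pick the one with the lowest modeled power
# cost. Names and wattages are invented for illustration.

# modeled power cost (watts) per algorithm, per collective operation type
POWER_PROFILE = {
    "allreduce": {"ring": 12.0, "binomial_tree": 15.5, "recursive_doubling": 14.0},
    "broadcast": {"binomial_tree": 9.0, "scatter_allgather": 11.0},
}

def select_collective(op_type):
    """Return the algorithm with the lowest modeled power cost."""
    candidates = POWER_PROFILE[op_type]
    return min(candidates, key=candidates.get)

print(select_collective("allreduce"))   # -> ring
print(select_collective("broadcast"))   # -> binomial_tree
```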

  1. Software Systems for High-performance Quantum Computing

    SciTech Connect

    Humble, Travis S; Britt, Keith A

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems, as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  2. Computer aided system engineering for space construction

    NASA Technical Reports Server (NTRS)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  3. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
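    The split-token mechanism lends itself to a small data-structure sketch. Everything below (the `TokenHead`/`TokenBody` names, the memory layout) is a hypothetical illustration of the core idea that the moving first portion carries the location of the resident second portion; it is not the patented design.

```python
from dataclasses import dataclass

# Illustrative sketch of the split-token idea: a small moving head travels
# from computer to computer, while the bulky body stays resident in one
# computer's memory; the head records where the body lives.

@dataclass
class TokenBody:          # resident second portion (function + data)
    function: str
    data: list

@dataclass
class TokenHead:          # moving first portion
    body_host: int        # which computer holds the body
    body_key: str         # where in that computer's memory it sits

# computer 3's local memory holds the body under an invented key
memories = {3: {"job42": TokenBody("integrate", [1.0, 2.0, 4.0])}}
head = TokenHead(body_host=3, body_key="job42")

# any computer receiving the head can locate and fetch the body on demand
body = memories[head.body_host][head.body_key]
print(body.function)  # -> integrate
```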

  4. Knowledge-based system for computer security

    SciTech Connect

    Hunteman, W.J.

    1988-01-01

    The rapid expansion of computer security information and technology has provided little support for the security officer to identify and implement the safeguards needed to secure a computing system. The Department of Energy Center for Computer Security is developing a knowledge-based computer security system to provide expert knowledge to the security officer. The system is policy-based and incorporates a comprehensive list of system attack scenarios and safeguards that implement the required policy while defending against the attacks. 10 figs.

  5. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.
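    As background for the logical-ring idea, here is a minimal simulated ring allreduce (sum) using the classic reduce-scatter/allgather schedule. It models a single ring of participants in one process; the patent's iterative multi-ring assignment across cores is not reproduced.

```python
def ring_allreduce(vectors):
    """Simulated ring allreduce (sum): reduce-scatter, then allgather.
    vectors: one equal-length list per ring participant; length divisible
    by the number of participants."""
    n = len(vectors)
    c = len(vectors[0]) // n            # chunk size
    data = [v[:] for v in vectors]

    def chunk(p, i):                    # copy of participant p's chunk i
        return data[p][i * c:(i + 1) * c]

    # reduce-scatter: after n-1 rounds, participant p owns the fully
    # summed chunk (p + 1) % n; sends are buffered to model simultaneity
    for s in range(n - 1):
        sends = [((p + 1) % n, (p - s) % n, chunk(p, (p - s) % n))
                 for p in range(n)]
        for dst, i, payload in sends:
            for k in range(c):
                data[dst][i * c + k] += payload[k]

    # allgather: circulate the reduced chunks around the ring
    for s in range(n - 1):
        sends = [((p + 1) % n, (p + 1 - s) % n, chunk(p, (p + 1 - s) % n))
                 for p in range(n)]
        for dst, i, payload in sends:
            data[dst][i * c:(i + 1) * c] = payload
    return data

print(ring_allreduce([[1, 2, 3], [10, 20, 30], [100, 200, 300]]))
# -> [[111, 222, 333], [111, 222, 333], [111, 222, 333]]
```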

  6. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
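    The three phases of this variant (local reduction to a representative core, global allreduce among representatives, local broadcast) can be sketched as an in-process simulation. The node/core layout below is invented for illustration, and the global phase is collapsed to a plain sum rather than an actual logical ring.

```python
# Sketch of the three-phase pattern: (1) each node locally reduces its
# cores' contributions onto one representative, (2) representatives
# allreduce among themselves, (3) each node broadcasts the global result
# back to its cores.

def allreduce_two_level(nodes):
    """nodes: list of lists, one inner list of core contributions per node."""
    # phase 1: local reduction to a representative per node
    reps = [sum(cores) for cores in nodes]
    # phase 2: global allreduce among representatives (here, a plain sum)
    global_result = sum(reps)
    # phase 3: local broadcast of the global result to every core
    return [[global_result] * len(cores) for cores in nodes]

print(allreduce_two_level([[1, 2], [3, 4], [5, 6]]))  # every core sees 21
```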

  7. European Flood Awareness System - now operational

    NASA Astrophysics Data System (ADS)

    Alionte Eklund, Cristina.; Hazlinger, Michal; Sprokkereef, Eric; Garcia Padilla, Mercedes; Garcia, Rafael J.; Thielen, Jutta; Salamon, Peter; Pappenberger, Florian

    2013-04-01

    • EFAS Computational centre (European Centre for Medium-Range Weather Forecasts) will be running the forecasts, post-processing, and operating the EFAS-Information System platform.
    • EFAS Dissemination centres (Swedish Meteorological and Hydrological Institute, Slovak Hydrometeorological Institute, and Rijkswaterstaat Waterdienst, the Netherlands) analyse the results on a daily basis, assess the situation, and disseminate information to the EFAS partners.

    The European Commission is responsible for contract management. The Joint Research Centre further provides support for EFAS through research and development. Aims of operational EFAS:

    • provide added-value early flood forecasting products to hydrological services
    • provide unique overview products of ongoing and forecast floods in Europe more than 3 days in advance
    • create a European network of operational hydrological services

  8. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  9. Parallelizing Sylvester-like operations on a distributed memory computer

    SciTech Connect

    Hu, D.Y.; Sorensen, D.C.

    1994-12-31

    Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^{mn}; B, D ∈ R^{n×n}; A ∈ R^{m×m}; and V ∈ R^{mn×mn}; both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric. All the results in this paper can be easily extended to the case of general A and B. The linear operator on R^{mn} defined above can be viewed as a generalization of the Sylvester operator: S(x) = (I_m ⊗ A + B ⊗ I_n)x. The authors therefore refer to it as a Sylvester-like operator, and the schemes discussed in this paper also apply to the Sylvester operator. In this paper, the authors present the SIMD scheme for parallelization of the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
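    The Kronecker identity underlying such operators can be checked numerically. The sketch below uses one standard, dimensionally consistent convention (column-major vec, so (I_n ⊗ A + B ⊗ I_m) vec(X) = vec(A X + X B^T) for X ∈ R^{m×n}); the paper's exact subscript placement may differ.

```python
# Pure-Python check of the Kronecker identity behind Sylvester-like
# operators, on a tiny 2x2 example.

def kron(P, Q):
    """Kronecker product of two dense matrices (lists of rows)."""
    qr, qc = len(Q), len(Q[0])
    return [[P[i][j] * Q[k][l] for j in range(len(P[0])) for l in range(qc)]
            for i in range(len(P)) for k in range(qr)]

def eye(n): return [[float(i == j) for j in range(n)] for i in range(n)]
def madd(P, Q): return [[a + b for a, b in zip(r, s)] for r, s in zip(P, Q)]
def mmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]
def matvec(M, v): return [sum(r[j] * v[j] for j in range(len(v))) for r in M]
def vec(X):  # column-major stacking of an m x n matrix
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

A = [[1.0, 2.0], [3.0, 4.0]]          # m x m
B = [[0.0, 1.0], [1.0, 0.0]]          # n x n (symmetric, as assumed)
X = [[1.0, 0.0], [0.0, 1.0]]          # m x n
m, n = 2, 2

# left side: (I_n ⊗ A + B ⊗ I_m) vec(X)
lhs = matvec(madd(kron(eye(n), A), kron(B, eye(m))), vec(X))
# right side: vec(A X + X B^T)
Bt = [[B[j][i] for j in range(n)] for i in range(n)]
rhs = vec(madd(mmul(A, X), mmul(X, Bt)))
print(lhs == rhs)  # -> True
```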

  10. Design and implementation of a UNIX based distributed computing system

    SciTech Connect

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
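    The master scheduler's machine-selection step might look like the following sketch. All fields and numbers (speed ratings, working hours, load limits, host names) are invented for illustration; they are not Atlas Wireline's actual parameters.

```python
from dataclasses import dataclass

# Hypothetical sketch: each machine carries a speed rating, an allowed
# working window, and a load limit; a job goes to the fastest machine
# that is eligible right now.

@dataclass
class Machine:
    name: str
    speed_rating: float      # relative CPU speed
    start_hour: int          # allowed working window (inclusive start)
    end_hour: int            # exclusive end
    load: int                # jobs currently running
    max_load: int

def pick_machine(machines, hour):
    eligible = [m for m in machines
                if m.start_hour <= hour < m.end_hour and m.load < m.max_load]
    return max(eligible, key=lambda m: m.speed_rating, default=None)

farm = [
    Machine("rs6000-1", 1.0, 0, 24, 2, 4),
    Machine("alpha-1", 2.5, 18, 24, 0, 2),   # only free outside office hours
    Machine("sparc-1", 1.4, 0, 24, 4, 4),    # currently fully loaded
]
print(pick_machine(farm, hour=20).name)  # alpha-1: fastest eligible at 20:00
print(pick_machine(farm, hour=10).name)  # rs6000-1: alpha off-hours, sparc full
```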

  11. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  12. The computer emergency response team system (CERT-System)

    SciTech Connect

    Schultz, E.E.

    1991-10-11

    This paper describes CERT-System, an international affiliation of computer security response teams. Formed after the WANK and OILZ worms attacked numerous systems connected to the Internet, an operational charter was signed by representatives of 11 response teams. This affiliation's purpose is to provide a forum for ideas about incident response and computer security, share information, solve common problems, and develop strategies for responding to threats, incidents, etc. The achievements and advantages of participation in CERT-System are presented along with suggested growth areas for this affiliation. The views presented in this paper are the views of one member, and do not necessarily represent the views of others affiliated with CERT-System.

  14. The computer-aided facial reconstruction system.

    PubMed

    Miyasaka, S; Yoshino, M; Imaizumi, K; Seta, S

    1995-06-30

    A computer imaging system was introduced into the facial reconstruction process. The system, which consists of the image processing unit for skull morphometry and the image editing unit for compositing facial components on the skull images, was an original construction. The image processor generates the framework for building a face onto the digitized skull image. For reconstructing a facial image on the framework, several possible data sets of facial components suitable for the skull morphology are selected from the database by operating our original application software. The most suitable cutout samples of facial components are pasted up over the framework in accordance with the anatomical criteria. The database of facial components consists of 24 contours, 18 eyes, 9 eyebrows, 27 noses, 9 lips and 16 hairstyles. After provisional reconstruction, the facial image is retouched by correcting skin colors and shades with an 'electronic painting device'. The resulting image is a great improvement on images made by the conventional clay and drawing method, both in the operational aspect and in the flexibility of creating multiple versions. The present system facilitates a rather objective and rapid approach and allows us easily to generate a range of possible faces. The computer-aided facial reconstruction will lead to an increase in chances of positive identification in practical cases.

  15. Software For Monitoring VAX Computer Systems

    NASA Technical Reports Server (NTRS)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  16. Using Expert Systems For Computational Tasks

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Regenie, Victoria A.; Brazee, Marylouise; Brumbaugh, Randal W.

    1990-01-01

    Transformation technique enables inefficient expert systems to run in real time. Paper suggests use of knowledge compiler to transform knowledge base and inference mechanism of expert-system computer program into conventional computer program. Main benefits: faster execution and reduced processing demands. In avionic systems, transformation reduces need for special-purpose computers.
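    The knowledge-compiler idea can be illustrated with a toy rule base translated once into a plain function of chained conditionals, trading the general inference engine for fast conventional code. The rules, names, and thresholds below are invented for the sketch; a real avionics knowledge base would be far richer.

```python
# Toy "knowledge compilation": declarative rules are translated once into
# ordinary Python code, so run-time evaluation needs no inference engine.

RULES = [
    ("altitude < 500 and descent_rate > 20", "PULL_UP"),
    ("fuel < 10", "LAND_SOON"),
]

def compile_rules(rules):
    """Generate and compile a conventional function from the rule base."""
    lines = ["def advise(altitude, descent_rate, fuel):"]
    for cond, action in rules:
        lines.append(f"    if {cond}: return '{action}'")
    lines.append("    return 'OK'")
    namespace = {}
    exec("\n".join(lines), namespace)   # one-time compilation step
    return namespace["advise"]

advise = compile_rules(RULES)
print(advise(altitude=400, descent_rate=30, fuel=50))   # -> PULL_UP
print(advise(altitude=5000, descent_rate=5, fuel=50))   # -> OK
```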

  17. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  18. Operator's guide to eliminating bias in CEM systems

    SciTech Connect

    Jahnke, J.A.

    1994-11-01

    The inclusion of the t-test for bias in the Acid Rain Regulations, 40 CFR Part 75, signaled a marked improvement in the capability to detect a significant source of measurement error that had previously remained hidden. The capability to detect bias left environmental technicians and instrument operators with the often daunting job of, first, diagnosing the cause of the measurement bias, and, then, taking steps to correct it. This publication is intended to make that job easier. A pull-out chart, entitled Eliminating Bias in CEMS -- A Checklist, provides a comprehensive listing of the monitoring system problems that can cause systematic error. A brief description and potential corrective actions are shown for each problem. Finally, the Checklist directs users to the appropriate pages in the accompanying Operator's Guide, where fuller descriptions of problems and remedies can be found. The accompanying Operator's Guide to Eliminating Bias in Monitoring Systems is organized into eight chapters. The problem areas covered are: Probe Location and Stratification (Chapter 2), Extractive Sampling Systems (Chapter 3), In-Situ Gas Monitoring Systems and Opacity Monitors (Chapter 4), Flow Monitors (Chapter 5), Gas Analyzers (Chapter 6), and Data Acquisition and Handling Systems (Chapter 7). Chapter 8, the last chapter in the Operator's Guide, discusses elements that should be incorporated into ongoing Quality Assurance Programs to detect and prevent the problems that produce systematic error in monitor measurements.
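    The statistic behind such a bias check is a paired t-test on monitor-minus-reference differences. The sketch below computes a generic t statistic; the exact critical values, adjustment factors, and acceptance criteria of 40 CFR Part 75 are not reproduced here, and the readings are invented.

```python
from statistics import mean, stdev
from math import sqrt

# Generic paired t-statistic for monitor bias: differences
# d_i = CEM_i - reference_i, then t = |mean(d)| / (stdev(d) / sqrt(n)).
# A value well above typical critical values (roughly 2 for small n at
# 95% confidence) suggests systematic bias worth diagnosing.

def bias_t_statistic(cem, ref):
    d = [c - r for c, r in zip(cem, ref)]
    n = len(d)
    return abs(mean(d)) / (stdev(d) / sqrt(n))

cem = [102.1, 99.8, 101.5, 103.0, 100.9, 102.4]   # invented monitor readings
ref = [100.0, 99.5, 100.2, 101.1, 100.0, 100.8]   # invented reference readings
t = bias_t_statistic(cem, ref)
print(round(t, 2))
```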

  19. New Human-Computer Interface Concepts for Mission Operations

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey A.; Hoxie, Mary Sue; Gillen, Dave; Parkinson, Christopher; Breed, Julie; Nickens, Stephanie; Baitinger, Mick

    2000-01-01

    The current climate of budget cuts has forced the space mission operations community to reconsider how it does business. Gone are the days of building one-of-a-kind control centers with teams of controllers working in shifts 24 hours per day, 7 days per week. Increasingly, automation is used to significantly reduce staffing needs. In some cases, missions are moving towards lights-out operations where the ground system is run semi-autonomously. On-call operators are brought in only to resolve anomalies. Some operations concepts also call for smaller operations teams to manage an entire family of spacecraft. In the not too distant future, a skeleton crew of full-time general knowledge operators will oversee the operations of large constellations of small spacecraft, while geographically distributed specialists will be assigned to emergency response teams based on their expertise. As the operations paradigms change, so too must the tools that support the mission operations team's tasks. Tools need to be built not only to automate routine tasks, but also to communicate varying types of information to the part-time, generalist, or on-call operators and specialists more effectively. Thus, the proper design of a system's user-system interface (USI) becomes even more important than before. Also, because the users will be accessing these systems from various locations (e.g., control center, home, on the road) via different devices with varying display capabilities (e.g., workstations, home PCs, PDAs, pagers) over connections with various bandwidths (e.g., dial-up 56k, wireless 9.6k), the same software must have different USIs to support the different types of users, their equipment, and their environments. In other words, the software must now adapt to the needs of the users! This paper will focus on the needs and the challenges of designing USIs for mission operations. After providing a general discussion of these challenges, the paper will focus on the current efforts of

  20. System optimization of gasdynamic lasers, computer program user's manual

    NASA Technical Reports Server (NTRS)

    Otten, L. J., III; Saunders, R. C., III; Morris, S. J.

    1978-01-01

    The user's manual for a computer program that performs system optimization of gasdynamic lasers is provided. Detailed input/output formats are described for CDC 7600/6600 computers using a dialect of FORTRAN. Sample input/output data are provided to verify correct program operation, along with a program listing.

  1. Satellite freeze forecast system. Operating/troubleshooting manual

    NASA Technical Reports Server (NTRS)

    Martsolf, J. D. (Principal Investigator)

    1983-01-01

    Examples of operational procedures are given to assist users of the Satellite Freeze Forecast System (SFFS) in logging on to the computer, executing the programs in the menu, logging off the computer, and setting up the automatic system. Directions are also given for displaying, acquiring, and listing satellite maps; for communicating via terminal and monitor displays; and for what to do when the SFFS doesn't work. Administrative procedures are included.

  2. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  3. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
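    The time-and-software-redundancy idea can be illustrated with sorting: the primary run emits the permutation it applied as a "trail", and a cheap second pass verifies the output using that trail instead of re-sorting. This toy is a sketch of the general certification-trail approach, not the exact construction developed in the research.

```python
# Certification-trail-style check: the primary computation produces a
# result plus a trail; a lightweight checker uses the trail to validate
# the result, so a transient fault in either run is caught by mismatch.

def sort_with_trail(xs):
    trail = sorted(range(len(xs)), key=lambda i: xs[i])  # applied permutation
    return [xs[i] for i in trail], trail

def check_with_trail(xs, result, trail):
    # the trail must be a permutation of the indices, and applying it
    # must yield a non-decreasing sequence equal to the claimed result
    if sorted(trail) != list(range(len(xs))):
        return False
    applied = [xs[i] for i in trail]
    return applied == result and all(a <= b for a, b in zip(applied, applied[1:]))

data = [5, 1, 4, 2]
out, trail = sort_with_trail(data)
print(out, check_with_trail(data, out, trail))  # -> [1, 2, 4, 5] True
```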

  4. Operation Safety Activities for JEM System and Payload Operation

    NASA Astrophysics Data System (ADS)

    Takada, Satomi; Iwata, Yoshihiro; Kato, Mitsuyasu

    2010-09-01

    The Japanese Experiment Module (JEM), "KIBO", which is a part of the International Space Station (ISS), is the first Japanese manned space experimental facility. The JEM system and payloads have now entered the era of operations. The JAXA Human Space S&MA (JAXA S&MA) assures the safety of the JEM module and JAXA payloads not only during the assembly phase but also during the operations phase. During safety-critical operations for the JEM system and payloads, JAXA S&MA is on the ESR S&MA console to monitor the operations related to safety. A safety checklist is made for each safety-critical task to identify useful information such as hazard controls, operational constraints, flight rules, and so on. It is a support tool for JAXA S&MA to monitor the operation overall. JAXA S&MA has the responsibility of assessing safety-related updates or changes to operational documents. JAXA S&MA will continue to support JEM operations as long as they continue.

  5. Electronic Medical Business Operations System

    SciTech Connect

    Cannon, D. T.; Metcalf, J. R.; North, M. P.; Richardson, T. L.; Underwood, S. A.; Shelton, P. M.; Ray, W. B.; Morrell, M. L.; Caldwell, III, D. C.

    2012-04-16

    Electronic management of medical records has taken a back seat both in private industry and in the government. Record volumes continue to rise every day, and management of these paper records is inefficient and very expensive. In 2005, the White House announced support for the development of electronic medical records across the federal government. In 2006, the DOE issued 10 CFR 851, requiring all medical records to be electronically available by 2015. The Y-12 National Security Complex is currently investing funds to develop a comprehensive EMR to incorporate the requirements of an occupational health facility, which are common across the Nuclear Weapons Complex (NWC). Scheduling, workflow, and data capture from medical surveillance, certification, and qualification examinations are core pieces of the system. The Electronic Medical Business Operations System (EMBOS) will provide a comprehensive health tool solution to 10 CFR 851 for Y-12 and can be leveraged to the NWC; all sites in the NWC must meet the requirements of 10 CFR 851, which states that all medical records must be electronically available by 2015. There is also potential to leverage EMBOS to the private sector. EMBOS is being developed and deployed in phases. When fully deployed, EMBOS will be a state-of-the-art web-enabled integrated electronic solution providing a complete electronic medical record (EMR). EMBOS has been deployed and provides a dynamic electronic medical history and surveillance program (e.g., Asbestos, Hearing Conservation, and Respirator Wearer) questionnaire. Table 1 below lists EMBOS capabilities and data to be tracked. Data to be tracked: Patient Demographics – Current/Historical; Physical Examination Data; Employee Medical Health History; Medical Surveillance Programs; Patient and Provider Schedules; Medical Qualification/Certifications; Laboratory Data; Standardized Abnormal Lab Notifications; Prescription Medication Tracking and Dispensing; Allergies

  6. Electronic Medical Business Operations System

    2012-04-16

    Electronic management of medical records has taken a back seat both in private industry and in government. Record volumes continue to rise every day, and management of these paper records is inefficient and very expensive. In 2005, the White House announced support for the development of electronic medical records across the federal government. In 2006, the DOE issued 10 CFR 851, requiring that all medical records be electronically available by 2015. The Y-12 National Security Complex is currently investing funds to develop a comprehensive EMR that incorporates the requirements of an occupational health facility, which are common across the Nuclear Weapons Complex (NWC). Scheduling, workflow, and data capture from medical surveillance, certification, and qualification examinations are core pieces of the system. The Electronic Medical Business Operations System (EMBOS) will provide a comprehensive health tool solution to 10 CFR 851 for Y-12 and can be leveraged across the NWC; all sites in the NWC must meet the requirements of 10 CFR 851, which states that all medical records must be electronically available by 2015. There is also potential to leverage EMBOS to the private sector. EMBOS is being developed and deployed in phases. When fully deployed, EMBOS will be a state-of-the-art, web-enabled, integrated electronic solution providing a complete electronic medical record (EMR). EMBOS has been deployed and provides a dynamic electronic medical history and surveillance program (e.g., Asbestos, Hearing Conservation, and Respirator Wearer) questionnaire. Table 1 below lists EMBOS capabilities and data to be tracked. Data to be tracked: Patient Demographics – Current/Historical; Physical Examination Data; Employee Medical Health History; Medical Surveillance Programs; Patient and Provider Schedules; Medical Qualification/Certifications; Laboratory Data; Standardized Abnormal Lab Notifications; Prescription Medication Tracking and Dispensing; Allergies

  7. System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.

    ERIC Educational Resources Information Center

    Keeler, F. Laurence

    This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer-aided authoring system AUTHOR, provides an outline of the system's design architecture and an overview including the component…

  8. Software simulator for multiple computer simulation system

    NASA Technical Reports Server (NTRS)

    Ogrady, E. P.

    1983-01-01

    A description is given of the structure and use of a computer program that simulates the operation of a parallel processor simulation system. The program is part of an investigation to determine algorithms that are suitable for simulating continuous systems on a parallel processor configuration. The simulator is designed to accurately simulate the problem-solving phase of a simulation study. Care has been taken to ensure the integrity and correctness of data exchanges and to correctly sequence periods of computation and periods of data exchange. It is pointed out that the functions performed during a problem-setup phase or a reset phase are not simulated. In particular, there is no attempt to simulate the downloading process that loads object code into the local, transfer, and mapping memories of processing elements or the memories of the run control processor and the system control processor. The main program of the simulator carries out some problem-setup functions of the system control processor in that it requests the user to enter values for simulation system parameters and problem parameters. The method by which these values are transferred to the other processors, however, is not simulated.
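The alternation of computation periods and synchronized data-exchange periods that the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not O'Grady's simulator: the processing-element states, the all-to-all exchange, and the logging are invented for the example.

```python
# Minimal sketch of compute/exchange sequencing for a set of processing
# elements (PEs). Each frame: every PE computes on its local state, then all
# PEs synchronize and exchange data before the next computation period begins.

def run_simulation(num_pe, steps):
    """Simulate `num_pe` processing elements for `steps` frames."""
    state = [float(i) for i in range(num_pe)]  # local state per PE
    log = []
    for t in range(steps):
        # Computation period: each PE advances its local state independently.
        state = [s + 1.0 for s in state]
        # Data-exchange period: all PEs synchronize before exchanging, which
        # guarantees the integrity and correctness of the exchanged values.
        shared = sum(state) / num_pe
        state = [shared for _ in state]
        log.append(("exchange", t, shared))
    return state, log

final, log = run_simulation(num_pe=4, steps=3)
print(final)  # every PE ends with the same synchronized value
```

Because the exchange is a barrier-synchronized step, no PE can begin frame t+1 with stale data from frame t, which is the sequencing property the real simulator is built to verify.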

  9. Organising a University Computer System: Analytical Notes.

    ERIC Educational Resources Information Center

    Jacquot, J. P.; Finance, J. P.

    1990-01-01

    Thirteen trends in university computer system development are identified, system user requirements are analyzed, critical system qualities are outlined, and three options for organizing a computer system are presented. The three systems include a centralized network, local network, and federation of local networks. (MSE)

  10. On the computational implementation of forward and back-projection operations for cone-beam computed tomography.

    PubMed

    Karimi, Davood; Ward, Rabab

    2016-08-01

    Forward- and back-projection operations are the main computational burden in iterative image reconstruction in computed tomography. In addition, their implementation has to be accurate to ensure stable convergence to a high-quality image. This paper reviews and compares some of the variations in the implementation of these operations in cone-beam computed tomography. We compare four algorithms for computing the system matrix, including a distance-driven algorithm, an algorithm based on cubic basis functions, another based on spherically symmetric basis functions, and a voxel-driven algorithm. The focus of our study is on understanding how the choice of the implementation of the system matrix will influence the performance of iterative image reconstruction algorithms, including such factors as the noise strength and spatial resolution in the reconstructed image. Our experiments with simulated and real cone-beam data reveal the significance of the speed-accuracy trade-off in the implementation of the system matrix. Our results suggest that fast convergence of iterative image reconstruction methods requires accurate implementation of forward- and back-projection operations, involving a direct estimation of the convolution of the footprint of the voxel basis function with the surface of the detectors. The required accuracy decreases by increasing the resolution of the projection measurements beyond the resolution of the reconstructed image. Moreover, reconstruction of low-contrast objects needs more accurate implementation of these operations. Our results also show that, compared with regularized reconstruction methods, the behavior of iterative reconstruction algorithms that do not use a proper regularization is influenced more significantly by the implementation of the forward- and back-projection operations.
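The system-matrix choices the paper compares can be illustrated with a deliberately simple example. The sketch below is not from the paper: it builds a voxel-driven system matrix for a single 1-D parallel-beam view, splitting each voxel's footprint linearly between its two nearest detector bins, and uses the transpose of the same matrix for the matched back-projection. Accurate cone-beam implementations instead integrate the basis-function footprint over the detector surface, which is the accuracy trade-off the authors study.

```python
# Voxel-driven system matrix for one 1-D parallel-beam view (illustrative).
# A[i][j] = weight of voxel j in detector bin i.

def system_matrix(n_voxels, n_bins):
    A = [[0.0] * n_voxels for _ in range(n_bins)]
    for j in range(n_voxels):
        # Map the voxel centre onto the detector coordinate (assumed geometry,
        # detector bins and voxels span the same interval).
        pos = (j + 0.5) * n_bins / n_voxels - 0.5
        i0 = int(pos)
        frac = pos - i0
        if 0 <= i0 < n_bins:
            A[i0][j] += 1.0 - frac      # linear-interpolation footprint
        if 0 <= i0 + 1 < n_bins:
            A[i0 + 1][j] += frac
    return A

def forward(A, x):
    """Forward projection: detector readings from voxel values (A @ x)."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def back(A, y):
    """Matched back-projection: transpose of the same matrix (A^T @ y)."""
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

A = system_matrix(n_voxels=2, n_bins=4)
print(forward(A, [2.0, 4.0]))  # [1.0, 1.0, 2.0, 2.0]
```

Using the exact transpose for back-projection keeps the forward/back pair matched, which matters for the stable convergence of iterative reconstruction that the abstract emphasizes.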

  11. CARMENES instrument control system and operational scheduler

    NASA Astrophysics Data System (ADS)

    Garcia-Piquer, Alvaro; Guàrdia, Josep; Colomé, Josep; Ribas, Ignasi; Gesa, Lluis; Morales, Juan Carlos; Pérez-Calpena, Ana; Seifert, Walter; Quirrenbach, Andreas; Amado, Pedro J.; Caballero, José A.; Reiners, Ansgar

    2014-07-01

    The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1 m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope in the Calar Alto Observatory (Spain) and it will be equipped with two spectrographs covering the range from the visible to the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose peak of the spectral energy distribution falls in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for an efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which is responsible for carrying out the operations of the different subsystems, providing a tool to operate the instrument in an integrated manner from low to high user interaction levels. The ICS interacts with the following subsystems: the near-IR and visible channels, composed of the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES operational scheduler is the second key element in the control layer described in this contribution. It is the main actor in the translation of the survey strategy into a detailed schedule for the achievement of the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target

  12. Reliable timing systems for computer controlled accelerators

    NASA Astrophysics Data System (ADS)

    Knott, Jürgen; Nettleton, Robert

    1986-06-01

    Over the past decade the use of computers has set new standards for control systems of accelerators with ever increasing complexity coupled with stringent reliability criteria. In fact, with very slow cycling machines or storage rings any erratic operation or timing pulse will cause the loss of precious particles and waste hours of time and effort of preparation. Thus, for the CERN linac and LEAR (Low Energy Antiproton Ring) timing system reliability becomes a crucial factor in the sense that all components must operate practically without fault for very long periods compared to the effective machine cycle. This has been achieved by careful selection of components and design well below thermal and electrical limits, using error detection and correction where possible, as well as developing "safe" decoding techniques for serial data trains. Further, consistent structuring had to be applied in order to obtain simple and flexible modular configurations with very few components on critical paths and to minimize the exchange of information to synchronize accelerators. In addition, this structuring allows the development of efficient strategies for on-line and off-line fault diagnostics. As a result, the timing system for Linac 2 has, so far, been operating without fault for three years, the one for LEAR more than one year since its final debugging.

  13. Multiaxis, Lightweight, Computer-Controlled Exercise System

    NASA Technical Reports Server (NTRS)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is paid out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed. The computer can be programmed, either locally or via

  14. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  15. Dynamic Operations Wayfinding System (DOWS) for Nuclear Power Plants

    SciTech Connect

    Boring, Ronald Laurids; Ulrich, Thomas Anthony; Lew, Roger Thomas

    2015-08-01

    A novel software tool is proposed to aid reactor operators in responding to upset plant conditions. The purpose of the Dynamic Operations Wayfinding System (DOWS) is to diagnose faults, prioritize those faults, identify paths to resolve those faults, and deconflict the optimal path for the operator to follow. The objective of DOWS is to take the guesswork out of the best way to combine procedures to resolve compound faults, mitigate low threshold events, or respond to severe accidents. DOWS represents a uniquely flexible and dynamic computer-based procedure system for operators.
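The prioritize-and-deconflict step can be sketched with a priority queue. This is a hypothetical illustration of the idea only, not the DOWS implementation: the fault names, the severity scale, and the rule "one procedure per plant component, highest priority wins" are invented for the example.

```python
# Rank faults by severity and deconflict procedure paths that claim the same
# plant component, keeping only the highest-priority claimant per component.
import heapq

def deconflict(faults):
    """faults: list of (severity, procedure, component); lower severity number
    means more urgent. Returns the ordered action plan."""
    heap = list(faults)
    heapq.heapify(heap)                 # pop in increasing severity order
    claimed, plan = set(), []
    while heap:
        severity, procedure, component = heapq.heappop(heap)
        if component not in claimed:    # deconflict: component already taken?
            claimed.add(component)
            plan.append(procedure)
    return plan

plan = deconflict([
    (2, "isolate leak", "valve-12"),
    (1, "trip pump", "pump-3"),
    (3, "restart pump", "pump-3"),  # conflicts with the higher-priority trip
])
print(plan)  # ['trip pump', 'isolate leak']
```

The real system would additionally diagnose the faults and walk the operator through the selected procedures; this sketch covers only the ordering and conflict-resolution logic.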

  16. Buyer's Guide to Computer Based Instructional Systems.

    ERIC Educational Resources Information Center

    Fratini, Robert C.

    1981-01-01

    Examines the advantages and disadvantages of shared multiterminal computer based instruction (CBI) systems, dedicated multiterminal CBI systems, and stand-alone CBI systems. A series of questions guide consumers in matching a system's capabilities with an organization's needs. (MER)

  17. ANL computer controlled target storage system: Status report

    SciTech Connect

    Klimczak, G.W.; Nardi, B.G.; Travis, D.J.

    1986-01-01

    Design and operation of an isotopic target storage system is described. Because of the cost and effort associated with nuclear target production, it is necessary to protect these targets. The storage system described was designed to protect up to 90 hygroscopic and readily oxidizing targets under a vacuum of 10^-6 torr. The computer controller maintains system integrity during normal use and emergency situations. (JDH)

  18. Computational biology approach to uncover hepatitis C virus helicase operation.

    PubMed

    Flechsig, Holger

    2014-04-01

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings.

  19. Computational biology approach to uncover hepatitis C virus helicase operation

    PubMed Central

    Flechsig, Holger

    2014-01-01

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings. PMID:24707123

  20. Autonomous Operations System: Development and Application

    NASA Technical Reports Server (NTRS)

    Toro Medina, Jaime A.; Wilkins, Kim N.; Walker, Mark; Stahl, Gerald M.

    2016-01-01

    Autonomous control systems provide the ability of self-governance beyond that of conventional control systems. As the complexity of mechanical and electrical systems increases, there is a natural drive toward developing robust control systems to manage complicated operations. By closing the bridge between conventional automated systems and knowledge-based self-awareness systems, nominal control of operations can evolve to rely on safety-critical mitigation processes to support any off-nominal behavior. Current research and development efforts led by the Autonomous Propellant Loading (APL) group at NASA Kennedy Space Center aim to improve cryogenic propellant transfer operations by developing an automated control and health monitoring system. As an integrated system, the center aims to produce an Autonomous Operations System (AOS) capable of integrating health management operations with automated control to produce a fully autonomous system.

  1. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
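The evolve-and-compete loop the abstract describes can be made concrete with a minimal differential evolution sketch (the classic DE/rand/1/bin scheme). This is a generic textbook variant, not the NASA implementation; the parameter names F and CR follow common DE conventions, and the sphere-function objective is just a stand-in for a computational engineering model.

```python
# Minimal differential evolution: mutate by combining three distinct members,
# crossover with the parent, then let trial and parent compete (selection).
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [cost(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: difference of two members scaled by F, added to a third.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover: mix mutant and parent, gene by gene.
            j_rand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            # Selection: the trial replaces the parent only if it is no worse.
            f = cost(trial)
            if f <= fitness[i]:
                pop[i], fitness[i] = trial, f
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Usage: minimize the sphere function; the optimum is at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5.0, 5.0)] * 3)
```

Because each trial competes only against its own parent, the inner loop parallelizes naturally across a population, which is what makes the approach attractive on large cluster computers.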

  2. Electrochemical cell operation and system

    DOEpatents

    Maru, Hansraj C.

    1980-03-11

    Thermal control in fuel cell operation is affected through sensible heat of process gas by providing common input manifolding of the cell gas flow passage in communication with the cell electrolyte and an additional gas flow passage which is isolated from the cell electrolyte and in thermal communication with a heat-generating surface of the cell. Flow level in the cell gas flow passage is selected based on desired output electrical energy and flow level in the additional gas flow passage is selected in accordance with desired cell operating temperature.

  3. Laptop Computer - Based Facial Recognition System Assessment

    SciTech Connect

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting two series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were then available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master operating in

  4. OEPSS operationally efficient propulsion system study

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A final report on the Operationally Efficient Propulsion System Study (OEPSS) is presented. A review of Launch Site Operations, OEPSS objectives, operations support structure, OEPSS Concerns List, and scope of OEPSS are summarized, along with goals of OEPSS technologies, and operations technology levels. Air-augmented ejector/rocket, flash boiling tank pressurization technology, and advanced LH2 turbopump are described. Launch facilities, operations-driven propulsion system architecture, integrated booster propulsion module, turbopump operating conditions, and payload capability using integrated engine elements are addressed among other topics.

  5. Anaesthetists' role in computer keyboard contamination in an operating room.

    PubMed

    Fukada, T; Iwakiri, H; Ozaki, M

    2008-10-01

    To store anaesthetic records in computers, anaesthetists usually input data while still wearing dirty wet gloves. No studies have explored computer contamination in the operating room (OR) or anaesthetists' awareness of the importance of handwashing or hand hygiene. We investigated four components of keyboard contamination: (1) degree of contamination, (2) effect of cleaning with ethyl alcohol, (3) bacterial transmission between gloves and keyboards by tapping keys, and (4) frequency of anaesthetists' performing hand hygiene. Most of the bacteria on keyboards were coagulase-negative staphylococci and Bacillus spp.; however, meticillin-resistant Staphylococcus aureus was also found. Cleaning keyboards with ethyl alcohol effectively reduced bacterial counts. Wet contaminated gloves and keyboards transmitted meticillin-susceptible Staphylococcus epidermidis from one to the other more readily than dry contaminated gloves and keyboards. Only 17% of anaesthetists performed hand hygiene before anaesthesia, although 64% or 69% of anaesthetists performed hand hygiene after anaesthesia or before lunch. To prevent cross-contamination, keyboards should be routinely cleaned according to the manufacturer's instructions and disinfected once daily, or when visibly soiled with blood or secretions. Moreover, anaesthetists should be aware that they could spread microbes that might cause healthcare-associated infection in the OR. Anaesthetists should perform hand hygiene before and after anaesthesia and remove gloves after each procedure and before using the computer. PMID:18701192

  6. Spatial Operator Algebra for multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Jain, A.; Kreutz-Delgado, K.

    1992-01-01

    The Spatial Operator Algebra framework for the dynamics of general multibody systems is described. The use of a spatial operator-based methodology permits the formulation of the dynamical equations of motion of multibody systems in a concise and systematic way. The dynamical equations of progressively more complex rigid multibody systems are developed in an evolutionary manner, beginning with a serial chain system, followed by a tree topology system and, finally, systems with arbitrary closed loops. Operator factorizations and identities are used to develop novel recursive algorithms for the forward dynamics of systems with closed loops. Extensions required to deal with flexible elements are also discussed.

  7. Computerized Operator Support System – Phase II Development

    SciTech Connect

    Ulrich, Thomas A.; Boring, Ronald L.; Lew, Roger T.; Thomas, Kenneth D.

    2015-02-01

    A computerized operator support system (COSS) prototype for nuclear control room process control is proposed and discussed. The COSS aids operators in addressing rapid plant upsets that would otherwise result in the shutdown of the power plant and interrupt electrical power generation, representing significant costs to the owning utility. In its current stage of development the prototype demonstrates four advanced functions operators can use to more efficiently monitor and control the plant. These advanced functions consist of: (1) a synthesized and intuitive high level overview display of system components and interrelations, (2) an enthalpy-based mathematical chemical and volume control system (CVCS) model to detect and diagnose component failures, (3) recommended strategies to mitigate component failure effects and return the plant back to pre-fault status, and (4) computer-based procedures to walk the operator through the recommended mitigation actions. The COSS was demonstrated to a group of operators and their feedback was collected. The operators responded positively to the COSS capabilities and features and indicated the system would be an effective operator aid. The operators also suggested several additional features and capabilities for the next iteration of development. Future versions of the COSS prototype will include additional plant systems, flexible computer-based procedure presentation formats, and support for simultaneous component fault diagnosis and dual fault synergistic mitigation action strategies to more efficiently arrest any plant upsets.

  8. Science Orders Systems and Operations Manual.

    ERIC Educational Resources Information Center

    Kriz, Harry M.

    This manual describes the implementation and operation of SCIENCE ORDERS, an online orders management system used by the Science and Technology Department of Newman Library at Virginia Polytechnic Institute and State University. Operational since January 1985, the system is implemented using the SPIRES database management system and is used to (1)…

  9. The engineering design integration (EDIN) system. [digital computer program complex

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  10. Protected quantum computing: interleaving gate operations with dynamical decoupling sequences.

    PubMed

    Zhang, Jingfu; Souza, Alexandre M; Brandao, Frederico Dias; Suter, Dieter

    2014-02-01

    Implementing precise operations on quantum systems is one of the biggest challenges for building quantum devices in a noisy environment. Dynamical decoupling attenuates the destructive effect of the environmental noise, but so far, it has been used primarily in the context of quantum memories. Here, we experimentally demonstrate a general scheme for combining dynamical decoupling with quantum logical gate operations using the example of an electron-spin qubit of a single nitrogen-vacancy center in diamond. We achieve process fidelities >98% for gate times that are 2 orders of magnitude longer than the unprotected dephasing time T2.

  11. Determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation

    DOEpatents

    Blocksome, Michael A.

    2011-12-20

    Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
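The counting protocol in the claim can be sketched in plain Python. This is a model of the counter logic only, not of the DMA hardware: the assumption that the counter is configured to the number of other participants and counts down to an exit value of zero is one consistent reading of the abstract.

```python
# Barrier-exit logic: each node expects one control packet from every other
# participant; each received packet decrements the counter, and the node may
# exit the barrier once the counter reaches the exit value (0 here).

class BarrierNode:
    def __init__(self, node_id, num_nodes):
        self.node_id = node_id
        # Configure the counter in dependence upon the number of nodes in the set.
        self.counter = num_nodes - 1

    def receive(self, packet):
        # The "DMA engine" modifies the counter for each barrier control packet.
        self.counter -= 1

    def ready_to_exit(self):
        return self.counter == 0  # value matches the exit value

def run_barrier(num_nodes):
    nodes = [BarrierNode(i, num_nodes) for i in range(num_nodes)]
    # On entering the barrier, every node broadcasts a control packet to the others.
    for sender in nodes:
        for receiver in nodes:
            if receiver is not sender:
                receiver.receive(("barrier", sender.node_id))
    return all(n.ready_to_exit() for n in nodes)

print(run_barrier(8))  # True: every node saw 7 packets and may exit
```

Delegating the count to a dedicated engine means no node has to poll its peers: readiness to exit is a purely local test on the counter value.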

  12. Evaluation of computer-based ultrasonic inservice inspection systems

    SciTech Connect

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T.

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  13. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  14. REAL TIME SYSTEM OPERATIONS 2006-2007

    SciTech Connect

    Eto, Joseph H.; Parashar, Manu; Lewis, Nancy Jo

    2008-08-15

    The Real Time System Operations (RTSO) 2006-2007 project focused on two parallel technical tasks: (1) Real-Time Applications of Phasors for Monitoring, Alarming and Control; and (2) Real-Time Voltage Security Assessment (RTVSA) Prototype Tool. The overall goal of the phasor applications project was to accelerate adoption and foster greater use of new, more accurate, time-synchronized phasor measurements by conducting research and prototyping applications on California ISO's phasor platform, the Real-Time Dynamics Monitoring System (RTDMS), that provide previously unavailable information on the dynamic stability of the grid. Feasibility assessment studies were conducted on potential applications of this technology for small-signal stability monitoring, validating/improving existing stability nomograms, conducting frequency response analysis, and obtaining real-time sensitivity information on key metrics to assess grid stress. Based on study findings, prototype applications for real-time visualization and alarming, small-signal stability monitoring, measurement-based sensitivity analysis, and frequency response assessment were developed and factory- and field-tested at the California ISO and at BPA. The goal of the RTVSA project was to provide California ISO with a prototype voltage security assessment tool that runs in real time within California ISO's new reliability and congestion management system. CERTS conducted a technical assessment of appropriate algorithms and developed a prototype incorporating state-of-the-art algorithms (such as the continuation power flow, direct method, boundary orbiting method, and hyperplanes) into a framework most suitable for an operations environment. Based on study findings, a functional specification was prepared, which the California ISO has since used to procure a production-quality tool that is now part of a suite of advanced computational tools used by California ISO for reliability and congestion management.

  15. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  16. Computational Metabolomics Operations at BioCyc.org.

    PubMed

    Karp, Peter D; Billington, Richard; Holland, Timothy A; Kothari, Anamika; Krummenacker, Markus; Weaver, Daniel; Latendresse, Mario; Paley, Suzanne

    2015-05-22

    BioCyc.org is a genome and metabolic pathway web portal covering 5500 organisms, including Homo sapiens, Arabidopsis thaliana, Saccharomyces cerevisiae and Escherichia coli. These organism-specific databases have undergone variable degrees of curation. The EcoCyc (Escherichia coli Encyclopedia) database is the most highly curated; its contents have been derived from 27,000 publications. The MetaCyc (Metabolic Encyclopedia) database within BioCyc is a "universal" metabolic database that describes pathways, reactions, enzymes and metabolites from all domains of life. Metabolic pathways provide an organizing framework for analyzing metabolomics data, and the BioCyc website provides computational operations for metabolomics data that include metabolite search and translation of metabolite identifiers across multiple metabolite databases. The site allows researchers to store and manipulate metabolite lists using a facility called SmartTables, which supports metabolite enrichment analysis. That analysis operation identifies metabolite sets that are statistically over-represented for the substrates of specific metabolic pathways. BioCyc also enables visualization of metabolomics data on individual pathway diagrams and on the organism-specific metabolic map diagrams that are available for every BioCyc organism. Most of these operations are available both interactively and as programmatic web services.
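    An over-representation test of the kind the SmartTables enrichment analysis performs can be sketched with a one-sided hypergeometric p-value: how likely is it to see at least the observed overlap between a metabolite hit list and a pathway's substrates by chance? This is a generic formulation; the exact statistic and API that BioCyc uses may differ.

```python
from math import comb

def enrichment_p(population_size, pathway, hits):
    """One-sided hypergeometric p-value for over-representation:
    the probability of drawing at least `overlap` pathway metabolites
    when sampling len(hits) metabolites at random from a population of
    `population_size` metabolites. A generic sketch, not BioCyc's code."""
    n_path = len(pathway)
    n_hits = len(hits)
    overlap = len(set(pathway) & set(hits))
    p = 0.0
    for k in range(overlap, min(n_path, n_hits) + 1):
        p += (comb(n_path, k)
              * comb(population_size - n_path, n_hits - k)
              / comb(population_size, n_hits))
    return p
```

    A small p-value indicates the hit list contains more of the pathway's metabolites than random sampling would explain, flagging that pathway as enriched.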

  17. Computational Metabolomics Operations at BioCyc.org

    PubMed Central

    Karp, Peter D.; Billington, Richard; Holland, Timothy A.; Kothari, Anamika; Krummenacker, Markus; Weaver, Daniel; Latendresse, Mario; Paley, Suzanne

    2015-01-01

    BioCyc.org is a genome and metabolic pathway web portal covering 5500 organisms, including Homo sapiens, Arabidopsis thaliana, Saccharomyces cerevisiae and Escherichia coli. These organism-specific databases have undergone variable degrees of curation. The EcoCyc (Escherichia coli Encyclopedia) database is the most highly curated; its contents have been derived from 27,000 publications. The MetaCyc (Metabolic Encyclopedia) database within BioCyc is a “universal” metabolic database that describes pathways, reactions, enzymes and metabolites from all domains of life. Metabolic pathways provide an organizing framework for analyzing metabolomics data, and the BioCyc website provides computational operations for metabolomics data that include metabolite search and translation of metabolite identifiers across multiple metabolite databases. The site allows researchers to store and manipulate metabolite lists using a facility called SmartTables, which supports metabolite enrichment analysis. That analysis operation identifies metabolite sets that are statistically over-represented for the substrates of specific metabolic pathways. BioCyc also enables visualization of metabolomics data on individual pathway diagrams and on the organism-specific metabolic map diagrams that are available for every BioCyc organism. Most of these operations are available both interactively and as programmatic web services. PMID:26011592

  18. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of the increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  19. A Framework for Enterprise Operating Systems Based on Zachman Framework

    NASA Astrophysics Data System (ADS)

    Ostadzadeh, S. Shervin; Rahmani, Amir Masoud

    Nowadays, the Operating System (OS) isn't only the software that runs your computer. In the typical information-driven organization, the operating system is part of a much larger platform for applications and data that extends across the LAN, WAN and Internet. An OS cannot be an island unto itself; it must work with the rest of the enterprise. Enterprise-wide applications require an Enterprise Operating System (EOS). The enterprise operating systems in use have brought with them an inevitable push towards organizing information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for the development and maintenance of enterprise operating systems. EA provides a thorough outline of the whole information system comprising an enterprise. To establish such an outline, a logical framework needs to be laid upon the entire information system. The Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that play prominent roles in enterprise-wide system development. In this paper, we propose a framework based on ZF for enterprise operating systems. The presented framework helps developers design and justify completely integrated business, IT, and operating systems, which results in an improved project success rate.

  20. Flintshire County Library; Computer Cataloguing System.

    ERIC Educational Resources Information Center

    Davies, Glyn, Ed.

    Computer techniques are being more generally applied within public libraries, but details of purpose, development, and operation are of interest to those on the brink of change. A recent application of computer techniques to a bilingual library book-stock is briefly outlined. The catalog records were computerized by section, and a bilingual…

  1. A manual accountability system designed to reduce operator error

    SciTech Connect

    Abramczyk, M R

    1989-01-01

    At the Savannah River Plant, the separations areas are not equipped with automated accountability systems; accountability is therefore performed manually. Several years ago, the Computer Systems Engineering group was asked to develop a computerized accountability system for the separations areas that would rely on manual entry and perform the necessary computations, adjust and maintain the books, and generate the necessary reports. In addition, the system would provide a complete audit trail and help reduce operator errors. Since the separations areas are actually divided into several material balance areas, the Computer Systems Engineering group was faced with several detailed specifications. Rather than designing a computerized accountability system for each material balance area, they designed a generic system that each area could tailor to its process. The system helps reduce operator errors by displaying simple data entry forms, performing data validations where possible, providing field help, performing all computations, and generating the necessary reports. Many validation tables are user-configurable, as are the equations for computing transfer and inventory values. 8 figs.

  2. Teaching Environmental Systems Modelling Using Computer Simulation.

    ERIC Educational Resources Information Center

    Moffatt, Ian

    1986-01-01

    A computer modeling course in environmental systems and dynamics is presented. The course teaches senior undergraduates to analyze a system of interest, construct a system flow chart, and write computer programs to simulate real world environmental processes. An example is presented along with a course evaluation, figures, tables, and references.…

  3. Computer Programs For Automated Welding System

    NASA Technical Reports Server (NTRS)

    Agapakis, John E.

    1993-01-01

    Computer programs developed for use in controlling the automated welding system described in MFS-28578. Together with the control computer, computer input and output devices, and control sensors and actuators, they provide a flexible capability for planning and implementing schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing the amount of "teaching" of the system by a technician. The system provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.

  4. Specification of Computer Systems by Objectives.

    ERIC Educational Resources Information Center

    Eltoft, Douglas

    1989-01-01

    Discusses the evolution of mainframe and personal computers, and presents a case study of a network developed at the University of Iowa called the Iowa Computer-Aided Engineering Network (ICAEN) that combines Macintosh personal computers with Apollo workstations. Functional objectives are stressed as the best measure of system performance. (LRW)

  5. Software for computer-aided receiver operating characteristic (ROC) analysis

    NASA Astrophysics Data System (ADS)

    Engel, John R.; Craine, Eric R.

    1994-04-01

    We are currently developing an easy-to-use, microcomputer-based software application to help researchers perform ROC studies. The software will have facilities for aiding the researcher in all phases of an ROC study, including experiment design, setting up and conducting test sessions, analyzing results and generating reports. The initial version of the software, named 'ROC Assistant', operates on Macintosh computers and enables the user to enter a case list, run test sessions and produce an ROC curve. We are in the process of developing enhanced versions which will incorporate functions for statistical analysis, experimental design and online help. In this paper we discuss the ROC methodology upon which the software is based as well as our software development effort to date.
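    The core computation of such a tool, turning observer rating scores and ground-truth labels into an ROC curve and its area, can be sketched as below. The function names, and the assumption of distinct scores (no ties), are mine for illustration and are not taken from the "ROC Assistant" software.

```python
def roc_points(scores, labels):
    """Sweep the decision threshold over the observed scores and record
    (false-positive rate, true-positive rate) points. `labels` are 1 for
    positive (e.g. abnormal) cases and 0 for negative cases; at least one
    of each is assumed, and scores are assumed distinct (no tie handling)."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    P = sum(labels)
    N = len(labels) - P
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / N, tp / P))
    return pts

def auc(pts):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

    An AUC of 0.5 corresponds to chance-level rating performance and 1.0 to perfect separation of positive and negative cases.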

  6. Highlights of the GURI hydroelectric plant computer control system

    SciTech Connect

    Dal Monte, R.; Banakar, H.; Hoffman, R.; Lebeau, M.; Schroeder, R.

    1988-07-01

    The GURI power plant on the Caroni river in Venezuela has 20 generating units with a total capacity of 10,000 MW, the largest currently operating in the world. The GURI Computer Control System (GCS) provides comprehensive operation management of the entire power plant and the adjacent switchyards. This article describes some highlights of the functions of the state-of-the-art system. The topics considered include the operating modes of the remote terminal units (RTUs), automatic start/stop of generating units, RTU closed-loop control, automatic generation and voltage control, unit commitment, the operator training simulator, and maintenance management.

  7. ALLY: An operator's associate for satellite ground control systems

    NASA Technical Reports Server (NTRS)

    Bushman, J. B.; Mitchell, Christine M.; Jones, P. M.; Rubin, K. S.

    1991-01-01

    The key characteristics of an intelligent advisory system are explored. A central feature is that human-machine cooperation should be based on a metaphor of human-to-human cooperation. ALLY, a computer-based operator's associate grounded in a preliminary theory of human-to-human cooperation, is discussed. ALLY assists the operator in carrying out the supervisory control functions for a simulated NASA ground control system. Experimental evaluation of ALLY indicates that operators using ALLY performed at least as well as they did when using a human associate, and in some cases even better.

  8. Representation of feedback operators for hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1995-01-01

    We consider the problem of obtaining integral representation of feedback operators for damped hyperbolic control systems. We show that for the wave equation with Kelvin-Voigt damping and non-compact input operator, the feedback gain operator is Hilbert-Schmidt. This result is then used to provide an explicit integral representation for the feedback operator in terms of functional gains. Numerical results are given to illustrate the role that damping plays in the smoothness of these gains.
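    In hedged general terms (the notation below is assumed for illustration, not quoted from the paper), an integral representation of the feedback law for the wave-equation state $z = (w, w_t)$ takes the form

```latex
u(t) = -\mathcal{K}\, z(t)
     = -\int_{\Omega} \bigl[\, k_{1}(x)\, w(x,t) \;+\; k_{2}(x)\, w_{t}(x,t) \,\bigr]\, dx ,
```

    where $k_1$ and $k_2$ are the functional gains over the spatial domain $\Omega$. The Hilbert-Schmidt property of the gain operator corresponds to these kernels being square-integrable, which is what makes an explicit, smooth integral representation meaningful.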

  9. Overreaction to External Attacks on Computer Systems Could Be More Harmful than the Viruses Themselves.

    ERIC Educational Resources Information Center

    King, Kenneth M.

    1988-01-01

    Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)

  10. An information management and communications system for emergency operations

    SciTech Connect

    Gladden, C.A.; Doyle, J.F.

    1995-09-01

    In the mid-1980s the US Department of Energy (DOE) recognized the need to dramatically expand its Emergency Operations Centers to deal with the large variety of emergencies that DOE has an obligation to manage or provide technical support for. This paper describes information management, display, and communications systems that have been implemented at the DOE Headquarters Forrestal Operations Center (OC), DOE Operations Offices, and critical laboratory locations. Major elements of the system at the DOE Headquarters facility include computer control, information storage and retrieval, processing, Local Area Networks (LANs), videoconferencing, video display, and audio systems. These Headquarters systems are linked by Wide Area Networks (WANs) to similar systems at the Operations Office and critical laboratory locations.

  11. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics of the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  12. Technology development for remote, computer-assisted operation of a continuous mining machine

    SciTech Connect

    Schnakenberg, G.H.

    1993-12-31

    The U.S. Bureau of Mines was created to conduct research to improve the health, safety, and efficiency of the coal and metal mining industries. In 1986, the Bureau embarked on a new, major research effort to develop the technology that would enable the relocation of workers from hazardous areas to areas of relative safety. This effort is in contrast to historical efforts by the Bureau of controlling or reducing the hazardous agent or providing protection to the worker. The technologies associated with automation, robotics, and computer software and hardware systems had progressed to the point that their use to develop computer-assisted operation of mobile mining equipment appeared to be a cost-effective and accomplishable task. At the first International Symposium of Mine Mechanization and Automation, an overview of the Bureau's computer-assisted mining program for underground coal mining was presented. The elements included providing computer-assisted tele-remote operation of continuous mining machines, haulage systems and roof bolting machines. Areas of research included sensors for machine guidance and for coal interface detection. Additionally, the research included computer hardware and software architectures which are extremely important in developing technology that is transferable to industry and is flexible enough to accommodate the variety of machines used in coal mining today. This paper provides an update of the research under the computer-assisted mining program.

  13. Self-pacing direct memory access data transfer operations for compute nodes in a parallel computer

    SciTech Connect

    Blocksome, Michael A

    2015-02-17

    Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, an RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.

  14. SD-CAS: Spin Dynamics by Computer Algebra System.

    PubMed

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and are easily mapped into analytical expressions in terms of spin operator products. For the spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples.
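    The non-matrix strategy described, storing coefficients over spin-operator products and multiplying through the spin-1/2 algebra itself, can be sketched as below. The coefficient-list layout and function names are illustrative assumptions, not SD-CAS's actual routines; the algebra used is the standard spin-1/2 product rule (with hbar = 1): Sa*Sb = (1/4) delta_ab I + (i/2) eps_abc Sc.

```python
# Basis indices: 0 -> identity I, 1..3 -> Sx, Sy, Sz (spin-1/2, hbar = 1).
# Levi-Civita symbol for the three spin axes.
EPS = {(1, 2, 3): 1, (2, 3, 1): 1, (3, 1, 2): 1,
       (1, 3, 2): -1, (3, 2, 1): -1, (2, 1, 3): -1}

def multiply(u, v):
    """Multiply two operators given as coefficient lists over
    [I, Sx, Sy, Sz], using the spin-1/2 product rule directly --
    no matrix representation of the operators is ever built."""
    out = [0j, 0j, 0j, 0j]
    for a, ca in enumerate(u):
        for b, cb in enumerate(v):
            if ca == 0 or cb == 0:
                continue
            c = ca * cb
            if a == 0:                # I * X = X
                out[b] += c
            elif b == 0:              # X * I = X
                out[a] += c
            elif a == b:              # Sa * Sa = (1/4) I
                out[0] += c / 4
            else:                     # Sa * Sb = (i/2) eps_abc Sc
                k = 6 - a - b         # the remaining axis index
                out[k] += c * 0.5j * EPS[(a, b, k)]
    return out

def commutator(u, v):
    """[u, v] = u*v - v*u, again purely on coefficient lists."""
    uv = multiply(u, v)
    vu = multiply(v, u)
    return [x - y for x, y in zip(uv, vu)]
```

    With this representation, [Sx, Sy] = i Sz falls out of the structure constants alone, which is the essence of the non-matrix approach the abstract describes.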

  15. SD-CAS: Spin Dynamics by Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and are easily mapped into analytical expressions in terms of spin operator products. For the spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples.

  16. D0 Cryogenic System Operator Training

    SciTech Connect

    Markley, D.; /Fermilab

    1991-11-30

    D0 is a collider detector. It will be operating and doing physics at the same time as CDF; therefore it has been decided to train CDF operators to operate and respond to the D0 cryogenic control system. A cryogenic operator will be required to be in residence at D0 during the cooldown and liquid-argon fill of any of the calorimeters. The cryogenic system at D0 is designed to be unmanned during steady-state operation. CDF operations has two-man cryogenic shifts 24 hours a day. It is intended that CDF operators monitor the D0 cryogenic systems, evaluate and respond to alarms, and notify a D0 cryo expert in the event of an unusual problem. A D0 cryogenic system view node has been installed at CDF to help facilitate these goals. It should be noted that even though the CDF view node is a fully operational node, it is intended to be more of an information node and is therefore password protected. The D0 cryo experts may reassess the use of the CDF node at a later date based on experience and operating needs. This engineering note outlines the format of the training and testing given to the CDF operators to make them qualified D0 operators.

  17. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  18. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  19. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  20. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  1. 47 CFR 101.1009 - System operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false System operations. 101.1009 Section 101.1009 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Local Multipoint Distribution Service § 101.1009 System operations. (a) The licensee may...

  2. 47 CFR 32.2220 - Operator systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Operator systems. 32.2220 Section 32.2220 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2220 Operator...

  3. 47 CFR 32.2220 - Operator systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Operator systems. 32.2220 Section 32.2220 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2220 Operator...

  4. Deepen the Teaching Reform of Operating System, Cultivate the Comprehensive Quality of Students

    ERIC Educational Resources Information Center

    Liu, Jianjun

    2010-01-01

    Operating system is the core course of the specialty of computer science and technology. Understanding and mastering the operating system will directly affect students' further study of other courses. The course of operating system focuses more on theories; its contents are more abstract and its knowledge system is more complicated. Therefore,…

  5. VOCATIONAL EDUCATION INFORMATION SYSTEM. STATE OPERATING MANUAL, VOLUME 2.

    ERIC Educational Resources Information Center

    Federal Electric Corp., Paramus, NJ.

    This document supplements report AA 000 157, a state-level operating manual for the nationwide Vocational Education Information System (VEIS). It contains all documentation generated for a pilot demonstration of VEIS in California, including data collection forms and instructions, functional and technical flow charts, computer programs, and…

  6. Fault tolerant computing: A preamble for assuring viability of large computer systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1977-01-01

    The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.

  7. Small Aircraft Transportation System, Higher Volume Operations Concept: Normal Operations

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Jones, Kenneth M.; Consiglio, Maria C.; Williams, Daniel M.; Adams, Catherine A.

    2004-01-01

    This document defines the Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) concept for normal conditions. In this concept, a block of airspace would be established around designated non-towered, non-radar airports during periods of poor weather. Within this new airspace, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. Using onboard equipment and procedures, they would then approach and land at the airport. Departures would be handled in a similar fashion. The details for this operational concept are provided in this document.

  8. Resource requirements for digital computations on electrooptical systems.

    PubMed

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free-space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w × w kernel with an n × n image, if the input bits are given to the system only once.
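    The 2-D convolution operation that the Ω(nw) bound applies to can be sketched directly. This is an illustrative, direct (cross-correlation-style, kernel flip omitted for clarity) implementation; the function name and zero-padding choice are mine, not the paper's.

    ```python
    def convolve2d(image, kernel):
        """Direct O(n^2 * w^2) pass of a w x w kernel over an n x n image,
        with zero padding at the borders."""
        n, w = len(image), len(kernel)
        half = w // 2
        out = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                acc = 0
                for a in range(w):
                    for b in range(w):
                        ii, jj = i + a - half, j + b - half
                        # Out-of-range neighbors are treated as zero.
                        if 0 <= ii < n and 0 <= jj < n:
                            acc += image[ii][jj] * kernel[a][b]
                out[i][j] = acc
        return out
    ```

    Every output pixel touches w² inputs, which is the information-transfer pressure the paper's volume bound formalizes.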

  9. B190 computer controlled radiation monitoring and safety interlock system

    SciTech Connect

    Espinosa, D L; Fields, W F; Gittins, D E; Roberts, M L

    1998-08-01

    The Center for Accelerator Mass Spectrometry (CAMS) in the Earth and Environmental Sciences Directorate at Lawrence Livermore National Laboratory (LLNL) operates two accelerators and is in the process of installing two new additional accelerators in support of a variety of basic and applied measurement programs. To monitor the radiation environment in the facility in which these accelerators are located and to terminate accelerator operations if predetermined radiation levels are exceeded, an updated computer controlled radiation monitoring system has been installed. This new system also monitors various machine safety interlocks and again terminates accelerator operations if machine interlocks are broken. This new system replaces an older system that was originally installed in 1988. This paper describes the updated B190 computer controlled radiation monitoring and safety interlock system.

  10. Vision system for telerobotics operation

    NASA Astrophysics Data System (ADS)

    Wong, Andrew K. C.; Li, Li-Wei; Liu, Wei-Cheng

    1992-10-01

    This paper presents a knowledge-based vision system for a telerobotics guidance project. The system is capable of recognizing and locating 3-D objects from unrestricted viewpoints in a simulated unconstrained space environment. It constructs object representations for vision tasks from wireframe models; recognizes and locates objects in a 3-D scene; and provides world modeling capability to establish, maintain, and update a 3-D environment description for telerobotic manipulations. In this paper, an object model is represented by an attributed hypergraph which contains direct structural (relational) information, with features grouped according to their multiple views so that the interpretations of the 3-D object and its 2-D projections are coupled. With this representation, object recognition is directed by a knowledge-directed hypothesis refinement strategy. The strategy starts with the identification of 2-D local feature characteristics for initiating feature and relation matching. Next, it refines the matching by adding 2-D features from the image according to viewpoint and geometric consistency. Finally, it links the successful matchings back to the 3-D model to recover the feature, relation, and location information of the recognized object. The paper also presents the implementation and experimentation of the vision prototype.

  11. A Framework for Adaptable Operating and Runtime Systems

    SciTech Connect

    Sterling, Thomas

    2014-03-04

    The emergence of new classes of HPC systems, where performance improvement enabled by Moore's Law is manifest through multi-core-based architectures including specialized GPU structures, has reshaped system software requirements. Operating systems were originally designed for control of uniprocessor systems. By the 1980s multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model, with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in the number of cores used to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken through the leadership of Sandia National Laboratories and in partnership with the University of New Mexico to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration to investigate possible strategies and methods for composable lightweight kernel operating systems toward support for extreme-scale systems.

  12. Using the Computer in Systems Engineering Design

    ERIC Educational Resources Information Center

    Schmidt, W.

    1970-01-01

    With the aid of the programmed computer, the systems designer can analyze systems for which certain components have not yet been manufactured or even invented, and the power of solution-technique is greatly increased. (IR)

  13. Interactive graphical computer-aided design system

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1975-01-01

    System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.

  14. Computer controlled thermal fatigue test system

    SciTech Connect

    Schmale, D.T.; Jones, W.B.

    1986-01-01

    A servo-controlled hydraulic mechanical test system has been configured to conduct computer-controlled thermal fatigue tests. The system uses induction heating, a digital temperature controller, infrared pyrometry, forced air cooling, and quartz rod extensometry. In addition, a digital computer controls the tests and allows precise data analysis and interpretation.

  15. Computer Literacy in a Distance Education System

    ERIC Educational Resources Information Center

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with the seven computer-usage (ICDL) skills. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  16. Mammographic computer-aided detection systems.

    PubMed

    2003-04-01

    While mammography is regarded as the best means available to screen for breast cancer, reading mammograms is a tedious, error-prone task. Given the repetitiveness of the process and the fact that less than 1% of mammograms in the average screening population contain cancer, it's no wonder that a significant number of breast cancers--about 28%--are missed by radiologists. The fact that human error is such a significant obstacle makes mammography screening an ideal application for computer-aided detection (CAD) systems. CAD systems serve as a "second pair of eyes" to ensure that radiologists don't miss a suspect area on an image. They analyze patterns on a digitized mammographic image, identify regions that may contain an abnormality indicating cancer, and mark these regions. The marks are then inspected and classified by a radiologist. But CAD systems provide no diagnosis of any kind--it's up to the radiologist to analyze the marked area and decide if it shows cancer. In this Evaluation, we describe the challenges posed by screening mammography, the operating principles and overall efficacy of CAD systems, and the characteristics to consider when purchasing a system. We also compare the performance of two commercially available systems, iCAD's MammoReader and R2's ImageChecker. Because the two systems offer comparable sensitivity, our judgments are based on other performance characteristics, including their ease of use, the number of false marks they produce, the degree to which they can integrate with hospital information systems, and their processing speed.

  17. Information and computer-aided system for structural materials

    NASA Astrophysics Data System (ADS)

    Nekrashevitch, Ju. G.; Nizametdinov, Sh. U.; Polkovnikov, A. V.; Rumjantzev, V. P.; Surina, O. N.; Kalinin, G. M.; Sidorenkov, A. V.; Strebkov, Ju. S.

    1992-09-01

    An information and computer-aided system for structural materials data has been developed to provide data for the fusion and fission reactor system design. It is designed for designers, industrial engineers, and material science specialists and provides a friendly interface in an interactive mode. The database for structural materials contains the master files: chemical composition, physical, mechanical, corrosion, and technological properties, and regulatory and technical documentation. The system is implemented on a PC/AT running the OS/2 operating system.

  18. Achieving Operability via the Mission System Paradigm

    NASA Technical Reports Server (NTRS)

    Hammer, Fred J.; Kahr, Joseph R.

    2006-01-01

    In the past, flight and ground systems have been developed largely independently, with the flight system taking the lead and dominating the development process. Operability issues have been addressed poorly in planning, requirements, design, I&T, and system-contracting activities. In many cases, as documented in lessons learned, this has resulted in significant avoidable increases in cost and risk. With complex missions and systems, operability is being recognized as an important end-to-end design issue. Nevertheless, lessons learned and operability concepts remain, in many cases, poorly understood and sporadically applied. A key to effective application of operability concepts is adopting a 'mission system' paradigm. In this paradigm, flight and ground systems are treated, from an engineering and management perspective, as inter-related elements of a larger mission system. The mission system consists of flight hardware, flight software, telecom services, ground data system, testbeds, flight teams, science teams, flight operations processes, procedures, and facilities. The system is designed in functional layers, which span flight and ground. It is designed in response to project-level requirements, mission design, and an operations concept, and is developed incrementally, with early and frequent integration of flight and ground components.

  19. NSLS beam line data acquisition and analysis computer system

    SciTech Connect

    Feng-Berman, S.K.; Siddons, D.P.; Berman, L.

    1993-11-01

    A versatile computer environment to manage instrumentation alignment and experimental control at NSLS beam lines has been developed. The system is based on a 386/486 personal computer running under a UNIX operating system with X11 Windows. It offers an ideal combination of capability, flexibility, compatibility, and cost. With a single personal computer, the beam line user can run a wide range of scattering and spectroscopy experiments using a multi-tasking data collection program which can interact with CAMAC, GPIB and AT-Bus interfaces, and simultaneously examine and analyze data and communicate with remote network nodes.

  20. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides the applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system; that is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems.
Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data

  1. Surface Operations Systems Improve Airport Efficiency

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With Small Business Innovation Research (SBIR) contracts from Ames Research Center, Mosaic ATM of Leesburg, Virginia created software to analyze surface operations at airports. Surface surveillance systems, which report locations every second for thousands of air and ground vehicles, generate massive amounts of data, making gathering and analyzing this information difficult. Mosaic's Surface Operations Data Analysis and Adaptation (SODAA) tool is an off-line support tool that can analyze how well the airport surface operation is working and can help redesign procedures to improve operations. SODAA helps researchers pinpoint trends and correlations in vast amounts of recorded airport operations data.

  2. PC-based automation system streamlines operations

    SciTech Connect

    Bowman, J.

    1995-10-01

    The continued emergence of PC-based automation systems in the modern compressor station is driving the need for personnel who have the special skills needed to support them. However, the dilemma is that operating budget restraints limit the overall number of people available to operate and maintain compressor stations. An ideal solution is to deploy automation systems which can be easily understood and supported by existing compressor station personnel. This paper reviews such a system developed by Waukesha-Pearce Industries, Inc.

  3. NASA Customer Data and Operations System

    NASA Technical Reports Server (NTRS)

    Butler, Madeline J.; Stallings, William H.

    1991-01-01

    In addition to the currently provided NASA services such as Communications and Tracking and Data Relay Satellite System services, the NASA's Customer Data and Operations System (CDOS) will provide the following services to the user: Data Delivery Service, Data Archive Service, and CDOS Operations Management Service. This paper describes these services in detail and presents respective block diagrams. The CDOS services will support a variety of multipurpose missions simultaneously with centralized and common hardware and software data-driven systems.

  4. Performing a global barrier operation in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
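    The patent's two-level scheme can be simulated with threads: every task on a node joins a node-local barrier, and only the node's master task then joins the global barrier shared by all masters. This is a hypothetical sketch (function and variable names are mine), using Python threads to stand in for compute-node tasks.

    ```python
    import threading

    def run_two_level_barrier(num_nodes, tasks_per_node, log):
        """Simulate the two-level barrier: local barrier per node,
        global barrier joined only by each node's master task."""
        global_barrier = threading.Barrier(num_nodes)  # masters only
        lock = threading.Lock()

        def node(node_id):
            local_barrier = threading.Barrier(tasks_per_node)

            def task(task_id):
                is_master = (task_id == 0)
                # Every task joins the local barrier; it releases only
                # once all tasks on this node have arrived.
                local_barrier.wait()
                if is_master:
                    # Master joins the global barrier only after all
                    # local peers have joined the local barrier.
                    global_barrier.wait()
                    with lock:
                        log.append(node_id)

            threads = [threading.Thread(target=task, args=(t,))
                       for t in range(tasks_per_node)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()

        nodes = [threading.Thread(target=node, args=(n,))
                 for n in range(num_nodes)]
        for n in nodes:
            n.start()
        for n in nodes:
            n.join()
    ```

    Non-master tasks never touch the global barrier, which is the point of the design: global synchronization traffic scales with the number of nodes, not the number of tasks.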

  5. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
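    The computation-to-communication ratio the paper compares partitions by can be illustrated for the simplest case: a square b × b block of the mesh under nearest-neighbor (5-point) communication. The function below is my illustration, not the paper's formulation; computation scales with block area while communication scales with the perimeter.

    ```python
    def comp_to_comm_ratio(b):
        """Computation-to-communication ratio for a square b x b block
        of a 2-D mesh with nearest-neighbor communication."""
        computation = b * b    # one value evaluated per mesh point
        communication = 4 * b  # boundary values exchanged with 4 neighbors
        return computation / communication
    ```

    The ratio is b/4, so doubling the block side doubles it: larger blocks amortize communication better, at the cost of fewer processors being used.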

  6. Software-based geometry operations for 3D computer graphics

    NASA Astrophysics Data System (ADS)

    Sima, Mihai; Iancu, Daniel; Glossner, John; Schulte, Michael; Mamidi, Suman

    2006-02-01

    In order to support a broad dynamic range and a high degree of precision, many of 3-D rendering's fundamental algorithms have traditionally been performed in floating-point. However, fixed-point data representation is preferable over floating-point representation in graphics applications on embedded devices where performance is of paramount importance, while the dynamic range and precision requirements are limited due to the small display sizes (current PDAs are 640 × 480 (VGA), while cell phones are even smaller). In this paper we analyze the efficiency of a CORDIC-augmented Sandbridge processor when implementing a vertex processor in software using fixed-point arithmetic. A CORDIC-based solution for vertex processing exhibits a number of advantages over classical Multiply-and-Accumulate solutions. First, since a single primitive is used to describe the computation, the code can easily be vectorized and multithreaded, and thus fits the major Sandbridge architectural features. Second, since a CORDIC iteration consists of only a shift operation followed by an addition, the computation may be deeply pipelined. Initially, we outline the Sandbridge architecture extension which encompasses a CORDIC functional unit and the associated instructions. Then, we consider rigid-body rotation, lighting, exponentiation, vector normalization, and perspective division (which are some of the most important data-intensive 3-D graphics kernels) and propose a scheme to implement them on the CORDIC-augmented Sandbridge processor. Preliminary results indicate that the performance improvement within the extended instruction set ranges from 3× to 10× (with the exception of rigid-body rotation).
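    The CORDIC primitive the abstract refers to, where each iteration is a shift followed by an addition, can be sketched in rotation mode. This sketch uses floating-point for readability (the paper's point is that the `2**-i` scalings become integer shifts in fixed-point); the function name and iteration count are mine.

    ```python
    import math

    def cordic_rotate(x, y, theta, iterations=16):
        """Rotate (x, y) by angle theta (|theta| < ~1.74 rad) using CORDIC:
        each iteration is a pair of scaled adds (shifts, in fixed-point)."""
        angles = [math.atan(2.0 ** -i) for i in range(iterations)]
        # Pre-computed gain compensation: product of 1/sqrt(1 + 2^-2i).
        k = 1.0
        for i in range(iterations):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        z = theta
        for i, a in enumerate(angles):
            d = 1 if z >= 0 else -1            # rotate toward residual angle
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * a
        return x * k, y * k
    ```

    Starting from (1, 0), the result approximates (cos θ, sin θ); accuracy improves by roughly one bit per iteration, which is why the loop pipelines so well in hardware.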

  7. Study of operational parameters impacting helicopter fuel consumption. [using computer techniques (computer programs)

    NASA Technical Reports Server (NTRS)

    Cross, J. L.; Stevens, D. D.

    1976-01-01

    A computerized study of operational parameters affecting helicopter fuel consumption was conducted as an integral part of the NASA Civil Helicopter Technology Program. The study utilized the Helicopter Sizing and Performance Computer Program (HESCOMP) developed by the Boeing-Vertol Company and NASA Ames Research Center. An introduction to HESCOMP is incorporated in this report. The results presented were calculated using the NASA CH-53 civil helicopter research aircraft specifications. Plots from which optimum flight conditions for minimum fuel use can be obtained are presented for this aircraft. The results of the study are considered to be generally indicative of trends for all helicopters.

  8. Computer Bits: The Ideal Computer System for Your Center.

    ERIC Educational Resources Information Center

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  9. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    NASA structural analysis (NASTRAN) computer program is operational on three series of third generation computers. The problem and difficulties involved in adapting NASTRAN to a fourth generation computer, namely, the Control Data STAR-100, are discussed. The salient features which distinguish Control Data STAR-100 from third generation computers are hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are noted for optimization for vector processing.

  10. Efficient O(N) recursive computation of the operational space inertial matrix

    SciTech Connect

    Lilly, K.W.; Orin, D.E.

    1993-09-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.

  11. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    NASA Technical Reports Server (NTRS)

    Stoughton, John W. (Inventor); Mielke, Roland V. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computationally marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
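    The assignment manager's core rule, dispatching a primitive operation only once all of its input data edges are marked available, can be sketched as a small dataflow scheduler. This is a hypothetical illustration (the patent describes hardware/system behavior, not this API); all names here are mine.

    ```python
    def run_dataflow(ops, deps, execute):
        """Dispatch each operation only when every operation it depends on
        has completed. ops: operation names; deps: op -> set of producer
        ops; execute(op): performs the primitive operation.
        Returns the dispatch order."""
        done, order = set(), []
        pending = set(ops)
        while pending:
            # An op is ready when all of its data edges are marked done.
            ready = [op for op in pending if deps.get(op, set()) <= done]
            if not ready:
                raise RuntimeError("cycle or missing data edge")
            for op in ready:  # in hardware, these go to functional units concurrently
                execute(op)
                done.add(op)
                order.append(op)
                pending.remove(op)
        return order
    ```

    Every operation in one `ready` batch is mutually independent, which is exactly what lets the functional units run them concurrently.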

  12. Operation of staged membrane oxidation reactor systems

    DOEpatents

    Repasky, John Michael

    2012-10-16

    A method of operating a multi-stage ion transport membrane oxidation system. The method comprises providing a multi-stage ion transport membrane oxidation system with at least a first membrane oxidation stage and a second membrane oxidation stage, operating the ion transport membrane oxidation system at operating conditions including a characteristic temperature of the first membrane oxidation stage and a characteristic temperature of the second membrane oxidation stage; and controlling the production capacity and/or the product quality by changing the characteristic temperature of the first membrane oxidation stage and/or changing the characteristic temperature of the second membrane oxidation stage.

  13. VLT Data Flow System Begins Operation

    NASA Astrophysics Data System (ADS)

    1999-06-01

    their proposed observations and provide accurate estimates of the amount of telescope time they will need to complete their particular scientific programme. Once the proposals have been reviewed by the OPC and telescope time is awarded by the ESO management according to the recommendation by this Committee, the successful astronomers begin to assemble detailed descriptions of their intended observations (e.g. position in the sky, time and duration of the observation, the instrument mode, etc.) in the form of computer files called Observation Blocks (OBs). The software to make OBs is distributed by ESO and used by the astronomers at their home institutions to design their observing programs well before the observations are scheduled at the telescope. The OBs can then be directly executed by the VLT and result in an increased efficiency in the collection of raw data (images, spectra) from the science instruments on the VLT. The activation (execution) of OBs can be done by the astronomer at the telescope on a particular set of dates (visitor mode operation) or it can be done by ESO science operations astronomers at times which are optimally suited for the particular scientific programme (service mode operation). An enormous VLT Data Archive. Caption to ESO PR Photo 25b/99: The first of several DVD storage robots at the VLT Data Archive at the ESO headquarters includes 1100 DVDs (with a total capacity of about 16 Terabytes) that may be rapidly accessed by the archive software system, ensuring fast availability of the requested data. The raw data generated at the telescope are stored by an archive system that sends these data regularly back to ESO headquarters in Garching (Germany) in the form of CD and DVD ROM disks.
While the well-known Compact Disks (CD ROMs) store about 600 Megabytes (600,000,000 bytes) each, the

  14. Architectural requirements for the Red Storm computing system.

    SciTech Connect

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development, and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system, and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  15. MTA Computer Based Evaluation System.

    ERIC Educational Resources Information Center

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  16. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.

  17. Personal computer applications in DIII-D neutral beam operation

    SciTech Connect

    Glad, A.S.

    1986-08-01

    An IBM PC AT has been implemented to improve operation of the DIII-D neutral beams. The PC system provides centralization of all beam data with reasonable access for on-line shot-to-shot control and analysis. The PC hardware was configured to interface all four neutral beam host minicomputers, support multitasking, and provide storage for approximately one month's accumulation of beam data. The PC software is composed of commercial packages used for performance and statistical analysis (i.e., LOTUS 123, PC PLOT, etc.), host communications software (i.e., PCLink, KERMIT, etc.), and applications software developed using FORTRAN and BASIC. The objectives of this paper are to describe the implementation of the PC system, the methods of integrating the various software packages, and the scenario for on-line control and analysis.

  18. Three computer codes to read, plot and tabulate operational test-site recorded solar data

    NASA Technical Reports Server (NTRS)

    Stewart, S. D.; Sampson, R. S., Jr.; Stonemetz, R. E.; Rouse, S. L.

    1980-01-01

    Computer programs used to process data that will be used in the evaluation of collector efficiency and solar system performance are described. The program, TAPFIL, reads data from an IBM 360 tape containing information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites. Two other programs, CHPLOT and WRTCNL, plot and tabulate the data from the direct access, unformatted TAPFIL file. The methodology of the programs, their inputs, and their outputs are described.

  19. Computer-aided monitoring and operation of continuous measuring devices.

    PubMed

    Rieger, L; Thomann, M; Joss, A; Gujer, W; Siegrist, H

    2004-01-01

    Extended studies of measuring and control systems in activated sludge plants at EAWAG revealed that the measuring devices remain the weakest point in control applications. To overcome this problem, a software package was developed which analyses and evaluates the residuals between a reference measurement and the sensor and collects the information in a database. The underlying monitoring concept is based on a two-step evaluation of the residuals by means of statistical evaluations using control charts with two different sets of criteria. The first step is a warning phase in which hints of probable errors trigger an increase in the monitoring frequency. In the second step, the alarm phase, the error hypothesis has to be validated and should allow immediate and targeted reactions from the operator. This procedure enables an optimized and flexible monitoring effort combined with an increased probability of early detection of systematic measuring errors. Besides the monitoring concept, information about the measuring device, the performed servicing actions and the responsibilities is stored. Statistical values for the quantitative characterization of the measuring system during operation will be given. They are needed to parameterise controllers or to guarantee the accuracy of the instrument in order to allow reliable calculations of effluent tax. In contrast to other concepts, not only is the measuring device examined under standard conditions, but so is the entire measuring chain, from the liquid to be analysed to the value stored in the database of the supervisory system. Knowledge of the response time of the measuring system is then required in order to allow a comparison of the corresponding values.
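    The two-step warning/alarm evaluation can be sketched as a simple control-chart check on the sensor-vs-reference residuals. This is a minimal illustration only: the function name and the 2-sigma/3-sigma thresholds are assumptions, not the paper's actual criteria.

    ```python
    import statistics

    def evaluate_residuals(residuals, sigma, warn_k=2.0, alarm_k=3.0):
        """Two-step control-chart check on residuals (sensor minus reference).

        Returns "ok", "warning" (increase monitoring frequency), or
        "alarm" (validate the error hypothesis and notify the operator).
        warn_k/alarm_k are illustrative thresholds in units of the
        standard error of the mean residual.
        """
        mean_r = statistics.fmean(residuals)
        z = abs(mean_r) / (sigma / len(residuals) ** 0.5)
        if z > alarm_k:
            return "alarm"
        if z > warn_k:
            return "warning"
        return "ok"
    ```

    With a residual standard deviation of 1.0, a batch of small residuals passes, a consistent moderate offset raises a warning, and a large systematic offset raises an alarm.
    
    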

  20. Computer-Based Medical System

    NASA Technical Reports Server (NTRS)

    1998-01-01

    SYMED, Inc., developed a unique electronic medical records and information management system. The S2000 Medical Interactive Care System (MICS) incorporates both a comprehensive and interactive medical care support capability and an extensive array of digital medical reference materials in either text or high resolution graphic form. The system was designed, in cooperation with NASA, to improve the effectiveness and efficiency of physician practices. The S2000 is a MS (Microsoft) Windows based software product which combines electronic forms, medical documents, records management, and features a comprehensive medical information system for medical diagnostic support and treatment. SYMED, Inc. offers access to its medical systems to all companies seeking competitive advantages.

  1. System security in the space flight operations center

    NASA Technical Reports Server (NTRS)

    Wagner, David A.

    1988-01-01

    The Space Flight Operations Center is a networked system of workstation-class computers that will provide ground support for NASA's next generation of deep-space missions. The author recounts the development of the SFOC system security policy and discusses the various management and technology issues involved. Particular attention is given to risk assessment, security plan development, security implications of design requirements, automatic safeguards, and procedural safeguards.

  2. Operation System for Automatization of an Experiment in Radioastronomy

    NASA Astrophysics Data System (ADS)

    Bogdanov, V. V.

    The problem-oriented operating system ER (for the minicomputer "Electronica-100 I") is intended for use at the low level of the "acquisition" unit of the automatic complex for radio observations at the radio telescope RATAN-600. The main functions of this system are: conducting the dialogue between user and computer, realizing the multitask regime, providing multiprogramming, and controlling data input/output.

  3. Establishing performance requirements of computer based systems subject to uncertainty

    SciTech Connect

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on the computer processing of external and internal information, the design process quickly borders chaos. This situation is exacerbated with the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.

  4. Public Address Systems. Specifications - Installation - Operation.

    ERIC Educational Resources Information Center

    Palmer, Fred M.

    Provisions for public address in new construction of campus buildings (specifications, installations, and operation of public address systems), are discussed in non-technical terms. Consideration is given to microphones, amplifiers, loudspeakers and the placement and operation of various different combinations. (FS)

  5. Nuclear Materials Identification System Operational Manual

    SciTech Connect

    Chiang, L.G.

    2001-04-10

    This report describes the operation and setup of the Nuclear Materials Identification System (NMIS) with a {sup 252}Cf neutron source at the Oak Ridge Y-12 Plant. The components of the system are described with a description of the setup of the system along with an overview of the NMIS measurements for scanning, calibration, and confirmation of inventory items.

  6. Utilization of Computer Technology in the Third World: An Evaluation of Computer Operations at the University of Honduras.

    ERIC Educational Resources Information Center

    Shermis, Mark D.

    This report of the results of an evaluation of computer operations at the University of Honduras (Universidad Nacional Autonoma de Honduras) begins by discussing the problem--i.e., poor utilization of the campus mainframe computer--and listing the hardware and software available in the computer center. Data collection methods are summarized,…

  7. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Harten, L.

    1989-08-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions.

  8. A computational system for a Mars rover

    NASA Technical Reports Server (NTRS)

    Lambert, Kenneth E.

    1989-01-01

    This paper presents an overview of an onboard computing system that can be used for meeting the computational needs of a Mars rover. The paper begins by presenting an overview of some of the requirements which are key factors affecting the architecture. The rest of the paper describes the architecture. Particular emphasis is placed on the criteria used in defining the system and how the system qualitatively meets the criteria.

  9. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Cook, G.

    1987-10-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions.

  10. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Cook, G.

    1985-03-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions.

  11. A System for Monitoring and Management of Computational Grids

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Biegel, Bryan (Technical Monitor)

    2002-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Users are also interested in the operation of resources and services so that they can choose the most appropriate ones to use. In this paper we describe a prototype system to monitor and manage computational grids and describe the general software framework for control and observation in distributed environments that it is based on.

  12. Noise and sonic-boom impact technology. PCBOOM computer program for sonic-boom research. Volume 2. Program users/computer operations manual. Final report, May 1987-October 1988

    SciTech Connect

    Salvetti, A.; Seidman, H.

    1988-10-01

    This report contains the information for both the user and for computer operations. The report provides the user with the information necessary to effectively use PCBOOM. In addition, it provides the computer operations personnel with a description of the computer system and its associated environment. Two other reports provide a technical discussion of the algorithms used and a program maintenance manual.

  13. Design and Implementation of Instructional Computer Systems.

    ERIC Educational Resources Information Center

    Graczyk, Sandra L.

    1989-01-01

    Presents an input-process-output (IPO) model that can facilitate the design and implementation of instructional micro and minicomputer systems in school districts. A national survey of school districts with outstanding computer systems is described, a systems approach to develop the model is explained, and evaluation of the system is discussed.…

  14. Intelligent computational systems for space applications

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.; Lau, Sonie

    1989-01-01

    The evolution of intelligent computation systems is discussed starting with the Spaceborne VHSIC Multiprocessor System (SVMS). The SVMS is a six-processor system designed to provide at least a 100-fold increase in both numeric and symbolic processing over the i386 uniprocessor. The significant system performance parameters necessary to achieve the performance increase are discussed.

  15. Operational reliability of standby safety systems

    SciTech Connect

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.

  16. PCOS - An operating system for modular applications

    NASA Technical Reports Server (NTRS)

    Tharp, V. P.

    1986-01-01

    This paper is an introduction to the PCOS operating system for the MC68000 family of processors. Topics covered are: development history; development support; rationale for the development of PCOS and its salient characteristics; architecture; and a brief comparison of PCOS to UNIX.

  17. Caustic addition system operability test procedure

    SciTech Connect

    Parazin, R.E.

    1994-11-01

    This test procedure provides instructions for performing operational testing of the major components of the 241-AN-107 Caustic Addition System by WHC and Kaiser personnel at the Rotating Equipment Shop run-in pit (Bldg. 272E).

  18. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
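    The dual, spare-switching arrangement with coverage factors that the abstract describes can be illustrated with a textbook closed-form model: two identical units with failure rate lam, where the switch to the spare succeeds with coverage probability c on the first fault. This is only a sketch of the simplest case; the report's models also cover transient faults, two operational modes, and conditional coverage factors.

    ```python
    import math

    def dual_spare_reliability(lam, c, t):
        """Reliability at time t of a duplex system with imperfect spare
        switching (textbook sketch, not the report's full model).

        lam: per-unit failure rate; c: coverage, i.e. the probability
        that switchover to the spare succeeds when the first unit fails.
        R(t) = e^{-2*lam*t} + 2c(e^{-lam*t} - e^{-2*lam*t}):
        either both units survive, or one fails, the switch succeeds,
        and the survivor lasts the rest of the interval.
        """
        return math.exp(-2 * lam * t) + 2 * c * (
            math.exp(-lam * t) - math.exp(-2 * lam * t))
    ```

    With c = 1 this reduces to the classic parallel-pair formula 2e^{-lam t} - e^{-2 lam t}; with c = 0 the pair is no better than a series system.
    
    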

  19. Computable measure of total quantum correlations of multipartite systems

    NASA Astrophysics Data System (ADS)

    Behdani, Javad; Akhtarshenas, Seyed Javad; Sarbishaei, Mohsen

    2016-04-01

    Quantum discord as a measure of quantum correlations cannot be easily computed for most density operators. In this paper, we present a measure of the total quantum correlations that is operationally simple and can be computed effectively for an arbitrary mixed state of a multipartite system. The measure is based on the coherence vector of the party whose quantumness is investigated, as well as the correlation matrix of this part with the remainder of the system. Being able to detect the quantumness of multipartite systems, such as detecting the quantum critical points in spin chains, alongside its computability, makes the measure a useful indicator in cases that are beyond the scope of other known measures.

  20. A micro-computer based system to compute magnetic variation

    NASA Technical Reports Server (NTRS)

    Kaul, R.

    1984-01-01

    A mathematical model of magnetic variation in the continental United States (COT48) was implemented in the Ohio University LORAN C receiver. The model is based on a least squares fit of a polynomial function. The implementation on the microprocessor-based LORAN C receiver is possible with the help of a math chip, the Am9511, which performs 32-bit floating point mathematical operations. A Peripheral Interface Adapter (M6520) is used to communicate between the 6502-based microcomputer and the 9511 math chip. The implementation provides magnetic variation data to the pilot as a function of latitude and longitude. The model and the real time implementation in the receiver are described.
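    The least-squares polynomial approach can be sketched as follows: fit variation samples to a polynomial in latitude and longitude via the normal equations, then evaluate the polynomial at the aircraft's position. The bilinear basis and all names here are illustrative assumptions; the abstract does not give the COT48 model's actual polynomial terms.

    ```python
    def fit_variation(samples):
        """Least-squares fit of magnetic variation (degrees) to an
        illustrative bilinear basis: var ~ c0 + c1*lat + c2*lon + c3*lat*lon.

        samples: iterable of (lat, lon, variation) tuples.
        Builds the normal equations A^T A c = A^T y and solves them by
        Gaussian elimination with partial pivoting (no external libraries).
        """
        basis = lambda lat, lon: [1.0, lat, lon, lat * lon]
        n = 4
        ata = [[0.0] * n for _ in range(n)]
        aty = [0.0] * n
        for lat, lon, var in samples:
            row = basis(lat, lon)
            for i in range(n):
                aty[i] += row[i] * var
                for j in range(n):
                    ata[i][j] += row[i] * row[j]
        for col in range(n):                      # forward elimination
            piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
            ata[col], ata[piv] = ata[piv], ata[col]
            aty[col], aty[piv] = aty[piv], aty[col]
            for r in range(col + 1, n):
                f = ata[r][col] / ata[col][col]
                for c2 in range(col, n):
                    ata[r][c2] -= f * ata[col][c2]
                aty[r] -= f * aty[col]
        coef = [0.0] * n                          # back substitution
        for i in reversed(range(n)):
            s = aty[i] - sum(ata[i][j] * coef[j] for j in range(i + 1, n))
            coef[i] = s / ata[i][i]
        return coef

    def variation(coef, lat, lon):
        """Evaluate the fitted polynomial at a given position."""
        return coef[0] + coef[1] * lat + coef[2] * lon + coef[3] * lat * lon
    ```

    In the receiver the fit would be precomputed offline; only the cheap polynomial evaluation runs on board, which is what makes a small math chip sufficient.
    
    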

  1. Optimizing Synchronization Operations for Remote Memory Communication Systems

    SciTech Connect

    Buntinas, Darius; Saify, Amina; Panda, Dhabaleswar K.; Nieplocha, Jarek; Bob Werner

    2003-04-22

    Synchronization operations, such as fence and locking, are used in many parallel operations accessing shared memory. However, a process which is blocked waiting for a fence operation to complete, or for a lock to be acquired, cannot perform useful computation. It is therefore critical that these operations be implemented as efficiently as possible to reduce the time a process waits idle. These operations also impact the scalability of the overall system. As system sizes get larger, the number of processes potentially requesting a lock increases. In this paper we describe the design and implementation of an optimized operation which combines a global fence operation and a barrier synchronization operation. We also describe our implementation of an optimized lock algorithm. The optimizations have been incorporated into the ARMCI communication library. The combined global fence and barrier operation gives a factor of improvement of up to 9 over the current implementation on a 16-node system, while the optimized lock implementation gives a factor of improvement of up to 1.25. These optimizations allow for more efficient and scalable applications.
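    The idea of folding the global fence into the barrier can be sketched with threads standing in for processes: each participant first completes ("flushes") its own outstanding remote operations, then waits at a single collective barrier, so one synchronization round replaces two. The class and method names are illustrative; ARMCI's actual API and algorithm are not shown in the abstract.

    ```python
    import threading

    class FenceBarrier:
        """Combined global fence + barrier synchronization (sketch).

        Each participant calls fence_and_barrier() with a callable that
        completes its pending remote-memory operations. When the barrier
        releases, every participant knows all fences are done.
        """
        def __init__(self, nprocs):
            self._barrier = threading.Barrier(nprocs)

        def fence_and_barrier(self, flush_outstanding):
            flush_outstanding()   # fence: complete this participant's pending ops
            self._barrier.wait()  # barrier: wait until all participants have fenced
    ```

    A naive implementation would run a full fence round and then a separate barrier round; combining them saves one collective wait per synchronization point, which is where the reported speedup comes from.
    
    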

  2. Analyzing the security of an existing computer system

    NASA Technical Reports Server (NTRS)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  3. Computer simulation of breathing systems for divers

    SciTech Connect

    Sexton, P.G.; Nuckols, M.L.

    1983-02-01

    A powerful new tool for the analysis and design of underwater breathing gas systems is being developed. A versatile computer simulator is described which makes possible the modular "construction" of any conceivable breathing gas system from computer memory-resident components. The analysis of a typical breathing gas system is demonstrated using this simulation technique, and the effects of system modifications on performance of the breathing system are shown. This modeling technique will ultimately serve as the foundation for a proposed breathing system simulator under development by the Navy. The marriage of this computer modeling technique with an interactive graphics system will provide the designer with an efficient, cost-effective tool for the development of new and improved diving systems.

  4. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
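    The fixed-interval, scalable-task rating that the patent describes can be sketched in a few lines: every machine gets the same time budget, and its rating is how far it progresses through an ever-finer sequence of tasks. The function names are illustrative.

    ```python
    import time

    def benchmark(run_task, interval_s):
        """Fixed-time benchmark rating (sketch of the patented scheme).

        run_task(k) performs task k of a scalable set in which resolution
        grows with k. Each computer runs for the same interval_s seconds;
        the rating is the number of tasks completed within the interval,
        so faster machines earn higher ratings on the same workload.
        """
        deadline = time.monotonic() + interval_s
        completed = 0
        while time.monotonic() < deadline:
            run_task(completed)
            completed += 1
        return completed
    ```

    Fixing the time rather than the work avoids the classic pitfall of fixed-size benchmarks, which become trivially fast (and thus uninformative) on large machines.
    
    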

  5. Rule-based approach to operating system selection: RMS vs. UNIX

    SciTech Connect

    Phifer, M.S.; Sadlowe, A.R.; Emrich, M.L.; Gadagkar, H.P.

    1988-10-01

    A rule-based system is under development for choosing computer operating systems. Following a brief historical account, this paper compares and contrasts the essential features of two operating systems, highlighting particular applications. AT&T's UNIX system and Datapoint Corporation's Resource Management System (RMS) are used as illustrative examples. 11 refs., 3 figs.

  6. Adaptive critic design for computer intrusion detection system

    NASA Astrophysics Data System (ADS)

    Novokhodko, Alexander; Wunsch, Donald C., II; Dagli, Cihan H.

    2001-03-01

    This paper summarizes ongoing research. A neural network is used to detect a computer system intrusion based on data from the system audit trail generated by the Solaris Basic Security Module. The data have been provided by Lincoln Labs, MIT. The system alerts the human operator when it encounters suspicious activity logged in the audit trail. To reduce the false alarm rate and accommodate the temporal indefiniteness of the moment of attack, a reinforcement learning approach is chosen to train the network.

  7. Intelligent Command and Control Systems for Satellite Ground Operations

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1999-01-01

    This grant, Intelligent Command and Control Systems for Satellite Ground Operations, funded by NASA Goddard Space Flight Center, has spanned almost a decade. During this time, it has supported a broad range of research addressing the changing needs of NASA operations. It is important to note that many of NASA's evolving needs, for example, use of automation to drastically reduce (e.g., 70%) operations costs, are similar requirements in both government and private sectors. Initially the research addressed the appropriate use of emerging and inexpensive computational technologies, such as X Windows, graphics, and color, together with COTS (commercial-off-the-shelf) hardware and software such as standard Unix workstations to re-engineer satellite operations centers. The first phase of research supported by this grant explored the development of principled design methodologies to make effective use of emerging and inexpensive technologies. The ultimate performance measures for new designs were whether or not they increased system effectiveness while decreasing costs. GT-MOCA (The Georgia Tech Mission Operations Cooperative Associate) and GT-VITA (Georgia Tech Visual and Inspectable Tutor and Assistant), whose latter stages were supported by this research, explored model-based design of collaborative operations teams and the design of intelligent tutoring systems, respectively. Implemented in proof-of-concept form for satellite operations, empirical evaluations of both, using satellite operators for the former and personnel involved in satellite control operations for the latter, demonstrated unequivocally the feasibility and effectiveness of the proposed modeling and design strategy underlying both research efforts. The proof-of-concept implementation of GT-MOCA showed that the methodology could specify software requirements that enabled a human-computer operations team to perform without any significant performance differences from the standard two-person satellite

  8. Computer Programmed Milling Machine Operations. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Leonard, Dennis

    This learning module for a high school metals and manufacturing course is designed to introduce the concept of computer-assisted machining (CAM). Through it, students learn how to set up and put data into the controller to machine a part. They also become familiar with computer-aided manufacturing and learn the advantages of computer numerical…

  9. Industrial information database service by personal computer network 'Saitamaken Industrial Information System'

    NASA Astrophysics Data System (ADS)

    Sugahara, Keiji

    The Saitamaken Industrial Information System provides online database services that do not rely on computers for every operation but use computers, optical disk files, or facsimiles for particular operations as appropriate. It employs the method of providing information through various outputs; that is, image information is sent from optical disk files to facsimiles, while other information is provided from computers to terminals as well as facsimiles. With computers located at the core of the system, integrated operations become possible. The system on the terminal side was developed separately, with functions such as turnkey operation, downloading of statistical information, and the latest menu.

  10. The transportation operations system: A description

    SciTech Connect

    Best, R.E.; Danese, F.L.; Dixon, L.D.; Peterson, R.W. ); Pope, R.B. )

    1990-01-01

    This paper presents a description of the system for transporting radioactive waste that may be deployed to accomplish the assigned system mission, which includes accepting spent nuclear fuel (SNF) and high-level radioactive waste (HLW) from waste generator sites and transporting them to the FWMS destination facilities. The system description presented here includes, in part, irradiated fuel and waste casks, ancillary equipment, truck, rail, and barge transporters, cask and vehicle traffic management organizations, maintenance facilities, and other operations elements. The description is for a fully implemented system, which is not expected to be achieved, however, until several years after initial operations. 6 figs.

  11. Reduced Operator Approximation for Modelling Open Quantum Systems

    NASA Astrophysics Data System (ADS)

    Werpachowska, A.

    2015-06-01

    We present the reduced operator approximation: a simple, physically transparent and computationally efficient method of modelling open quantum systems. It employs the Heisenberg picture of the quantum dynamics, which allows us to focus on the system degrees of freedom in a natural and easy way. We describe different variants of the method, low- and high-order in the system-bath interaction operators, defining them for either general quantum harmonic oscillator baths or specialising them for independent baths with Lorentzian spectral densities. Its wide applicability is demonstrated on the examples of systems coupled to different baths (with varying system-bath interaction strength and bath memory length), and compared with the exact pseudomode and the popular quantum state diffusion approach. The method captures the decoherence of the system interacting with the bath, while conserving the total energy. Our results suggest that quantum coherence effects persist in open quantum systems for much longer times than previously thought.

  12. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  13. Investing in Computer Technology: Criteria and Procedures for System Selection.

    ERIC Educational Resources Information Center

    Hofstetter, Fred T.

    The criteria used by the University of Delaware in selecting the PLATO computer-based educational system are discussed in this document. Consideration was given to support for instructional strategies, requirements of the student learning station, features for instructors and authors of instructional materials, general operational characteristics,…

  14. Computer program determines chemical composition of physical system at equilibrium

    NASA Technical Reports Server (NTRS)

    Kwong, S. S.

    1966-01-01

    A FORTRAN 4 digital computer program calculates the equilibrium composition of complex, multiphase chemical systems. The method is free energy minimization, with the solution of the problem reduced to mathematical operations without concern for the chemistry involved. Certain thermodynamic properties are also determined as byproducts of the main calculations.
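    The free-energy-minimization principle can be illustrated on the smallest possible case: a single ideal reaction A <-> B in one mole of mixture, where the equilibrium extent is the minimizer of the Gibbs free energy. This is a toy sketch with assumed names and a one-dimensional search; the actual program handles complex multiphase systems with many species and constraints.

    ```python
    import math

    def equilibrium_extent(g_a, g_b, tol=1e-9):
        """Equilibrium extent x of A <-> B by free energy minimization.

        For an ideal mixture of 1 mol total, the dimensionless Gibbs
        energy is G(x) = (1-x)(g_a + ln(1-x)) + x(g_b + ln x), where
        g_a, g_b are the species' standard chemical potentials in units
        of RT. G is convex, so a golden-section search finds the minimum
        without any chemistry-specific reasoning.
        """
        def G(x):
            return (1 - x) * (g_a + math.log(1 - x)) + x * (g_b + math.log(x))
        lo, hi = 1e-9, 1 - 1e-9
        phi = (math.sqrt(5) - 1) / 2
        while hi - lo > tol:
            a = hi - phi * (hi - lo)
            b = lo + phi * (hi - lo)
            if G(a) < G(b):
                hi = b
            else:
                lo = a
        return (lo + hi) / 2
    ```

    The minimizer agrees with the chemical equilibrium condition x/(1-x) = exp(g_a - g_b), even though the code never states that condition, which is the sense in which the problem reduces to pure mathematics.
    
    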

  15. Adaptation and optimization of basic operations for an unstructured mesh CFD algorithm for computation on massively parallel accelerators

    NASA Astrophysics Data System (ADS)

    Bogdanov, P. B.; Gorobets, A. V.; Sukov, S. A.

    2013-08-01

    The design of efficient algorithms for large-scale gas dynamics computations with hybrid (heterogeneous) computing systems whose high performance relies on massively parallel accelerators is addressed. A high-order accurate finite volume algorithm with polynomial reconstruction on unstructured hybrid meshes is used to compute compressible gas flows in domains of complex geometry. The basic operations of the algorithm are implemented in detail for massively parallel accelerators, including AMD and NVIDIA graphics processing units (GPUs). Major optimization approaches and a computation transfer technique are covered. The underlying programming tool is the Open Computing Language (OpenCL) standard, which runs on accelerators of various architectures, both existing and emerging.
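A typical "basic operation" in edge-based unstructured-mesh finite volume codes is accumulating edge fluxes into the two adjacent cells; on a massively parallel accelerator this scatter races unless edges are first grouped into conflict-free batches. A minimal greedy edge-coloring sketch of that preprocessing step (illustrative only, not the paper's OpenCL implementation):

```python
def color_edges(edges):
    """Greedily assign each edge (i, j) a color such that no two edges
    sharing a cell receive the same color; all edges of one color can
    then be processed in parallel without write conflicts."""
    colors = []            # colors[k] = color assigned to edges[k]
    used_by_cell = {}      # cell index -> set of colors already touching it
    for i, j in edges:
        taken = used_by_cell.get(i, set()) | used_by_cell.get(j, set())
        c = 0
        while c in taken:  # smallest color free at both endpoints
            c += 1
        colors.append(c)
        used_by_cell.setdefault(i, set()).add(c)
        used_by_cell.setdefault(j, set()).add(c)
    return colors

# Edges of a small mesh: within each color, every cell appears at most once.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(color_edges(edges))
```

On real GPUs, alternatives such as per-thread gather or atomic adds trade off differently; coloring is just one common way to make the scatter deterministic.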

  16. GRTS operations monitor/control system

    NASA Technical Reports Server (NTRS)

    Rohrer, Richard A.

    1994-01-01

    An Operations Monitor/Control System (OMCS) was developed to support remote ground station equipment. The ground station controls a Tracking Data Relay Satellite (TDRS) relocated to provide coverage in the tracking system's zone of exclusion. The relocated satellite significantly improved data recovery for the Gamma Ray Observatory mission. The OMCS implementation, performed in less than 11 months, was mission critical to TDRS drift operations. Extensive use of Commercial Off The Shelf (COTS) hardware and software products contributed to implementation success. The OMCS has been operational for over 9 months with no significant problems. This paper will share our experiences in OMCS development and integration.

  17. Life Lab Computer Support System's Manual.

    ERIC Educational Resources Information Center

    Lippman, Beatrice D.; Walfish, Stephen

    Step-by-step procedures for utilizing the computer support system of Miami-Dade Community College's Life Lab program are described for the following categories: (1) Registration--Student's Lists and Labels, including three separate computer programs for current listings, next semester listings, and grade listings; (2) Competence and Resource…

  18. A System for Cataloging Computer Software

    ERIC Educational Resources Information Center

    Pearson, Karl M., Jr.

    1973-01-01

    As a form of nonbook material, computer software can be cataloged and the collection managed by a library. The System Development Corporation (SDC) Technical Information Center has adapted the Anglo-American Cataloging Rules for descriptive cataloging of computer programs. (11 references) (Author/SJ)

  19. A Handheld Computer System for Classroom Observations.

    ERIC Educational Resources Information Center

    Saudargas, Richard A.; Bunn, R. D.

    1989-01-01

    A handheld computer observation system was developed using Hewlett-Packard HP71B computers for recording and IBM-PCs for storing and analyzing data. Algorithms used in observing interactions between handicapped children and peers or teachers are described. Also described are behavior definitions, observer training and observation procedures, the…

  20. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.
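Redundancy of the kind described is commonly realized as triple modular redundancy (TMR), where the outputs of three identical modules are voted so any single fault is masked. A generic sketch of the voting step (an illustration of the concept, not the report's actual design):

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant module outputs.
    Masks any single faulty module; raises if all three disagree,
    which a real system would flag for fault recovery."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("triple disagreement: fault not maskable")

print(tmr_vote(42, 42, 7))  # the single faulty module is outvoted -> 42
```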

  1. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
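The simulated fault injection idea can be sketched at its simplest: corrupt a stored word with a random single-bit flip and check whether an error-detection mechanism (here a plain parity bit) catches it. This is a toy illustration only; the surveyed tools inject faults at the electrical, logic, and function levels with far richer fault models.

```python
import random

def parity(word):
    """Even parity over a 32-bit word."""
    return bin(word & 0xFFFFFFFF).count("1") % 2

def inject_bit_flip(word, rng):
    """Flip one randomly chosen bit: a single transient fault."""
    return word ^ (1 << rng.randrange(32))

rng = random.Random(0)
trials, detected = 1000, 0
for _ in range(trials):
    word = rng.randrange(2**32)
    stored_parity = parity(word)          # computed before the fault
    faulty = inject_bit_flip(word, rng)   # fault occurs in storage
    if parity(faulty) != stored_parity:   # detection check on readback
        detected += 1
print(detected, "/", trials)
```

A single-bit flip always changes word parity, so this checker detects 100% of these faults; double-bit faults, which parity misses, are why measured fault/error models matter.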

  2. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  3. SNS Target Systems initial operating experience

    NASA Astrophysics Data System (ADS)

    McManamy, T.; Forester, J.

    2009-02-01

    The SNS mercury target started operation with low beam power when commissioned on April 28, 2006. The beam power has been following a planned ramp up since then and has reached 340 kW as of February 2008. The target systems supporting neutron production include the target and mercury loop, the cryogenic and ambient moderator systems, reflector and vessel systems, bulk shielding and shutter systems, utility systems, remote handling systems and the associated instrumentation and controls. Availability for these systems has improved with time and reached 100% for the first 2000 hour neutron production run in fiscal year 2008. An overview of the operating experience and the planning to support continued power increases to 1.4 MW for these systems will be given in this paper.

  4. Method and Apparatus Providing Deception and/or Altered Operation in an Information System Operating System

    DOEpatents

    Cohen, Fred; Rogers, Deanna T.; Neagoe, Vicentiu

    2008-10-14

    A method and/or system and/or apparatus providing deception and/or execution alteration in an information system. In specific embodiments, deceptions and/or protections are provided by intercepting and/or modifying operation of one or more system calls of an operating system.
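The interception idea can be illustrated in user space by wrapping a call and returning deceptive results for selected arguments. This is a toy Python sketch of the concept only; the patent concerns interception and modification of operating-system system calls, and the "honeypot" naming below is invented for illustration.

```python
import os

_real_listdir = os.listdir

def deceptive_listdir(path="."):
    """Wrapped directory listing that hides any 'honeypot' entries
    from the caller -- a deception; all other entries pass through."""
    return [name for name in _real_listdir(path) if "honeypot" not in name]

# Intercept: callers of os.listdir now see the altered view.
os.listdir = deceptive_listdir
print(len(os.listdir(".")), "entries visible")
os.listdir = _real_listdir  # restore the real call
```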

  5. Computational representation of biological systems

    SciTech Connect

    Frazier, Zach; McDermott, Jason E.; Guerquin, Michal; Samudrala, Ram

    2009-04-20

    Integration of large and diverse biological data sets is a daunting problem facing systems biology researchers. Exploring the complex issues of data validation, integration, and representation, we present a systematic approach for the management and analysis of large biological data sets based on data warehouses. Our system has been implemented in the Bioverse, a framework combining diverse protein information from a variety of knowledge areas such as molecular interactions, pathway localization, protein structure, and protein function.

  6. A computer primer: systems implementation.

    PubMed

    Alleyne, J

    1982-07-01

    It is important to recognize the process of implementing systems as a process of change. The hospital, through its steering committee, must manage this process, initiating change instead of responding to it. Only then will the implementation of information systems be an orderly process and the impact of these changes on the hospital's organization clearly controlled. The probability of success in implementing new systems would likely be increased if attention centers on gaining commitment to the project, gaining commitment to any changes necessitated by the new system, and assuring that the project is well defined and plans clearly specified. These issues, if monitored throughout the systems implementation, will lead to early identification of potential problems and probable failures. This greatly increases the chance of success. A probable failure, once identified, can be given specific attention to assure that associated problems are successfully resolved. The cost of this special attention, monitoring and managing systems implementation, is almost always much less than the cost of an eventual implementation failure. PMID:7106436

  7. Automated Operations Development for Advanced Exploration Systems

    NASA Technical Reports Server (NTRS)

    Haddock, Angie T.; Stetson, Howard

    2012-01-01

    Automated space operations command and control software development and its implementation must be an integral part of the vehicle design effort. The software design must encompass autonomous fault detection, isolation, and recovery capabilities and also provide "single button" intelligent functions for the crew. Development, operations, and safety approval experience with the Timeliner system onboard the International Space Station (ISS), which provided autonomous monitoring with response and single-command functionality of payload systems, can be built upon for future automated operations, as the ISS Payload effort was the first and only autonomous command and control system to be in continuous execution (6 years), 24 hours a day, 7 days a week, within a crewed spacecraft environment. Utilizing proven capabilities from the ISS Higher Active Logic (HAL) System, along with the execution component design from within the HAL 9000 Space Operating System, this design paper will detail the initial HAL System software architecture and interfaces as applied to NASA's Habitat Demonstration Unit (HDU) in support of the Advanced Exploration Systems, Autonomous Mission Operations project. The development and implementation of integrated simulators within this development effort will also be detailed and is the first step in verifying the effectiveness of the HAL 9000 Integrated Test-Bed Component [2] designs. This design paper will conclude with a summary of the current development status and future development goals as it pertains to automated command and control for the HDU.

  8. Monitoring and diagnostics systems for nuclear power plant operating regimes

    SciTech Connect

    Abagyan, A.A.; Dmitriev, V.M.; Klebanov, L.A.; Kroshilin, A.E.; Larin, E.P.; Morozov, S.K.

    1988-05-01

    The development of new monitoring and diagnostics systems for Soviet reactors is discussed. An experimental test station is described where industrial operation of new experimental systems can be conducted for purposes of bringing their performance to the level of standard Soviet systems for monitoring reactor operation regimes and equipment resources. The requirements and parameters of the systems are described on a unit-by-unit basis, including the sensor reading monitoring unit, the vibroacoustic monitoring unit, the noise monitoring unit, the accident regime identification unit, and the nonstationary regime monitoring unit. Computer hardware and software requirements are discussed. The results of calculational and experimental research on two complex nonstationary regimes of reactor operation are given. The accident regimes identification unit for the VVER-1000 is analyzed in detail.

  9. Nova power systems: status and operating experience

    SciTech Connect

    Whitham, K.; Merritt, B.T.; Gritton, D.G.; Smart, A.J.; Holloway, R.W.; Oicles, J.A.

    1983-11-28

    This paper describes the pulsed power systems used in these lasers, their status, and operating experience. The pulsed power system for the Nova Laser comprises several distinct technology areas: the large capacitor banks that drive the flashlamps exciting the laser glass are one area, the fast pulsers that drive the Pockels cell shutters are another, and the control system for the pulsed power is a third. This paper discusses the capacitor banks and control systems.

  10. Naturalistic Decision Making For Power System Operators

    SciTech Connect

    Greitzer, Frank L.; Podmore, Robin; Robinson, Marck; Ey, Pamela

    2009-06-23

    Abstract: Motivation -- As indicated by the Blackout of 2003, the North American interconnected electric system is vulnerable to cascading outages and widespread blackouts. Investigations of large scale outages often attribute the causes to the three T’s: Trees, Training and Tools. A systematic approach has been developed to document and understand the mental processes that an expert power system operator uses when making critical decisions. The approach has been developed and refined as part of a capability demonstration of a high-fidelity real-time power system simulator under normal and emergency conditions. To examine naturalistic decision making (NDM) processes, transcripts of operator-to-operator conversations are analyzed to reveal and assess NDM-based performance criteria. Findings/Design -- The results of the study indicate that we can map the Situation Awareness Level of the operators at each point in the scenario. We can also identify clearly what mental models and mental simulations are being performed at different points in the scenario. As a result of this research we expect that we can identify improved training methods and improved analytical and visualization tools for power system operators. Originality/Value -- The research applies, for the first time, the concepts of Recognition Primed Decision Making, Situation Awareness Levels, and Cognitive Task Analysis to training of electric power system operators. Take away message -- The NDM approach provides an ideal framework for systematic training management and mitigation to accelerate learning in team-based training scenarios with high-fidelity power grid simulators.

  11. The expanded role of computers in Space Station Freedom real-time operations

    NASA Technical Reports Server (NTRS)

    Crawford, R. Paul; Cannon, Kathleen V.

    1990-01-01

    The challenges that NASA and its international partners face in their real-time operation of the Space Station Freedom necessitate an increased role on the part of computers. In building the operational concepts concerning the role of the computer, the Space Station program is using lessons learned experience from past programs, knowledge of the needs of future space programs, and technical advances in the computer industry. The computer is expected to contribute most significantly in real-time operations by forming a versatile operating architecture, a responsive operations tool set, and an environment that promotes effective and efficient utilization of Space Station Freedom resources.

  12. An overview of energy efficiency techniques in cluster computing systems

    SciTech Connect

    Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal

    2011-09-10

    Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristics of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey is concluded with a brief discussion and some assumptions about the possible future directions that could be explored to improve the energy efficiency in cluster computing.
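The energy-power relationship behind DPM can be made concrete with the usual CMOS scaling model: dynamic power grows roughly as f³ when supply voltage scales with frequency, so a fixed workload finished at lower frequency costs less dynamic energy at the price of longer runtime. A back-of-envelope sketch with made-up constants (not figures from the survey):

```python
def energy(cycles, f, k_dyn=1.0, p_static=0.2):
    """Energy (arbitrary units) to execute `cycles` at frequency f (GHz),
    assuming dynamic power P_dyn = k_dyn * f**3 (V proportional to f)
    plus a constant static power. Runtime is t = cycles / f."""
    t = cycles / f
    return (k_dyn * f**3 + p_static) * t

for f in (1.0, 2.0):
    print(f, "GHz ->", energy(1e9, f))
```

Note the trade-off this model exposes: with large static power the balance can tip the other way ("race to idle"), which is why DPM policies must weigh both terms rather than always down-clocking.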

  13. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  14. Computer automation for feedback system design

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Mathematical techniques and explanations of various steps used by an automated computer program to design feedback systems are summarized. Special attention was given to refining the automatic evaluation suboptimal loop transmission and the translation of time to frequency domain specifications.

  15. Computer program for optical systems ray tracing

    NASA Technical Reports Server (NTRS)

    Ferguson, T. J.; Konn, H.

    1967-01-01

    Program traces rays of light through optical systems consisting of up to 65 different optical surfaces and computes the aberrations. For design purposes, paraxial tracings with astigmation and third order tracings are provided.

  16. Data systems and computer science programs: Overview

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  17. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  18. Satellite system considerations for computer data transfer

    NASA Technical Reports Server (NTRS)

    Cook, W. L.; Kaul, A. K.

    1975-01-01

    Communications satellites will play a key role in the transmission of computer generated data through nationwide networks. This paper examines critical aspects of satellite system design as they relate to the computer data transfer task. In addition, it discusses the factors influencing the choice of error control technique, modulation scheme, multiple-access mode, and satellite beam configuration based on an evaluation of system requirements for a broad range of application areas including telemetry, terminal dialog, and bulk data transmission.

  19. Interactive orbital proximity operations planning system

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1989-01-01

    An interactive, graphical proximity operations planning system was developed which allows on-site design of efficient, complex, multiburn maneuvers in the dynamic multispacecraft environment about the space station. Maneuvering takes place in, as well as out of, the orbital plane. The difficulty in planning such missions results from the unusual and counterintuitive character of relative orbital motion trajectories and complex operational constraints, which are both time varying and highly dependent on the mission scenario. This difficulty is greatly overcome by visualizing the relative trajectories and the relative constraints in an easily interpretable, graphical format, which provides the operator with immediate feedback on design actions. The display shows a perspective bird's-eye view of the space station and co-orbiting spacecraft on the background of the station's orbital plane. The operator has control over two modes of operation: (1) a viewing system mode, which enables him or her to explore the spatial situation about the space station and thus choose and frame in on areas of interest; and (2) a trajectory design mode, which allows the interactive editing of a series of way-points and maneuvering burns to obtain a trajectory which complies with all operational constraints. Through a graphical interactive process, the operator will continue to modify the trajectory design until all operational constraints are met. The effectiveness of this display format in complex trajectory design is presently being evaluated in an ongoing experimental program.

  20. Interactive orbital proximity operations planning system instruction and training guide

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1994-01-01

    This guide instructs users in the operation of a Proximity Operations Planning System. This system uses an interactive graphical method for planning fuel-efficient rendezvous trajectories in the multi-spacecraft environment of the space station and allows the operator to compose a multi-burn transfer trajectory between the initial chaser and target trajectories. The available task time (window) of the mission is predetermined and the maneuver is subject to various operational constraints, such as departure, arrival, spatial, plume impingement, and en route passage constraints. The maneuvers are described in terms of the relative motion experienced in a space station centered coordinate system. Both in-orbital plane as well as out-of-orbital plane maneuvering is considered. A number of visual optimization aids are used for assisting the operator in reaching fuel-efficient solutions. These optimization aids are based on the Primer Vector theory. The visual feedback of trajectory shapes, operational constraints, and optimization functions, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool is an example of operator-assisted optimization of nonlinear cost functions.
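The "unusual and counterintuitive" relative orbital motion described in these two entries is commonly modeled with the Clohessy-Wiltshire equations for a station-centered rotating frame. A minimal in-plane propagator using the standard closed-form solution (a generic sketch, not the planning system's code):

```python
import math

def cw_state(x0, y0, vx0, vy0, n, t):
    """Closed-form in-plane Clohessy-Wiltshire solution.
    x: radial offset, y: along-track offset (meters),
    n: mean motion of the station's circular orbit (rad/s)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0 \
        + ((4 * s - 3 * n * t) / n) * vy0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    return x, y, vx, vy

# Counterintuitive but true: a pure along-track offset is an equilibrium,
# while any radial offset x0 produces a secular along-track drift.
n = 2 * math.pi / 5400.0  # roughly a 90-minute orbit
print(cw_state(0.0, 100.0, 0.0, 0.0, n, 3600.0))
```

Behavior like this, e.g. thrusting "toward" a target and drifting away instead, is exactly why graphical feedback on trajectory shapes helps the operator.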

  1. Determining the optimal operator allocation in SME's food manufacturing company using computer simulation and data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Rahman, Asmahanim Ab

    2014-09-01

    In a labor intensive manufacturing system, optimal operator allocation is one of the most important decisions in determining the efficiency of the system. In this paper, ten operator allocation alternatives are identified using the computer simulation ARENA. Two inputs (average wait time and average cycle time) and two outputs (average operator utilization and total packet value) are generated for each alternative. Four Data Envelopment Analysis (DEA) models; CCR, BCC, MCDEA and AHP/DEA are used to determine the optimal operator allocation at one of the SME food manufacturing companies in Selangor. The results of all four DEA models showed that the optimal operator allocation is six operators at peeling process, three operators at washing and slicing process, three operators at frying process and two operators at packaging process.

  2. Job Specifications for the Computer Productions Operations and Skill-Related Data Processing Job Cluster

    ERIC Educational Resources Information Center

    Johnson, Mildred Fitzgerald

    1978-01-01

    Diagrams the levels of specialization in electronic data processing and provides a job description for the first level, computer production operations job cluster. Data collected from computer operations managers, incumbent operators, and interviews and observations were analyzed for the job skills and educational and employment qualifications…

  3. Sustaining Operational Efficiency of a CHP System

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2010-01-04

    This chapter provides background information on why sustaining operations of combined cooling, heating and power systems is important, provides the algorithms for CHP system performance monitoring and commissioning verification, and concludes with a discussion on how these algorithms can be deployed.

  4. Current and Future Flight Operating Systems

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan

    2007-01-01

    This viewgraph presentation reviews the current real time operating system (RTOS) type in use with current flight systems. A new RTOS model is described, i.e. the process model. Included is a review of the challenges of migrating from the classic RTOS to the Process Model type.

  5. Operational Considerations when Designing New Ground Systems

    NASA Technical Reports Server (NTRS)

    Walyus, Keith; Barbahenn, George; Crabb, William; Miebach, Manfred; Pataro, Peter

    2000-01-01

    The Hubble Space Telescope (HST) launched in April 1990 with a nominal 15-year mission. Since then, the HST mission life has been extended to 2010. As is true for all NASA missions, HST is being asked to decrease its operational costs for the remainder of its mission life. Various techniques are being incorporated for cost reduction, with one of the core means being the design of a new and more efficient ground system for HST operations. This new ground system, "Vision 2000", will reduce operational and maintenance costs and also provide the HST Project with added flexibility to react to future changes. Vision 2000 began supporting HST operations in January of 1999 and will support the mission for the remainder of the mission life. Upgrading a satellite's ground system is a popular approach for reducing costs, but it is also inherently risky. Validating a new ground system can be a severe distraction to a flight team while operating a satellite. Mission data collection and health and safety requirements are rarely, if ever, relaxed during this validation period, forcing flight teams to undertake an additional task while operating the satellite. Additionally, flight teams must usually undergo extensive training to effectively utilize the new system, and this training, too, usually occurs on top of nominal satellite operations. While operating the spacecraft, the Flight Team typically assists in the design, validation, and verification of a new ground system. This is a distraction and strain on the Flight Team, but the benefits of using the Flight Team in all phases of ground system development far outweigh the negative aspects. Finally, beyond the cost of the new system itself, integrating it into the facility alongside the current control center system incurs resources and costs not normally taken into account in the design phase of the new system.
In addition to the standard issues faced by a Project when upgrading its ground system, the

  6. The Launch Systems Operations Cost Model

    NASA Technical Reports Server (NTRS)

    Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)

    2001-01-01

    One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT) as expressed by a set of response surface model equations gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor the LSOCM team is part of an Agency wide effort to combine resources with other cost and operations professionals to

  7. Recursive dynamics for flexible multibody systems using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1990-01-01

    Due to their structural flexibility, spacecraft and space manipulators are multibody systems with complex dynamics and possess a large number of degrees of freedom. Here the spatial operator algebra methodology is used to develop a new dynamics formulation and spatially recursive algorithms for such flexible multibody systems. A key feature of the formulation is that the operator description of the flexible system dynamics is identical in form to the corresponding operator description of the dynamics of rigid multibody systems. A significant advantage of this unifying approach is that it allows ideas and techniques for rigid multibody systems to be easily applied to flexible multibody systems. The algorithms use standard finite-element and assumed modes models for the individual body deformation. A Newton-Euler Operator Factorization of the mass matrix of the multibody system is first developed. It forms the basis for recursive algorithms such as for the inverse dynamics, the computation of the mass matrix, and the composite body forward dynamics for the system. Subsequently, an alternative Innovations Operator Factorization of the mass matrix, each of whose factors is invertible, is developed. It leads to an operator expression for the inverse of the mass matrix, and forms the basis for the recursive articulated body forward dynamics algorithm for the flexible multibody system. For simplicity, most of the development here focuses on serial chain multibody systems. However, extensions of the algorithms to general topology flexible multibody systems are described. While the computational cost of the algorithms depends on factors such as the topology and the amount of flexibility in the multibody system, in general, it appears that in contrast to the rigid multibody case, the articulated body forward dynamics algorithm is the more efficient algorithm for flexible multibody systems containing even a small number of flexible bodies. The variety of algorithms described
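
    The spatially recursive structure can be illustrated on a deliberately simple toy model (my own construction, not the paper's spatial-operator formulation): a serial chain of point masses on 1-D prismatic joints, where an outward acceleration sweep followed by an inward force sweep computes the inverse dynamics in O(n), reproducing the dense mass-matrix product that the operator factorization makes explicit.

```python
import numpy as np

def inverse_dynamics(m, qdd):
    """O(n) Newton-Euler recursion for a chain of point masses on
    collinear 1-D prismatic joints (toy model)."""
    n = len(m)
    # outward sweep (base to tip): propagate joint accelerations
    a = np.cumsum(qdd)
    # inward sweep (tip to base): accumulate interbody forces
    f = np.zeros(n)
    running = 0.0
    for i in range(n - 1, -1, -1):
        running += m[i] * a[i]
        f[i] = running
    return f  # generalized joint forces tau

def mass_matrix(m):
    """Dense mass matrix of the same toy chain, for comparison."""
    n = len(m)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = m[max(i, j):].sum()
    return M

m = np.array([2.0, 1.0, 0.5])
qdd = np.array([0.3, -0.2, 0.7])
tau = inverse_dynamics(m, qdd)   # equals mass_matrix(m) @ qdd
```

    The recursion never forms the mass matrix, which is the point of the factorization-based algorithms in the paper.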

  8. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom-designed graphical user interfaces (GUIs) displayed side by side. First, we compute several eye-gaze metrics, both traditional eye-movement metrics and newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
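
    A minimal sketch of the "cells" idea (the grid size, screen resolution, and metric names here are my assumptions, not the paper's): bin raw gaze samples into a coarse grid and derive per-cell dwell fractions plus a dispersion entropy that could feed a workload classifier.

```python
import numpy as np

def cell_metrics(gx, gy, width=1920, height=1080, nx=4, ny=3):
    """Bin gaze samples (gx, gy) in pixels into an nx-by-ny grid of
    cells; return per-cell dwell fractions and a gaze-dispersion
    entropy (bits). Grid size is an illustrative choice."""
    cx = np.clip((gx / width * nx).astype(int), 0, nx - 1)
    cy = np.clip((gy / height * ny).astype(int), 0, ny - 1)
    cell = cy * nx + cx                        # flat cell index per sample
    counts = np.bincount(cell, minlength=nx * ny)
    dwell = counts / counts.sum()              # fraction of samples per cell
    p = dwell[dwell > 0]
    entropy = -(p * np.log2(p)).sum()          # 0 = fixated, high = dispersed
    return dwell, entropy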

  9. Computer-Aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1996-05-03

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off-The-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provides back-up capabilities for the Plutonium Processing Facility.

  10. Smart Operations in Distributed Energy Resources System

    NASA Astrophysics Data System (ADS)

    Wei, Li; Jie, Shu; Zhang-XianYong; Qing, Zhou

    Smart grid capabilities are being proposed to help solve the challenges of system operation: trade-offs between energy and environmental needs will be constantly negotiated, while a reliable supply of electricity demands ever greater assurance as threats of disruption rise. This paper mainly explores models for a distributed energy resources system (DG, storage, and load), and also reviews the evolving nature of electricity markets to deal with this complexity, including a shift of emphasis toward market signals that affect power system control. Smart grid capabilities will also impact reliable operations, and cyber security must be addressed through a culture change that influences all system design, implementation, and maintenance. Lastly, the paper explores significant questions for further research and the need for a simulation environment that supports such investigation and informs deployments to mitigate operational issues as they arise.

  11. The Initial Development of a Computerized Operator Support System

    SciTech Connect

    Roger Lew; Ronald L Boring; Thomas A Ulrich; Ken Thomas

    2014-08-01

    A computerized operator support system (COSS) is a collection of resilient software technologies to assist operators in monitoring overall nuclear power plant performance and making timely, informed decisions on appropriate control actions for the projected plant condition. The COSS provides rapid assessments, computations, and recommendations to reduce workload and augment operator judgment and decision-making during fast-moving, complex events. A prototype COSS for a chemical volume control system at a nuclear power plant has been developed in order to demonstrate the concept and provide a test bed for further research. The development process identified four underlying elements necessary for the prototype, which consist of a digital alarm system, computer-based procedures, piping and instrumentation diagram system representations, and a recommender module for mitigation actions. An operational prototype resides at the Idaho National Laboratory (INL) using the U.S. Department of Energy’s (DOE) Light Water Reactor Sustainability (LWRS) Human Systems Simulation Laboratory (HSSL). Several human-machine interface (HMI) considerations are identified and incorporated in the prototype during this initial round of development.

  12. Computational approaches for systems metabolomics.

    PubMed

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.

  13. Computer analyses for the design, operation and safety of new isotope production reactors: A technology status review

    SciTech Connect

    Wulff, W.

    1990-01-01

    A review is presented on the currently available technologies for nuclear reactor analyses by computer. The important distinction is made between traditional computer calculation and advanced computer simulation. Simulation needs are defined to support the design, operation, maintenance and safety of isotope production reactors. Existing methods of computer analyses are categorized in accordance with the type of computer involved in their execution: micro, mini, mainframe and supercomputers. Both general and special-purpose computers are discussed. Major computer codes are described, with regard to their use in analyzing isotope production reactors. It has been determined in this review that conventional systems codes (TRAC, RELAP5, RETRAN, etc.) cannot meet four essential conditions for viable reactor simulation: simulation fidelity, on-line interactive operation with convenient graphics, high simulation speed, and low cost. These conditions can be met by special-purpose computers (such as the AD100 of ADI), which are specifically designed for high-speed simulation of complex systems. The greatest shortcoming of existing systems codes (TRAC, RELAP5) is their mismatch between very high computational effort and low simulation fidelity. The drift flux formulation (HIPA) is the viable alternative to the complicated two-fluid model. No existing computer code has the capability of accommodating all important processes in the core geometry of isotope production reactors. Experiments are needed (heat transfer measurements) to provide necessary correlations. It is important for the nuclear community, in government, industry, and universities alike, to begin to take advantage of modern simulation technologies and equipment. 41 refs.

  14. Expert system support for HST operations

    NASA Technical Reports Server (NTRS)

    Cruse, Bryant; Wende, Charles

    1987-01-01

    An expert system is being developed to support vehicle anomaly diagnosis for the Hubble Space Telescope (HST). Following a study of safemode entry analyses, a prototype system was developed which reads engineering telemetry formats, and when a safemode event is detected, extracts telemetry from the downlink and writes it into a knowledge base for more detailed analyses. The prototype then summarizes vehicle events (limits exceeded, specific failures). This prototype, the Telemetry Analysis Logic for Operations Support (TALOS) uses the Lockheed Expert System (LES) shell, and includes over 1600 facts, 230 rules, and 27 goals. Although considered a prototype, it is already an operationally useful system. The history leading into the TALOS prototype will be discussed, an overview of the present TALOS system will be presented, and the role of the TALOS system in contingency planning will be delineated.

  15. Computational systems biology for aging research.

    PubMed

    Mc Auley, Mark T; Mooney, Kathleen M

    2015-01-01

    Computational modelling is a key component of systems biology and integrates with the other techniques discussed thus far in this book by utilizing a myriad of data that are being generated to quantitatively represent and simulate biological systems. This chapter will describe what computational modelling involves; the rationale for using it, and the appropriateness of modelling for investigating the aging process. How a model is assembled and the different theoretical frameworks that can be used to build a model are also discussed. In addition, the chapter will describe several models which demonstrate the effectiveness of each computational approach for investigating the constituents of a healthy aging trajectory. Specifically, a number of models will be showcased which focus on the complex age-related disorders associated with unhealthy aging. To conclude, we discuss the future applications of computational systems modelling to aging research.

  16. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    SciTech Connect

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from Radiation, Oxygen Deficiency and Electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other Accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnection technology for Safety-Critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of Guidelines regarding required performance for Accelerator Safety Systems and a Handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  17. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of a long-order ANC system using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms are associated with disadvantages such as large block delay, quantization error due to the computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
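
    The baseline that the partitioned frequency-domain variants accelerate is the time-domain FXLMS update itself. The sketch below (toy primary and secondary paths of my own choosing, with a perfect secondary-path estimate assumed) shows the defining step: the reference signal is filtered through the secondary-path estimate before it drives the LMS weight update.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([0.9, 0.4, 0.2])      # primary path: noise source -> error mic (toy)
S = np.array([0.7, 0.3])           # secondary path: speaker -> error mic (toy)
S_hat = S.copy()                   # assume a perfect secondary-path estimate
L, mu, N = 16, 0.01, 20000
w = np.zeros(L)                    # adaptive control filter
x = rng.standard_normal(N)         # reference noise
d = np.convolve(x, P)[:N]          # disturbance at the error mic
xf = np.convolve(x, S_hat)[:N]     # filtered reference x' (the "filtered-x")
xbuf = np.zeros(L); xfbuf = np.zeros(L); sbuf = np.zeros(len(S))
e = np.zeros(N)
for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[n]
    y = w @ xbuf                               # anti-noise output
    sbuf = np.roll(sbuf, 1); sbuf[0] = y
    e[n] = d[n] - S @ sbuf                     # residual at the error mic
    w += mu * e[n] * xfbuf                     # FXLMS weight update
```

    The frequency-domain partitioned-block algorithms in the paper compute the same convolutions and update blockwise with FFTs, trading this per-sample loop for lower complexity at long filter lengths.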

  18. Laboratory Information Systems Management and Operations.

    PubMed

    Cucoranu, Ioan C

    2015-06-01

    The main mission of a laboratory information system (LIS) is to manage workflow and deliver accurate results for clinical management. Successful selection and implementation of an anatomic pathology LIS is not complete unless it is complemented by specialized information technology support and maintenance. LIS is required to remain continuously operational with minimal or no downtime and the LIS team has to ensure that all operations are compliant with the mandated rules and regulations.

  19. Laboratory Information Systems Management and Operations.

    PubMed

    Cucoranu, Ioan C

    2015-06-01

    The main mission of a laboratory information system (LIS) is to manage workflow and deliver accurate results for clinical management. Successful selection and implementation of an anatomic pathology LIS is not complete unless it is complemented by specialized information technology support and maintenance. LIS is required to remain continuously operational with minimal or no downtime and the LIS team has to ensure that all operations are compliant with the mandated rules and regulations. PMID:26065790

  20. Laboratory Information Systems Management and Operations.

    PubMed

    Cucoranu, Ioan C

    2016-03-01

    The main mission of a laboratory information system (LIS) is to manage workflow and deliver accurate results for clinical management. Successful selection and implementation of an anatomic pathology LIS is not complete unless it is complemented by specialized information technology support and maintenance. LIS is required to remain continuously operational with minimal or no downtime and the LIS team has to ensure that all operations are compliant with the mandated rules and regulations. PMID:26851664

  1. Commercial solar water heating systems operational test

    NASA Astrophysics Data System (ADS)

    Guinn, G. R.; Novell, B. J.; Hummer, L. L.

    The performance of six commercially available solar water heaters is evaluated. The six systems are installed side-by-side on a typical roof structure and provide two examples each of silicone oil, antifreeze, and drain-back freeze protection. Each system is instrumented with Btu and kWh meters to assess performance under an imposed load profile. The systems, the instrumentation, operational results acquired over a 19-month interval, and performance over a 4-month interval are described.

  2. HP-UX: implementation of UNIX on the HP 9000 series 500 computer systems

    SciTech Connect

    Wang, S.W.Y.; Lindberg, J.B.; Hetrick, M.V.; Connor, M.L.

    1984-03-01

    An implementation of the UNIX operating system kernel has been layered on top of an existing operating system kernel (SUN) for the HP 9000 series 500 computer systems. The mapping of UNIX functional requirements onto the capabilities of the underlying operating system is discussed in the article, along with the implementation of UNIX commands and libraries. These pieces of UNIX, along with other extensions added by HP, make up the HP-UX operating system.

  3. Space transportation system biomedical operations support study

    NASA Technical Reports Server (NTRS)

    White, S. C.

    1983-01-01

    The shift of the Space Transportation System (STS) from flight tests of the orbiter vehicle to the preparation and flight of payloads is discussed. Part of this change is the transition of the medical and life sciences aspects of STS flight operations to reflect the new state. The medical operations, the life sciences flight experiment support requirements, and the intramural research program expected at KSC during the operational flight period of the STS and a future space station are analyzed. The adequacy of available facilities, plans, and resources is compared against these future needs; revisions and/or alternatives are proposed where appropriate.

  4. Interactive orbital proximity operations planning system

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1988-01-01

    An interactive graphical proximity operations planning system was developed, which allows on-site design of efficient, complex, multiburn maneuvers in a dynamic multispacecraft environment. Maneuvering takes place in and out of the orbital plane. The difficulty in planning such missions results from the unusual and counterintuitive character of orbital dynamics and complex time-varying operational constraints. This difficulty is greatly overcome by visualizing the relative trajectories and the relevant constraints in an easily interpretable graphical format, which provides the operator with immediate feedback on design actions. The display shows a perspective bird's-eye view of a Space Station and co-orbiting spacecraft on the background of the Station's orbital plane. The operator has control over the two modes of operation: a viewing system mode, which enables the exploration of the spatial situation about the Space Station and thus the ability to choose and zoom in on areas of interest; and a trajectory design mode, which allows the interactive editing of a series of way points and maneuvering burns to obtain a trajectory that complies with all operational constraints. A first version of this display was completed. An experimental program is planned in which operators will carry out a series of design missions which vary in complexity and constraints.

  5. Developing operating principles for systems change.

    PubMed

    Behrens, Teresa R; Foster-Fishman, Pennie G

    2007-06-01

    Based on an analysis of the articles in this special issue, the authors propose five operating principles for systems change work. These principles are: clarify the purpose of the systems change; identify whether the change is to an existing system or creates a new system; conceptualize the work as systems change from the beginning; use an eclectic approach; and be open to opportunities that emerge while also undertaking formal analysis to identify leverage points. The authors argue that the time is now ripe to develop such principles and encourage community change agents to engage in a dialogue to explore, revise, eliminate or expand on these principles. PMID:17431758

  6. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  7. [Controlling systems for operating room managers].

    PubMed

    Schüpfer, G; Bauer, M; Scherzinger, B; Schleppers, A

    2005-08-01

    Management means developing, shaping and controlling of complex, productive and social systems. Therefore, operating room managers also need to develop basic skills in financial and managerial accounting as a basis for operative and strategic controlling which is an essential part of their work. A good measurement system should include financial and strategic concepts for market position, innovation performance, productivity, attractiveness, liquidity/cash flow and profitability. Since hospitals need to implement a strategy to reach their business objectives, the performance measurement system has to be individually adapted to the strategy of the hospital. In this respect the navigation system developed by Gälweiler is compared to the "balanced score card" system of Kaplan and Norton. PMID:15959742

  8. Operational development of small plant growth systems

    NASA Technical Reports Server (NTRS)

    Scheld, H. W.; Magnuson, J. W.; Sauer, R. L.

    1986-01-01

    The results of a study undertaken on the first phase of an empirical effort in the development of small plant growth chambers for production of salad-type vegetables on the space shuttle or space station are discussed. The overall effort is visualized as providing the underpinning of practical experience in the handling of plant systems in space which will provide major support for future efforts in planning, design, and construction of plant-based (phytomechanical) systems for support of human habitation in space. The assumptions underlying the effort hold that large-scale phytomechanical habitability support systems for future space stations must evolve from the simple to the complex. The highly complex final systems will be developed from the accumulated experience and data gathered from repetitive tests and trials of fragments or subsystems of the whole in an operational mode. These developing system components will, meanwhile, serve a useful operational function in providing psychological support and diversion for the crews.

  9. Computational studies of polymeric systems

    NASA Astrophysics Data System (ADS)

    Carrillo, Jan-Michael Y.

    Polymeric systems involving polyelectrolytes at surfaces and interfaces, semiflexible polyelectrolytes and biopolymers in solution, and complex polymeric systems with applications in nanotechnology were modeled using coarse-grained molecular dynamics simulation. In the area of polyelectrolytes at surfaces and interfaces, the phenomenon of polyelectrolyte adsorption at an oppositely charged surface was investigated. Simulations found that the short-range van der Waals interaction was a major factor in determining the morphology and thickness of the adsorbed layer. Hydrophobic polyelectrolytes adsorbed on hydrophobic surfaces tend to be the most effective in forming multilayers because short-range attraction enhances the adsorption process. Adsorbed polyelectrolytes could move freely along the surface, in contrast to polyelectrolyte brushes. The morphologies of hydrophobic polyelectrolyte brushes were investigated, and simulations found that brushes had different morphologies depending on the strength of the short-range monomer-monomer attraction, the electrostatic interaction, and counterion condensation. Planar polyelectrolyte brushes formed: (1) vertically oriented cylindrical aggregates, (2) maze-like aggregate structures, or (3) a thin polymeric layer covering the substrate. Spherical polyelectrolyte brushes could adopt any of the previous morphologies or a micelle-like conformation with a dense core and charged corona. In the area of biopolymers and semiflexible polyelectrolytes in solution, simulations demonstrated that the bending rigidity of these polymers is scale-dependent. The bond-bond correlation function describing a chain's orientational memory could be approximated by a sum of two exponential functions, manifesting the existence of two characteristic length scales. The existence of the two length scales challenges the current practice of describing chain-stretching experiments using a single length scale. In the field of nanotechnology

  10. Realistic modeling of clinical laboratory operation by computer simulation.

    PubMed

    Vogt, W; Braun, S L; Hanssmann, F; Liebl, F; Berchtold, G; Blaschke, H; Eckert, M; Hoffmann, G E; Klose, S

    1994-06-01

    An important objective of laboratory management is to adjust the laboratory's capability to the needs of patients' care as well as economy. The consequences of management may be changes in laboratory organization, equipment, or personnel planning. At present only one's individual experience can be used for making such decisions. We have investigated whether the techniques of operations research could be transferred to a clinical laboratory and whether an adequate simulation model of the laboratory could be realized. First we listed and documented the system design and the process flow for each single laboratory request. These input data were linked by the simulation model (programming language SIMSCRIPT II.5). The output data (turnaround times, utilization rates, and analysis of queue length) were validated by comparison with the current performance data obtained by tracking specimen flow. Congruence of the data was excellent (within +/- 4%). In planning experiments we could study the consequences of changes in order entry, staffing, and equipment on turnaround times, utilization, and queue lengths. We conclude that simulation can be a valuable tool for better management decisions.
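
    The flavor of such a simulation can be conveyed with a far smaller sketch than the SIMSCRIPT II.5 model (all parameters below are illustrative, not the paper's): a single-analyzer FIFO queue with random arrival and service times, from which turnaround time and utilization fall out directly.

```python
import numpy as np

# Toy single-analyzer laboratory queue: specimens arrive at random,
# wait FIFO for one analyzer, and we report turnaround time (TAT)
# and analyzer utilization. Parameters are assumed for the sketch.
rng = np.random.default_rng(7)
n = 50000
interarrival = rng.exponential(10.0, n)   # mean 10 min between specimens
service = rng.exponential(7.0, n)         # mean 7 min per analysis
arrival = np.cumsum(interarrival)

finish = np.empty(n)
prev_finish = 0.0
for i in range(n):
    start = max(arrival[i], prev_finish)  # wait if the analyzer is busy
    prev_finish = finish[i] = start + service[i]

turnaround = finish - arrival             # per-specimen TAT
utilization = service.sum() / finish[-1]  # fraction of time analyzer is busy
mean_tat = turnaround.mean()
```

    Planning experiments like those in the paper amount to rerunning such a model with changed staffing, equipment, or order-entry parameters and comparing the resulting turnaround and utilization figures.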

  11. Lax-Nijenhuis operators for integrable systems

    NASA Astrophysics Data System (ADS)

    Kosmann-Schwarzbach, Y.; Magri, F.

    1996-12-01

    The relationship between Lax and bi-Hamiltonian formulations of dynamical systems on finite- or infinite-dimensional phase spaces is investigated. The Lax-Nijenhuis equation is introduced and it is shown that every operator that satisfies that equation satisfies the Lenard recursion relations, while the converse holds for an operator with a simple spectrum. Explicit higher-order Hamiltonian structures for the Toda system, a second Hamiltonian structure of the Euler equation for a rigid body in n-dimensional space, and the quadratic Adler-Gelfand-Dickey structure for the KdV hierarchy are derived using the Lax-Nijenhuis equation.
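
    The conservation property underlying such Lax formulations can be checked numerically. The sketch below uses the standard open Toda lattice in Flaschka variables (a textbook construction, not code from the paper): the flow dL/dt = [B, L] preserves the eigenvalues of the Lax matrix L, which we verify by integrating with RK4.

```python
import numpy as np

def toda_rhs(a, b):
    """Open Toda lattice in Flaschka variables:
    da_i/dt = a_i (b_{i+1} - b_i),  db_i/dt = 2 (a_i^2 - a_{i-1}^2)."""
    da = a * (b[1:] - b[:-1])
    db = np.zeros_like(b)
    db[:-1] += 2 * a**2
    db[1:] -= 2 * a**2
    return da, db

def step_rk4(a, b, dt):
    k1 = toda_rhs(a, b)
    k2 = toda_rhs(a + dt / 2 * k1[0], b + dt / 2 * k1[1])
    k3 = toda_rhs(a + dt / 2 * k2[0], b + dt / 2 * k2[1])
    k4 = toda_rhs(a + dt * k3[0], b + dt * k3[1])
    a2 = a + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    b2 = b + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return a2, b2

def lax_matrix(a, b):
    """Symmetric tridiagonal Lax matrix: b on the diagonal, a off-diagonal."""
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

a = np.array([0.8, 0.5, 0.3])
b = np.array([0.1, -0.4, 0.2, 0.6])
ev0 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
for _ in range(2000):                     # integrate to t = 10
    a, b = step_rk4(a, b, 0.005)
ev1 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
```

    The eigenvalues of L are exactly the conserved quantities that the Lenard recursion relations generate in the bi-Hamiltonian picture.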

  12. A development environment for operational concepts and systems engineering analysis.

    SciTech Connect

    Raybourn, Elaine Marie; Senglaub, Michael E.

    2004-03-01

    The work reported in this document involves a development effort to provide combat commanders and systems engineers with a capability to explore and optimize system concepts that include operational concepts as part of the design effort. An infrastructure and analytic framework has been designed and partially developed that meets a gap in systems engineering design for combat related complex systems. The system consists of three major components: The first component consists of a design environment that permits the combat commander to perform 'what-if' types of analyses in which parts of a course of action (COA) can be automated by generic system constructs. The second component consists of suites of optimization tools designed to integrate into the analytical architecture to explore the massive design space of an integrated design and operational space. These optimization tools have been selected for their utility in requirements development and operational concept development. The third component involves the design of a modeling paradigm for the complex system that takes advantage of functional definitions and the coupled state space representations, generic measures of effectiveness and performance, and a number of modeling constructs to maximize the efficiency of computer simulations. The system architecture has been developed to allow for a future extension in which the operational concept development aspects can be performed in a co-evolutionary process to ensure the most robust designs may be gleaned from the design space(s).

  13. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built up for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
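
    A basic version of the template-matching core (plain normalized cross-correlation, not the paper's fast algorithm; the image sizes and data are synthetic) locates a patch in a frame by the correlation peak, which gives the displacement in whole pixels; sub-pixel accuracy would come from interpolating around that peak.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation: return the top-left
    position of the best-matching window and its NCC score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum())
    best, best_pos = -2.0, (0, 0)
    H, W = image.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = image[i:i + th, j:j + tw]
            w = win - win.mean()
            denom = np.sqrt((w**2).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(3)
frame = rng.random((60, 60))          # synthetic microscope frame
tmpl = frame[20:32, 25:37].copy()     # patch planted at (20, 25)
pos, score = ncc_match(frame, tmpl)
```

    Tracking the peak position across successive frames yields the displacement trace of the moving inverter part.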

  14. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Schelter, W.F.

    1990-02-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.

  15. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Cook, G.

    1987-08-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. NIL (a New Implementation of Lisp) provides the environment for MACSYMA's development and use on the DEC VAX11 under VMS.

  16. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    O'Dell, J.E.

    1987-07-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix.

  17. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Harten, L.

    1988-01-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix.

  18. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Palka, D.M.

    1987-11-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix.

  19. DOE-MACSYMA. Computer Algebra System

    SciTech Connect

    Lancaster, D.; Golan, D.

    1990-11-01

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.

  20. Computer support for cooperative tasks in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Moore, Mike

    1994-01-01

    Traditionally, spacecraft management has been performed by fixed teams of operators in Mission Operations Centers. The team cooperatively: (1) ensures that the payload(s) on the spacecraft perform their work; and (2) maintains the health and safety of the spacecraft by commanding and monitoring the spacecraft's subsystems. As mission loads grow, task demands will increase and can overload the operators. This paper describes the traditional spacecraft management environment and introduces a new concept in which groupware is used to create a Virtual Mission Operations Center. Groupware tools will make better use of available resources through increased automation and dynamic sharing of personnel among missions.